metric tekton_pipelines_controller_pipelinerun_duration_seconds suddenly stops reporting #7902
Comments
For how long do we need to run the controller to reproduce? Is this with the latest package? Is it possible that the metrics endpoint goes down?
Based on our data, this is always reproducible when looking at a controller lifetime of 12h. The controller pod, however, has no container restarts within the business day. We do restart the pipelinerun controller once a day so that the 10h full reconciliation loop falls outside business hours. I could verify that other metrics have a normal timeline, so the metrics endpoint should not be going down. We are using the latest operator version at the moment (exact versions are part of the original post), and I could not easily test whether this also happens with the absolutely latest released image version (pipelinerun controller v0.58.0).
@gerrnot I am not able to reproduce this in our cluster.
@khrm This occurs only for specific metrics (e.g. tekton_pipelines_controller_pipelinerun_duration_seconds).
PS: the metric that you provided is also continuous on our system.
@gerrnot Are you using OpenShift? Or plain Kubernetes?
I believe I have identified the problem. I need to confirm.
@khrm Thanks a lot for investigating this. We use vanilla Kubernetes (on-prem Kubespray cluster).
Expected Behavior
We expect the metric `tekton_pipelines_controller_pipelinerun_duration_seconds` to always (i.e. on every single scrape request) report a value for every single PipelineRun, as long as that PipelineRun exists in k8s, when using the `lastvalue` setting (see the provided config at the end of this post).
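For reference, a minimal sketch of what enabling the `lastvalue` duration type looks like when setting the standard `config-observability` keys directly on the controller (in our setup the equivalent values are applied via the operator; see the config referenced at the end of this post):

```sh
# Sketch: switch the pipelinerun/taskrun duration metrics to "lastvalue" gauges.
# Assumes the standard config-observability ConfigMap in the tekton-pipelines
# namespace; with the operator installed, set these via the TektonConfig instead.
kubectl -n tekton-pipelines patch configmap/config-observability --type merge -p '{
  "data": {
    "metrics.pipelinerun.level": "pipelinerun",
    "metrics.pipelinerun.duration-type": "lastvalue",
    "metrics.taskrun.level": "taskrun",
    "metrics.taskrun.duration-type": "lastvalue"
  }
}'
```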
Actual Behavior
While present in the initial scrapes, the values disappear over time. For example, a PipelineRun that was started in the morning yields metrics for several hours, but after a certain point in time it yields no more metrics (verified by checking the /metrics endpoint of the pipelines-controller, default port 9090).
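This can be checked by hand with a port-forward to the controller's metrics port; the deployment name and port below are the defaults and may differ per install:

```sh
# Forward the controller's metrics port (default 9090) and grep the raw scrape.
kubectl -n tekton-pipelines port-forward deploy/tekton-pipelines-controller 9090:9090 &
sleep 2
curl -s http://localhost:9090/metrics \
  | grep '^tekton_pipelines_controller_pipelinerun_duration_seconds'
```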
A picture says more than a thousand words:
When the metrics are visualized in Prometheus (picture above), you would believe that during the roughly 30-minute gap in the middle there was no PipelineRun in the cluster. This is not true! There were plenty; they are just no longer contained in the metrics output.
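The same series can also be counted through the Prometheus HTTP API; `PROM_URL` here is a placeholder for your Prometheus endpoint:

```sh
# Count how many pipelinerun duration series Prometheus currently sees.
curl -s "$PROM_URL/api/v1/query" \
  --data-urlencode 'query=count(tekton_pipelines_controller_pipelinerun_duration_seconds)' \
  | jq '.data.result'
```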
Steps to Reproduce the Problem
1. Configure the pipelines controller metrics with the `lastvalue` duration setting (like in the example provided at the bottom of this post).
2. (Optional) Scrape the controller's /metrics endpoint with Prometheus.

If you followed step 2 you essentially just need to look at the graph from Prometheus.
If you find a gap like in the picture above, the issue is reproduced. (Clarification: it only looks like a gap; it is not an actual gap, since an actual gap would mean the same time series continues later, which it does not. Those are new PipelineRuns and therefore new time series!)
Otherwise (not using Prometheus), the procedure is: for each PipelineRun in k8s, check whether it is also part of the latest scrape; a sketch of this check follows below.
If an instance is found that exists in k8s but is not in the metrics output, the problem is reproduced.
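A rough sketch of that manual check, assuming the port-forward from above and that each series carries a `pipelinerun="<name>"` label:

```sh
# Compare PipelineRuns known to the cluster against the latest raw scrape.
curl -s http://localhost:9090/metrics \
  | grep '^tekton_pipelines_controller_pipelinerun_duration_seconds' > /tmp/scrape.txt
for pr in $(kubectl get pipelineruns -A -o jsonpath='{.items[*].metadata.name}'); do
  grep -q "pipelinerun=\"$pr\"" /tmp/scrape.txt || echo "missing from metrics: $pr"
done
```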
Additional Info
Kubernetes version:

Output of `kubectl version`:

Tekton Pipeline version:

Output of `tkn version` or `kubectl get pods -n tekton-pipelines -l app=tekton-pipelines-controller -o=jsonpath='{.items[0].metadata.labels.version}'`:
We used the following Tekton operator settings on the `pipeline`: