cluster-controller restarts when trying cluster turndown. #2752
Comments
Thank you for reporting this @mcclane. @cliffcolvin can you please take a look and get this triaged?
I also tried this with version 2.0.2 and had the same experience, but no error on the cluster-controller.
@cliffcolvin do you want to move this to the proper Kubecost Issues repository?
@mmclane thanks for this detail. We will try to reproduce.
I got valid JSON, a big blob of it.
I don't know if this is related, but when I try to create a turndown schedule via the UI, it fails. I just tried to create one: it's currently 10:13, and I selected a start time of 10:15 with an end time of 11:00. I get no error when I click Apply, but the schedule never shows up. If I run `kubectl get tds`, I see the state is ScheduleFailed. If I describe it, however, it says it successfully scheduled the turndown; it doesn't say why it failed. If I delete it and create a new one via the following manifest file, it is scheduled successfully.
The schedule shows up in the UI, but with the wrong configuration: the UI says it will Repeat: Daily, but you can see in the manifest file that it was not set that way. If I describe the job, it shows the following and says Repeat: none.
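For reference, a TurndownSchedule manifest along these lines is what the cluster-turndown project documents; the original manifest from this report was not captured, so the name and timestamps below are placeholders:

```yaml
apiVersion: kubecost.k8s.io/v1alpha1
kind: TurndownSchedule
metadata:
  name: example-schedule        # placeholder name
  finalizers:
    - "finalizer.kubecost.k8s.io"
spec:
  start: 2024-05-01T22:00:00Z   # placeholder start time (RFC 3339, UTC)
  end: 2024-05-02T06:00:00Z     # placeholder end time
  repeat: none                  # the schedule in this report was not set to repeat
```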
Note the schedule no longer shows in the UI once it starts.
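The state checks described above correspond to commands like the following (`example-schedule` is a placeholder for the actual resource name):

```shell
# List TurndownSchedule resources and their reported status ("tds" is the short name)
kubectl get tds
# Inspect a specific schedule for its events and state details
kubectl describe tds example-schedule
```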
@AjayTripathy @jessegoodier @kwombach12 any updates on this issue or kubecost/cluster-turndown#77?
We are actively investigating this issue! We are trying to understand why we seem to be getting a timeout.... |
I'm having a similar issue with "kubecost-cluster-controller" Kubecost version 2.2.2 (multicluster)
Describe the bug
Yesterday I installed Kubecost so that I could try the new cluster turndown feature. I have it installed and set up, including the cluster-controller. I then created a TurndownSchedule (TDS), and it was scheduled successfully. When the start time hit, I watched it successfully create the cluster-turndown node group and saw the new node get added, but that is as far as it gets. Looking at the logs on the cluster-controller, I see errors and restarts.
I am running this on an EKS cluster.
To Reproduce
Steps to reproduce the behavior:
Create the cluster-controller-service-key secret as described in the documentation
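Assuming the standard Kubecost setup, the secret is created from a cloud service-account key file along these lines; the file name and namespace below are assumptions taken from the general docs, not from this report:

```shell
# Create the service-key secret that the cluster-controller reads
# (namespace and key file name are assumptions)
kubectl create secret generic cluster-controller-service-key \
  -n kubecost --from-file=service-key.json
```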
Expected behavior
The cluster should be turned down
Screenshots
See error above.
Which version of OpenCost are you using?
You can find the version from the container's startup logging or from the bottom of the page in the UI.
Helm Chart v2.2.4
Additional context