What happens if the CPU quota on the k8s side is less than 1 core? #54
I'm currently exploring exactly this case: a container with a fractional CPU limit (`limits.cpu: 500m`, i.e. a 0.5-core quota).
My current observations go as follows: with this setup, the default value of GOMAXPROCS is still the number of the host's CPU cores.
Since the CFS quota is accounted across all threads of the app, in the worst case the app burns through its entire quota in the first part of the period and is then throttled for the remainder (with a 0.5-core quota, up to 50ms of throttling per 100ms period). With that, in the case where the CPU limit is lower than the number of the host's CPU cores, the app is likely to get throttled.
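To illustrate the observation, here is a minimal sketch (the 8-core host and the 500m limit are assumptions for the example) of what stock Go reports inside such a container:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Inside a container with limits.cpu: 500m on a hypothetical 8-core
	// host, stock Go sizes itself by the host, not by the quota.
	fmt.Println("NumCPU:", runtime.NumCPU())          // 8: the host's cores
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0)) // 8: defaults to NumCPU
}
```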
Yes, from what I've seen, fractional quotas don't play well with Go, as it attempts to consume an integer number of cores (even if the application itself is idle, the GC will eventually consume available CPU up to GOMAXPROCS). This can and does lead to throttling. With a 0.5 quota and GOMAXPROCS=1, the process could end up consuming all of the available quota in the first half of the slice, so it's possible to get throttled for the remainder of the cfs_period (default 100ms, hence the 50ms of throttling @narqo mentioned above).
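To make the arithmetic concrete, a small sketch using the numbers above (default 100ms cfs_period, 0.5-core quota, one busy thread):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed values from the comment above: default CFS period of 100ms,
	// a 0.5-CPU quota (limits.cpu: 500m), and GOMAXPROCS=1.
	period := 100 * time.Millisecond
	quotaCores := 0.5 // CPU limit in cores
	threads := 1.0    // busy threads, bounded by GOMAXPROCS

	// A busy thread burns quota at one core-second per second, so the
	// quota runs out after period * quota / threads of wall-clock time.
	runnable := time.Duration(float64(period) * quotaCores / threads)
	throttled := period - runnable

	fmt.Printf("runnable %v, then throttled %v, per %v period\n",
		runnable, throttled, period)
	// Output: runnable 50ms, then throttled 50ms, per 100ms period
}
```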
Okay, thanks for the confirmation. We ended up providing at least one core per replica to avoid the above situation.
@timo-klarshift When you say CPU quota, which one do you mean in k8s: `resources.requests.cpu` or `resources.limits.cpu`?
I expect this was about `resources.limits.cpu`, since that is what sets the CFS quota.
Gotcha, I tested it myself and it should be `resources.limits.cpu`; only the limit is translated into a CFS quota.
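For reference, the CFS numbers that a `limits.cpu` setting translates into can be read from the cgroup filesystem. A sketch assuming cgroup v1 paths (cgroup v2 exposes the same pair in a single `cpu.max` file instead):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readInt reads a single integer from a cgroup v1 control file.
func readInt(path string) (int64, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return 0, err
	}
	return strconv.ParseInt(strings.TrimSpace(string(b)), 10, 64)
}

func main() {
	quota, err1 := readInt("/sys/fs/cgroup/cpu/cpu.cfs_quota_us")
	period, err2 := readInt("/sys/fs/cgroup/cpu/cpu.cfs_period_us")
	if err1 != nil || err2 != nil {
		fmt.Println("not running under a cgroup v1 CPU controller")
		return
	}
	if quota <= 0 {
		fmt.Println("no CPU limit set (quota is -1)")
		return
	}
	// limits.cpu: 500m -> quota/period = 50000/100000 = 0.5 cores.
	fmt.Printf("CFS limit: %.2f cores\n", float64(quota)/float64(period))
}
```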
Am I right that, if you have a service with a CPU limit of less than 1000m (i.e. less than 1 core), the Go process will still think it has one full core available, try to utilize more than its limit, and eventually get throttled? So my understanding is that using limits of less than one core is a very unideal situation for the Go scheduler to work efficiently:
a.) 4 replicas with a 500m CPU limit each
b.) 2 replicas with a 1000m CPU limit each
In total, both cases use the same number of cores (2), but case b.) would be more efficient because the Go scheduler knows how much it can utilize?
Sorry that I created a bug ticket for my simple question. But I think, if my assumptions are correct, it would be good to make this clear in the README.
Thanks for the awesome library 👍
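For reference, the documented way to use the library is a blank import, which adjusts GOMAXPROCS at startup to match the container's CPU quota (floored at 1, so a 500m limit yields GOMAXPROCS=1):

```go
package main

import (
	"fmt"
	"runtime"

	// Imported for its side effect: on init it sets GOMAXPROCS to match
	// the container's CFS quota instead of the host's core count.
	_ "go.uber.org/automaxprocs"
)

func main() {
	// With limits.cpu: 500m this prints 1, not the host's core count.
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
}
```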