feat: add kep md #845
base: master
Conversation
Signed-off-by: LY-today <[email protected]>
Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected, please follow our release note process to remove it. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: LY-today. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Hi @LY-today. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
✅ Deploy Preview for kubernetes-sigs-scheduler-plugins canceled.
@googs1025 KEP
Signed-off-by: LY-today <[email protected]>
@googs1025 Can you help move this PR forward?
Thanks for the invite, I'll handle this over the weekend :)
Thank you for your time.
@@ -0,0 +1,114 @@
# Node Resource Fit plus Scheduling
This seems very similar to an existing plugin. Could you explain the difference, or integrate the two?
FYI: https://github.com/kubernetes-sigs/scheduler-plugins/tree/master/kep/48-node-resources-allocatable-scoring
@googs1025 The older plugin can only apply a single policy to all resources, which is not suitable for complex resource scenarios such as AI workloads.
@googs1025 In an AI cluster, the goal is to pack GPU tasks onto as few GPU machines as possible while spreading CPU tasks across CPU machines. However, the old policy does not support using different strategies for these two kinds of resources.
As I mentioned, this seems very similar to the existing nodeResourcesAllocatable, and I don't think it warrants a new plugin. If possible, could it be integrated into the original plugin? 🤔
@googs1025 So you agree with the design of the NodeResourcesFitPlus strategy, but would prefer to implement it by modifying the original nodeResourcesAllocatable plugin?
@googs1025 Do I understand correctly?
It is not recommended to include tables and figures as screenshots.
ok
I'm going to make adjustments
## Summary
The native Kubernetes NodeResourcesFit plugin can only apply one type of strategy, such as MostRequestedPriority or LeastRequestedPriority, to all resources. In industrial practice this design does not fit some scenarios. For example, in AI scenarios, workloads that request GPUs prefer to fill an entire GPU machine first to prevent GPU fragmentation, while workloads that request only CPU and memory should preferably be spread across non-GPU machines, to prevent CPU and memory on GPU machines from being exhausted, which would leave GPU-requesting workloads Pending due to insufficient non-GPU resources.
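To make the idea concrete, here is a minimal sketch assuming a per-resource scoring strategy; the type names, fields, and functions below are illustrative assumptions, not the actual plugin implementation. Each resource name is bound to its own strategy (for example MostAllocated for nvidia.com/gpu, LeastAllocated for cpu and memory), and the node score is the weighted sum of the per-resource scores.

```go
// Illustrative sketch only: per-resource scoring for a hypothetical
// NodeResourcesFitPlus-style plugin. All names and fields are assumptions.
package sketch

// StrategyType selects how a single resource contributes to a node's score.
type StrategyType string

const (
	MostAllocated  StrategyType = "MostAllocated"  // prefer packing (e.g. GPU)
	LeastAllocated StrategyType = "LeastAllocated" // prefer spreading (e.g. CPU/MEM)
)

// ResourcePolicy binds one resource name to a scoring strategy and a weight.
type ResourcePolicy struct {
	Name   string // e.g. "nvidia.com/gpu", "cpu", "memory"
	Type   StrategyType
	Weight int64
}

// scoreResource returns a 0-100 score for one resource on one node, based on
// how much of the allocatable amount would be used after placing the pod.
func scoreResource(requested, allocatable int64, t StrategyType) int64 {
	if allocatable <= 0 {
		return 0
	}
	used := requested * 100 / allocatable
	if used > 100 {
		used = 100
	}
	switch t {
	case MostAllocated:
		return used // higher utilization scores higher: pack GPU pods together
	case LeastAllocated:
		return 100 - used // lower utilization scores higher: spread CPU/MEM pods
	default:
		return 0
	}
}

// nodeScore combines the weighted per-resource scores, so GPU requests can be
// packed while CPU and memory requests are spread, within a single plugin.
func nodeScore(policies []ResourcePolicy, requested, allocatable map[string]int64) int64 {
	var total, weightSum int64
	for _, p := range policies {
		total += p.Weight * scoreResource(requested[p.Name], allocatable[p.Name], p.Type)
		weightSum += p.Weight
	}
	if weightSum == 0 {
		return 0
	}
	return total / weightSum
}
```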
Since GPU nodes are a scarce resource, shouldn't we filter out the GPU nodes directly rather than just lowering their score? In addition, IIUC, GPU nodes (and other device nodes) get labels (from gpu-operator or NFD) and are generally filtered that way.
Affinity rules or nodeSelector require labeling nodes in advance, which is costly for cluster maintainers. The advantage of this strategy is that it avoids that maintenance work.
I think quite the opposite: device-specific nodes should already carry feature labels (e.g. nvidia.com/gpu.xxx). 🤔
Understood, labels can indeed be used to distinguish machine types, and the same can be achieved with affinity rules. But my point is that this process has a real cost in industrial practice: 100 kinds of heterogeneous resources would require maintaining 100 sets of labels.
@googs1025 If you think that maintenance cost is not something Kubernetes needs to consider, then indeed the second extension strategy does not need to be incorporated.
@googs1025 Is there a clear conclusion on the ScarceResourceAvoidance strategy? Will it be accepted or not?
This is not for me to decide and can be left to other maintainers to suggest.
@googs1025 Thank you for your feedback. Can you help bring in other reviewers?
@googs1025 Hello, do you have a clear plan for these two plugins?
@swatisehgal @zwpaper Please take a look.
What would you like to be added?
What is your proposal:
The native Kubernetes NodeResourcesFit plugin can only apply one type of strategy, such as MostRequestedPriority or LeastRequestedPriority, to all resources. In industrial practice this design does not fit some scenarios. For example, in AI scenarios, workloads that request GPUs prefer to fill an entire GPU machine first to prevent GPU fragmentation, while workloads that request only CPU and memory should preferably be spread across non-GPU machines, to prevent CPU and memory on GPU machines from being exhausted, which would leave GPU-requesting workloads Pending due to insufficient non-GPU resources. It is therefore hoped that the scheduler can be extended with both strategies to address this business need.
Why is this needed:
See the description above.
Is there a suggested solution, if so, please add it:
plugin-one
config:
config description:
node score:
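The config, config description, and node-score details for plugin-one are not spelled out in text above. As a rough illustration only, and assuming plugin-one refers to the NodeResourcesFitPlus strategy discussed in this thread, its plugin arguments could be declared as a Go API type along the following lines; the type name, fields, and example values are assumptions, not the actual proposal.

```go
// Illustrative sketch only: hypothetical plugin arguments for a
// NodeResourcesFitPlus-style plugin. The type name and fields are assumptions.
package sketch

// NodeResourcesFitPlusArgs holds one scoring strategy per resource name,
// instead of a single strategy applied to every resource.
type NodeResourcesFitPlusArgs struct {
	// Resources maps a resource name (e.g. "nvidia.com/gpu", "cpu", "memory")
	// to the strategy and weight used when scoring that resource.
	Resources map[string]ResourceStrategy
}

// ResourceStrategy describes how a single resource is scored.
type ResourceStrategy struct {
	Type   string // "MostAllocated" or "LeastAllocated"
	Weight int64
}

// exampleArgs reflects the AI-cluster scenario from the discussion:
// pack GPU requests, spread CPU and memory requests.
var exampleArgs = NodeResourcesFitPlusArgs{
	Resources: map[string]ResourceStrategy{
		"nvidia.com/gpu": {Type: "MostAllocated", Weight: 2},
		"cpu":            {Type: "LeastAllocated", Weight: 1},
		"memory":         {Type: "LeastAllocated", Weight: 1},
	},
}
```

In practice such arguments would be supplied through the scheduler configuration's pluginConfig section, with the node score computed per resource as sketched earlier.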
plugin-two
config:
config description:
node score:
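Likewise, the details for plugin-two are not included in text form. Assuming it refers to the ScarceResourceAvoidance strategy mentioned in this thread, whose stated intent is to keep pods that do not request a scarce resource off the nodes that hold it, the scoring could be sketched roughly as follows; the names and logic are assumptions rather than the actual design.

```go
// Illustrative sketch only: a hypothetical ScarceResourceAvoidance-style score.
// Nodes that expose a scarce resource (e.g. GPUs) are scored lower for pods
// that do not request that resource, so CPU/MEM-only pods drift away from GPU
// machines without relying on node labels or affinity rules.
package sketch

// scarceScore returns a 0-100 score for a node. scarceResources is the list of
// resource names configured as scarce (an assumed config field), podRequests
// holds the pod's resource requests, and nodeAllocatable is what the node offers.
func scarceScore(scarceResources []string, podRequests, nodeAllocatable map[string]int64) int64 {
	const maxScore = 100
	for _, r := range scarceResources {
		// The pod does not ask for the scarce resource, but the node has it:
		// give this node the lowest score so the pod prefers other nodes.
		if podRequests[r] == 0 && nodeAllocatable[r] > 0 {
			return 0
		}
	}
	// Either the node has no scarce resources, or the pod actually needs them.
	return maxScore
}
```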
Why is this needed?
It’s introduced above