fix: add kube-vip toleration to match node taint in harvester-cloud-provider #202
Conversation
If I need to bump the version, does that mean I need to open another PR to the release branch?
Can we skip the tarball? I believe the content is the same; only the timestamp is different.
@bk201 Actually, the tarball is different from the previous version. You can extract the tar to compare; the only difference is the DaemonSet toleration.
Thanks. If the tarball content is different, should we bump its version?
@bk201 Yeah, after checking, I think we need to bump its version. OK, it uses chart-releaser-action to release.
BTW, it seems I need to open another PR to the release branch to bump the harvester-cloud-provider version to 0.2.3, right?
LGTM, thanks!
Let's try overriding kube-vip's toleration in the cloud provider's values file:
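A minimal sketch of that override, assuming the kube-vip dependency is declared as a subchart named `kube-vip` that exposes a `tolerations` value; the taint keys shown are the usual RKE2/K3s control-plane and etcd taints, not necessarily the exact ones from this PR:

```yaml
# harvester-cloud-provider values.yaml (parent chart) -- a sketch, not the merged diff.
# Values nested under the subchart's name override that subchart's defaults,
# per Helm's "Subcharts and Global Values" behavior.
kube-vip:
  tolerations:
    - key: node-role.kubernetes.io/control-plane   # assumed taint key
      operator: Exists
      effect: NoSchedule
    - key: node-role.kubernetes.io/etcd            # assumed taint key
      operator: Exists
      effect: NoExecute
```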
This way, I guess we don't need to bump the dependency chart.
…rovider Signed-off-by: Jack Yu <[email protected]>
@bk201 After re-running the template with only the parent's values.yaml modified, it looks good, and the test case passes.
Override reference: Helm docs, "Subcharts and Global Values"
I was just planning to fix this issue, but #202 and #203 are already ready to merge. Thanks @Yu-Jack @bk201. Let's merge those 2 PRs, then I will rebase rancher/charts#3316 with v0.2.3.
LGTM, thanks.
Description
After setting up a guest cluster with the configuration mentioned in the Test Plan (control plane + etcd separated from the workers into a different pool), creating a LoadBalancer-type service with the Harvester cloud provider annotation leaves the service's EXTERNAL-IP stuck in pending status. Per the discussion in harvester/harvester#4678, the reason is that the kube-vip toleration doesn't match any node taint.
Solution
Add one more toleration to the kube-vip DaemonSet; a sketch of the idea follows below. It should work in all cases listed in the Test Plan. The bump PR is #203.
Issue
harvester/harvester#4678
Test Plan
Common setup
Use https://charts.harvesterhci.io to install the chart and select version 0.2.3.
Different Guest Cluster Pool
After installing harvester-cloud-provider, the DaemonSet kube-system/kube-vip should run in the three cases above in the guest cluster. Then create a LoadBalancer-type service (see the sketch after this section) to check whether an EXTERNAL-IP is assigned.
P.S. Checking the helloworldserver-lb service in the guest cluster, both the GUI and the command line work.
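For reference, a service along these lines can be used for the check; the name matches the test service above, but the annotation key, IPAM mode, labels, and ports are assumptions rather than the exact manifest from this test plan:

```yaml
# helloworldserver-lb.yaml -- a sketch of the test service.
apiVersion: v1
kind: Service
metadata:
  name: helloworldserver-lb
  annotations:
    cloudprovider.harvesterhci.io/ipam: dhcp   # assumed Harvester cloud provider IPAM annotation
spec:
  type: LoadBalancer
  selector:
    app: helloworldserver   # assumed pod label
  ports:
    - port: 8080
      targetPort: 8080
```

After applying it, `kubectl get service helloworldserver-lb` should show an assigned address in the EXTERNAL-IP column instead of `<pending>`.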