
test/e2e: Run in parallel #76

Merged: 3 commits merged into kubevirt:main from the parallel branch on Jan 9, 2025
Conversation

@RamLavi (Contributor) commented Nov 21, 2024

What this PR does / why we need it:
This PR runs the e2e tests in parallel, since the tests should be independent of each other.
The number of parallel test processes depends on the number of CPUs available on the machine running the CI.
Running the suite in parallel should reduce CI time, and may also expose bugs caused by hidden dependencies between tests.

e2e run example: E2E_TEST_ARGS='--focus="should keep ips after live migration"' make test-e2e
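For reference, a short sketch of common invocations once this lands (the --skip variant is an assumption based on the flags discussed later in this thread, not something exercised in this PR):

```sh
# Run the whole suite; the Makefile's ginkgo invocation picks the parallel process
# count from the CPUs available on the machine.
make test-e2e

# Focus on a subset of specs (the example from this PR's description).
E2E_TEST_ARGS='--focus="should keep ips after live migration"' make test-e2e

# Assumption: ginkgo's --skip should pass through E2E_TEST_ARGS the same way.
E2E_TEST_ARGS='--skip="should keep ips after live migration"' make test-e2e
```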

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #

Special notes for your reviewer:

Release note:

NONE

@kubevirt-bot kubevirt-bot added the dco-signoff: yes label (Indicates the PR's author has DCO signed all their commits.) on Nov 21, 2024
@RamLavi RamLavi changed the title from "spike: Run e2e tests in paralel" to "[DNM] spike: Run e2e tests in paralel" on Nov 21, 2024
@kubevirt-bot kubevirt-bot added the needs-rebase label (Indicates a PR cannot be merged because it has merge conflicts with HEAD.) on Dec 17, 2024
@kubevirt-bot kubevirt-bot added the size/XXL label and removed the needs-rebase and size/S labels on Dec 18, 2024
@RamLavi (Contributor, Author) commented Dec 19, 2024

@maiqueb looking at the e2e test failing, it seems like prior to (and also after) migration the primary UDN virt-launcher pod is not getting the appropriate network-status (logs):

2024-12-18T09:08:33.5592345Z   RAM B4 Migration vmi alpine-908896f1e virtLauncherPod virt-launcher-alpine-908896f1e-h2nb9 Annotations map[descheduler.alpha.kubernetes.io/request-evict-only: k8s.ovn.org/pod-networks:{"default":{"ip_addresses":["10.244.1.15/24","fd00:10:244:2::f/64"],"mac_address":"0a:58:0a:f4:01:0f","gateway_ips":["10.244.1.1","fd00:10:244:2::1"],"routes":[{"dest":"10.244.0.0/16","nextHop":"10.244.1.1"},{"dest":"10.96.0.0/16","nextHop":"10.244.1.1"},{"dest":"169.254.0.5/32","nextHop":"10.244.1.1"},{"dest":"100.64.0.0/16","nextHop":"10.244.1.1"},{"dest":"fd00:10:244::/48","nextHop":"fd00:10:244:2::1"},{"dest":"fd00:10:96::/112","nextHop":"fd00:10:244:2::1"},{"dest":"fd69::5/128","nextHop":"fd00:10:244:2::1"},{"dest":"fd98::/64","nextHop":"fd00:10:244:2::1"}],"role":"primary"},"testns-30ce1396d/l2-net-attach-def":{"ip_addresses":["10.100.200.3/24"],"mac_address":"0a:58:0a:64:c8:03","gateway_ips":["10.100.200.1"],"routes":[{"dest":"10.96.0.0/16","nextHop":"10.100.200.1"},{"dest":"100.65.0.0/16","nextHop":"10.100.200.1"}],"ip_address":"10.100.200.3/24","gateway_ip":"10.100.200.1","tunnel_id":4,"role":"primary"}} k8s.ovn.org/primary-udn-ipamclaim:alpine-908896f1e.pod k8s.v1.cni.cncf.io/network-status:[{
2024-12-18T09:08:33.5597745Z       "name": "ovn-kubernetes",
2024-12-18T09:08:33.5598207Z       "interface": "eth0",
2024-12-18T09:08:33.5598567Z       "ips": [
2024-12-18T09:08:33.5598959Z           "10.244.1.15",
2024-12-18T09:08:33.5599436Z           "fd00:10:244:2::f"
2024-12-18T09:08:33.5599819Z       ],
2024-12-18T09:08:33.5600197Z       "mac": "0a:58:0a:f4:01:0f",
2024-12-18T09:08:33.5600463Z       "default": true,
2024-12-18T09:08:33.5600668Z       "dns": {}
2024-12-18T09:08:33.5603199Z   }] kubectl.kubernetes.io/default-container:compute kubevirt.io/domain:alpine-908896f1e kubevirt.io/migrationTransportUnix:true kubevirt.io/vm-generation:1 post.hook.backup.velero.io/command:["/usr/bin/virt-freezer", "--unfreeze", "--name", "alpine-908896f1e", "--namespace", "testns-30ce1396d"] post.hook.backup.velero.io/container:compute pre.hook.backup.velero.io/command:["/usr/bin/virt-freezer", "--freeze", "--name", "alpine-908896f1e", "--namespace", "testns-30ce1396d"] pre.hook.backup.velero.io/container:compute]
2024-12-18T09:08:33.5606414Z 

Can we attribute this issue to the OVNK race you talked about? Or is it a new issue we need to check?

@maiqueb (Collaborator) commented Dec 19, 2024

> @maiqueb looking at the e2e test failing, it seems like prior to (and also after) migration the primary UDN virt-launcher pod is not getting the appropriate network-status (logs): [log quoted in the previous comment, snipped]
>
> Can we attribute this issue to the OVNK race you talked about? Or is it a new issue we need to check?

It could be.

You'll need to check the logs of the ovnkube-control-plane and confirm that, for that pod, it failed to find the primary UDN network.
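For reference, a hypothetical sketch of that check from a shell; the namespace, deployment name, and search string are assumptions and depend on how ovn-kubernetes is deployed in the CI cluster:

```sh
# Hypothetical: dump recent ovnkube control-plane logs and search for the affected pod,
# looking for which network was chosen as the namespace's active (primary) network.
kubectl -n ovn-kubernetes logs deployment/ovnkube-control-plane --all-containers --since=2h \
  | grep -i 'alpine-908896f1e'
```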

@RamLavi (Contributor, Author) commented Dec 19, 2024

> [quoted log snipped]
>
> Can we attribute this issue to the OVNK race you talked about? Or is it a new issue we need to check?
>
> It could be.
>
> You'll need to check the logs of the ovnkube-control-plane and confirm that, for that pod, it failed to find the primary UDN network.

I don't see any pod failures, but are you sure we should be looking for a "pod fail"? It did find a primary network, it was just the wrong one. In any case, this is very helpful; I've also asked on the race bug for more information.

@maiqueb (Collaborator) commented Dec 19, 2024

> [earlier quoted log and exchange snipped]
>
> I don't see any pod failures, but are you sure we should be looking for a "pod fail"? It did find a primary network, it was just the wrong one. In any case, this is very helpful; I've also asked on the race bug for more information.

So that's what you need to find: when it returns the active network for the namespace, which one did it find ?

It should have found the primary UDN, not the default one. Can you print the relevant log snippet ?

@RamLavi (Contributor, Author) commented Dec 31, 2024

After some digging in OVNK, I realized the network is being reconciled. The reason is that the network name is the same across tests. In order to separate the tests and run them in parallel, I changed the network names to be unique.
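Illustrative only: one way to make the OVN network name unique per test is to suffix it with a random string in the NAD config, roughly like the sketch below (all field values are placeholders, not copied from the test suite):

```sh
# Hypothetical sketch: create a layer2 primary-UDN NAD whose OVN network name
# ("name" in the CNI config) carries a per-test random suffix, so specs running
# in parallel never reconcile the same network.
suffix=$(head -c4 /dev/urandom | od -An -tx1 | tr -d ' \n')
kubectl create namespace "testns-${suffix}"
cat <<EOF | kubectl apply -f -
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: l2-net-attach-def
  namespace: testns-${suffix}
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "name": "l2-net-${suffix}",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "subnets": "10.100.200.0/24",
      "netAttachDefName": "testns-${suffix}/l2-net-attach-def",
      "role": "primary"
    }
EOF
```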

@RamLavi RamLavi changed the title from "[DNM] spike: Run e2e tests in paralel" to "test/e2e: Run in paralel" on Dec 31, 2024
@RamLavi RamLavi changed the title from "test/e2e: Run in paralel" to "test/e2e: Run in parallel" on Dec 31, 2024
@RamLavi RamLavi force-pushed the parallel branch 2 times, most recently from 8e794c9 to 127686b, on January 6, 2025 07:01
@maiqueb (Collaborator) left a comment


Seems OK to me, but I wonder if / how we can still indicate which tests we want to run.

Makefile Outdated
 export KUBECONFIG=${KUBECONFIG} && \
 export PATH=$$(pwd)/.output/ovn-kubernetes/bin:$${PATH} && \
 export REPORT_PATH=$$(pwd)/.output/ && \
 cd test/e2e && \
-go test -test.v --ginkgo.v --test.timeout=${E2E_TEST_TIMEOUT} ${E2E_TEST_ARGS} --ginkgo.junit-report=$${REPORT_PATH}/test-e2e.junit.xml
+$(GINKGO) -v --timeout=${E2E_TEST_TIMEOUT} --junit-report=$${REPORT_PATH}/test-e2e.junit.xml
@maiqueb (Collaborator) commented:

how will the user specify a test subset to run ?

@RamLavi (Contributor, Author) commented:

You're right. I re-introduced the E2E_TEST_ARGS flag.
DONE

@RamLavi (Contributor, Author) commented Jan 8, 2025

Change: re-introduced the E2E_TEST_ARGS flag.

Moving to use the ginkgo tool instead of go test.
This is done in order to use ginkgo's parallel parameter, which is not
supported by the "go test" tool.

Signed-off-by: Ram Lavi <[email protected]>
The e2e tests currently randomize the namespaces, but
the network name inside the NADs is the same, creating
a dependency between the tests.
In order to run the tests in parallel (a change that will
be introduced in a future commit), randomize the network
names as well.

Signed-off-by: Ram Lavi <[email protected]>
@RamLavi (Contributor, Author) commented Jan 8, 2025

Change: Rebase

@maiqueb (Collaborator) left a comment

@RamLavi did you check this does the trick locally ?

No way it shouldn't afaict, but still, please paste an example of how to invoke it both in the appropriate commit msg and in the PR's description.

Thank you !

@RamLavi (Contributor, Author) commented Jan 8, 2025

> @RamLavi did you check this does the trick locally ?

I ran it on zeus33. Good enough, right?

> No way it shouldn't afaict, but still, please paste an example of how to invoke it both in the appropriate commit msg and in the PR's description.

The usage is the same: make test-e2e. Should I mention it?

The way it works is that when the parallel flag is set, it auto-detects how many threads it can open and runs with that. On zeus33 that's 10+ threads, so the CI takes ~3m. On the GitHub CI, it's more like ~10m.
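For context, this matches ginkgo v2's parallel flags: -p lets ginkgo pick the process count from the available CPUs, while --procs pins it explicitly. A sketch of the two modes (the exact flags wired into this PR's Makefile are not shown verbatim here):

```sh
# -p: ginkgo chooses the number of parallel processes based on the CPU count.
ginkgo -p -v --timeout=1h ./test/e2e

# --procs: pin the number of parallel processes explicitly.
ginkgo --procs=10 -v --timeout=1h ./test/e2e
```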

@maiqueb (Collaborator) commented Jan 8, 2025

> I ran it on zeus33. Good enough, right?
>
> The usage is the same: make test-e2e. Should I mention it?

I mean regarding the flag. If you want to filter some tests, do you need to use a different flag ? Can you check locally if you need to ? I have the slight recollection it isn't exactly the same command. BUT I might be wrong.

@qinqon could you chime in here ? Thanks in advance.

@RamLavi (Contributor, Author) commented Jan 8, 2025

> I mean regarding the flag. If you want to filter some tests, do you need to use a different flag ? Can you check locally if you need to ? I have the slight recollection it isn't exactly the same command. BUT I might be wrong.
>
> @qinqon could you chime in here ? Thanks in advance.

Using the general E2E_TEST_ARGS env var allows us to add whatever flags we wish. For ginkgo the relevant flags are --skip and --focus. I tried focus and it works:

# E2E_TEST_ARGS='--focus="should keep ips after live migration"' make test-e2e
export KUBECONFIG=/root/github.com/kubevirt/ipam-extensions/.output/kubeconfig && \
export PATH=$(pwd)/.output/ovn-kubernetes/bin:${PATH} && \
export REPORT_PATH=$(pwd)/.output/ && \
cd test/e2e && \
/root/github.com/kubevirt/ipam-extensions/bin/ginkgo-v2.22.0 -v --timeout="1h" --junit-report=${REPORT_PATH}/test-e2e.junit.xml --focus='"should keep ips after live migration"'
...
Will run 4 of 20 specs
...

but that's not a new env var.

@maiqueb (Collaborator) commented Jan 8, 2025

> Using the general E2E_TEST_ARGS env var allows us to add whatever flags we wish. For ginkgo the relevant flags are --skip and --focus. I tried focus and it works [command transcript snipped above], but that's not a new env var.

Sure, but for us to keep its current behavior, the value in it must now be different.

All good. Could you add this `E2E_TEST_ARGS='--focus="should keep ips after live migration"' make test-e2e` cmd to the PR description and the git commit msg ?

Then, lgtm / approve / merge :)

In order to manage failed-test artifacts, the process number is added to the
log file name.

e2e run example: `E2E_TEST_ARGS='--focus="should keep ips after live migration"' make test-e2e`

Signed-off-by: Ram Lavi <[email protected]>
@RamLavi (Contributor, Author) commented Jan 9, 2025

Change: Updated commit desc

@maiqueb (Collaborator) left a comment

/approve

Thank you.

@kubevirt-bot kubevirt-bot added the lgtm label (Indicates that a PR is ready to be merged.) on Jan 9, 2025
@kubevirt-bot (Contributor) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: maiqueb

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@kubevirt-bot kubevirt-bot added the approved label (Indicates a PR has been approved by an approver from all required OWNERS files.) on Jan 9, 2025
@maiqueb (Collaborator) commented Jan 9, 2025

/hold

@kubevirt-bot kubevirt-bot added the do-not-merge/hold label (Indicates that a PR should not merge because someone has issued a /hold command.) on Jan 9, 2025
@RamLavi (Contributor, Author) commented Jan 9, 2025

/hold cancel

@kubevirt-bot kubevirt-bot removed the do-not-merge/hold label (Indicates that a PR should not merge because someone has issued a /hold command.) on Jan 9, 2025
@kubevirt-bot kubevirt-bot merged commit e450c8e into kubevirt:main Jan 9, 2025
4 checks passed