Enable the vhost backend for the virtio-net interfaces in the KVM hypervisor #4438
Conversation
I like this from the doc:
So, we already know when to release it =)
For now it is just a placeholder, but this is my prediction :)
Cool!
I would probably mention explicitly in the documentation that it will not affect the Legacy HVM mode (as vhost works only with virtio front-end devices).
LGTM
Will do. Actually, good point that there is no effect for the e1000 driver, so it makes sense to exclude vhost from the qemu config in this case - done (we have one unit test for LEGACY mode which will now check that vhost=on is not present).
That's even better, thanks!
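For illustration, the LEGACY-mode check mentioned above could look roughly like this. This is a hypothetical Go sketch, not the actual EVE unit test: `generateQemuConfig` and the config strings are stand-ins invented for the example.

```go
// Hypothetical test sketch: the generated QEMU config for LEGACY (e1000)
// mode must not enable the vhost backend, since vhost only applies to
// virtio-net front-ends. Not the actual EVE unit test.
package hypervisor

import (
	"strings"
	"testing"
)

// generateQemuConfig is a stand-in for whatever produces the QEMU
// options for a domain in the given virtualization mode.
func generateQemuConfig(virtMode string) string {
	if virtMode == "LEGACY" {
		// Emulated e1000 NIC: no virtio front-end, so no vhost.
		return "-device e1000,netdev=net0 -netdev tap,id=net0,ifname=tap0"
	}
	// Virtio front-end: enable the vhost kernel backend on the tap netdev.
	return "-device virtio-net-pci,netdev=net0 -netdev tap,id=net0,ifname=tap0,vhost=on"
}

func TestLegacyModeHasNoVhost(t *testing.T) {
	cfg := generateQemuConfig("LEGACY")
	if strings.Contains(cfg, "vhost=on") {
		t.Errorf("LEGACY mode config must not enable vhost: %s", cfg)
	}
}
```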
Using vhost as the backend for virtio-net in a QEMU/KVM setup bypasses QEMU's user-space network packet processing by moving packet handling directly into the Linux kernel. Normally, QEMU would process network I/O in user space, which incurs significant CPU overhead and latency due to frequent context switching between user space (QEMU) and kernel space. With vhost, packet processing is handled by a dedicated kernel thread, avoiding QEMU for most networking tasks. This direct kernel handling minimizes the need for QEMU’s intervention, resulting in lower latency, higher throughput, and better CPU efficiency for network-intensive applications running on virtual machines.
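In practice the difference boils down to adding vhost=on to the tap netdev that backs the virtio-net device. Below is a minimal illustrative Go sketch: the option syntax is standard QEMU command-line syntax, but the helper virtioNetArgs and the interface/MAC values are made up for the example and are not EVE's actual config-generation code.

```go
// Illustrative sketch: building QEMU arguments for a virtio-net NIC
// with and without the vhost kernel backend.
package main

import "fmt"

func virtioNetArgs(tapIfName, mac string, useVhost bool) []string {
	netdev := fmt.Sprintf("tap,id=net0,ifname=%s,script=no,downscript=no", tapIfName)
	if useVhost {
		// vhost=on moves virtio-net packet processing into a kernel
		// thread (vhost_net), bypassing QEMU's user-space data path.
		netdev += ",vhost=on"
	}
	return []string{
		"-netdev", netdev,
		"-device", fmt.Sprintf("virtio-net-pci,netdev=net0,mac=%s", mac),
	}
}

func main() {
	fmt.Println(virtioNetArgs("tap0", "52:54:00:12:34:56", true))
}
```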
Using vhost is a clear choice with no downsides, as it is already included in eve-kernel and does not increase the EVE image size. Reducing QEMU overhead is especially important for EVE, where we enforce cgroup CPU quotas to limit an application to using no more than N CPUs at a time, with N being the number of vCPUs assigned to the app in its configuration (see pkg/pillar/containerd/oci.go, method UpdateFromDomain()). These CPU quotas apply to both the application and QEMU itself, so removing QEMU from packet processing is essential to prevent it from consuming CPU cycles needed by the application.
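For context, the cgroup CFS controller expresses "at most N CPUs" as quota = N × period. The sketch below only illustrates that relationship under the default 100 ms CFS period; it is not the UpdateFromDomain() implementation.

```go
// Rough sketch of how "at most N CPUs" maps onto cgroup CFS parameters
// (illustrative only; see pkg/pillar/containerd/oci.go, UpdateFromDomain(),
// for the real logic).
package main

import "fmt"

const cfsPeriodUs = 100000 // default CFS period: 100 ms

// cfsQuotaForVCPUs returns the CFS quota that caps a cgroup at vCPUs
// full CPUs worth of runtime per period.
func cfsQuotaForVCPUs(vCPUs int64) int64 {
	return vCPUs * cfsPeriodUs
}

func main() {
	// An app with 2 vCPUs gets 200 ms of CPU time per 100 ms period,
	// shared by the guest and QEMU's own threads in the same cgroup.
	fmt.Println(cfsQuotaForVCPUs(2))
}
```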
This PR is part of a series of network performance optimizations coming into EVE; see the documentation (to be submitted in a separate PR).