
Enable the vhost backend for the virtio-net interfaces in the KVM hypervisor #4438

Merged 1 commit into lf-edge:master on Nov 17, 2024

Conversation

milan-zededa
Contributor

Using vhost as the backend for virtio-net in a QEMU/KVM setup bypasses QEMU's user-space network packet processing by moving packet handling directly into the Linux kernel. Normally, QEMU would process network I/O in user space, which incurs significant CPU overhead and latency due to frequent context switching between user space (QEMU) and kernel space. With vhost, packet processing is handled by a dedicated kernel thread, avoiding QEMU for most networking tasks. This direct kernel handling minimizes the need for QEMU’s intervention, resulting in lower latency, higher throughput, and better CPU efficiency for network-intensive applications running on virtual machines.
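As a rough illustration, enabling vhost is just an extra option on QEMU's tap netdev argument. The helper below is hypothetical (not EVE's actual code) and the interface names are made up; it only shows where `vhost=on` lands in the QEMU command line:

```go
package main

import "fmt"

// qemuTapNetdev builds the value of a QEMU -netdev argument for a tap
// interface, optionally enabling the in-kernel vhost backend. This is
// a sketch for illustration, not the code changed by this PR.
func qemuTapNetdev(id, ifname string, vhost bool) string {
	arg := fmt.Sprintf("tap,id=%s,ifname=%s,script=no,downscript=no", id, ifname)
	if vhost {
		arg += ",vhost=on"
	}
	return arg
}

func main() {
	// With vhost enabled, packet processing moves to a kernel thread.
	fmt.Println(qemuTapNetdev("net0", "tap0", true))
	// Without it, QEMU handles packets in user space.
	fmt.Println(qemuTapNetdev("net0", "tap0", false))
}
```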

Using vhost is a clear choice with no downsides: it is already included in eve-kernel and does not increase the EVE image size. Reducing QEMU overhead is especially important for EVE, where we enforce cgroup CPU quotas to limit an application to using no more than N CPUs at a time, with N being the number of vCPUs assigned to the app in its configuration (see pkg/pillar/containerd/oci.go, method UpdateFromDomain()). These CPU quotas apply to both the application and QEMU itself, so removing QEMU from packet processing is essential to prevent it from consuming CPU cycles needed by the application.
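The quota arithmetic behind "no more than N CPUs at a time" can be sketched as follows. This is an illustrative reconstruction of the CFS bandwidth math (quota = vCPUs × period), not the actual code in UpdateFromDomain():

```go
package main

import "fmt"

// cfsQuota returns the cgroup CFS quota (in microseconds) that caps a
// workload at vCPUs full CPUs for the given scheduling period. Sketch
// of the quota arithmetic described above, not EVE's implementation.
func cfsQuota(vCPUs int, periodUs int64) int64 {
	return int64(vCPUs) * periodUs
}

func main() {
	// 2 vCPUs with the common 100ms (100000us) period gives 200000us
	// of CPU time per period, shared by the guest AND QEMU's own
	// threads -- which is why keeping QEMU out of the datapath matters.
	fmt.Println(cfsQuota(2, 100000))
}
```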

This PR is part of a series of network performance optimizations coming to EVE; see the accompanying documentation (to be submitted in a separate PR).

@OhmSpectator
Member

I like this from the doc:

Since version 14.0.0, EVE-OS has enabled the vhost backend

So, we already know when to release it =)

@milan-zededa
Contributor Author

I like this from the doc:

Since version 14.0.0, EVE-OS has enabled the vhost backend

So, we already know when to release it =)

For now it is just a placeholder, but this is my prediction :)

Member

@OhmSpectator left a comment

Cool!
I would probably mention explicitly in the documentation that it will not affect the Legacy HVM mode (as vhost works only with virtio front-end devices).

Contributor

@rene left a comment

LGTM

Enable the vhost backend for the virtio-net interfaces in the KVM hypervisor

Using vhost as the backend for virtio-net in a QEMU/KVM setup bypasses QEMU's
user-space network packet processing by moving packet handling directly into
the Linux kernel. Normally, QEMU would process network I/O in user space,
which incurs significant CPU overhead and latency due to frequent context
switching between user space (QEMU) and kernel space. With vhost, packet
processing is handled by a dedicated kernel thread, avoiding QEMU for most
networking tasks. This direct kernel handling minimizes the need for QEMU’s
intervention, resulting in lower latency, higher throughput, and better
CPU efficiency for network-intensive applications running on virtual machines.

Using vhost is a clear choice with no downsides, as it is already included
in eve-kernel and does not increase the EVE image size. Reducing QEMU overhead
is especially important for EVE, where we enforce cgroup CPU quotas to limit
an application to using no more than N CPUs at a time, with N being the number
of vCPUs assigned to the app in its configuration (see pkg/pillar/containerd/oci.go,
method UpdateFromDomain()). These CPU quotas apply to both the application and
QEMU itself, so removing QEMU from packet processing is essential to prevent
it from consuming CPU cycles needed by the application.

Signed-off-by: Milan Lenco <[email protected]>
@milan-zededa
Contributor Author

Cool! I would probably mention explicitly in the documentation that it will not affect the Legacy HVM mode (as vhost works only with virtio front-end devices).

Will do. Actually, good point that vhost has no effect with the e1000 driver, so it makes sense to exclude it from the QEMU config in this case - done (we have one unit test for LEGACY mode which will now check that vhost=on is not present).
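The condition added here can be sketched as a simple check on the NIC model: only the paravirtualized virtio-net frontend has a vhost backend, while fully emulated NICs such as e1000 (used by Legacy HVM mode) do not. The helper and model names below are illustrative assumptions, not the actual EVE code:

```go
package main

import "fmt"

// vhostApplicable reports whether the vhost backend makes sense for a
// given virtual NIC model. vhost accelerates only the virtio-net
// frontend; emulated NICs like e1000 go through QEMU regardless, so
// vhost=on would have no effect there. Hypothetical helper.
func vhostApplicable(model string) bool {
	return model == "virtio-net-pci"
}

func main() {
	fmt.Println(vhostApplicable("virtio-net-pci")) // virtio frontend: use vhost
	fmt.Println(vhostApplicable("e1000"))          // Legacy HVM: skip vhost
}
```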

@OhmSpectator
Member

Cool! I would probably mention explicitly in the documentation that it will not affect the Legacy HVM mode (as vhost works only with virtio front-end devices).

Will do. Actually, good point that vhost has no effect with the e1000 driver, so it makes sense to exclude it from the QEMU config in this case - done (we have one unit test for LEGACY mode which will now check that vhost=on is not present).

That's even better, thanks!

@OhmSpectator OhmSpectator merged commit 2db86b8 into lf-edge:master Nov 17, 2024
41 checks passed