I'm going through the wiki as well as the README and existing issues, and I cannot get a clear picture of the decoding/encoding capabilities of vGPU, especially when used with "unsupported" cards and vgpu_unlock.
This isn't an uncommon scenario: multiple VMs needing video "mangling" (encode/decode) support. Specifically:
Is it even possible to use NVDEC/NVENC on vGPU instances? Here I'm referring to arbitrary stream manipulation rather than framebuffer encoding (as in a Looking Glass scenario). (A probe sketch follows this list.)
How are encoding/decoding resources divided among the vGPU instances?
Are there any limitations when using vGPU for decoding/encoding vs. using the host GPU directly?
Is this scenario documented anywhere (even on NVIDIA's side), or should the wiki be updated?
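For context on the first question, here's how I've been probing it from inside a guest. This is a minimal sketch assuming ffmpeg built with NVENC support is installed in the VM; `h264_nvenc` is a standard ffmpeg encoder name, but whether it actually initializes a session depends on the vGPU profile and driver:

```python
import subprocess

def nvenc_available() -> bool:
    """Attempt a 1-second null encode with h264_nvenc.

    Success suggests the vGPU exposes a working NVENC session to this
    guest; failure means the encoder is missing or cannot open a session.
    """
    cmd = [
        "ffmpeg", "-hide_banner", "-loglevel", "error",
        # Generate a synthetic test source so no input file is needed.
        "-f", "lavfi", "-i", "testsrc=duration=1:size=640x360:rate=30",
        "-c:v", "h264_nvenc",
        # Discard the output; we only care whether encoding starts.
        "-f", "null", "-",
    ]
    try:
        return subprocess.run(cmd).returncode == 0
    except FileNotFoundError:
        return False  # ffmpeg not installed in this guest

if __name__ == "__main__":
    print("NVENC usable in this guest:", nvenc_available())
```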