Comment by theYipster
13 days ago
IOMMU pass-through is the next feature I'm working on, but I felt it was time to release v1. Currently, vm-curator supports:
- VM creation with over 100 different OS profiles, built for KVM and emulation
- 3D para-virtualization support using virtio-vga-gl (virgl)
- UEFI and TPM support (auto-configured for OSes that need it, like Windows 11)
- QCOW2 snapshot support
- USB pass-through support and management
There is also a rich metadata library with ASCII art, OS descriptions, and fun facts.
VM creation with IOMMU will require the following for GPU pass-through:
- a motherboard capable of proper IOMMU support (a quick check is sketched below the list)
- 2+ GPUs, plus a dummy HDMI or DP 1.4 plug for the passed-through GPU
- Looking Glass for display
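As a quick sanity check for the IOMMU requirement (my own sketch, not something the app does for you): enable the IOMMU in firmware, pass amd_iommu=on or intel_iommu=on on the kernel command line depending on the platform (recent kernels often enable the AMD IOMMU by default), and confirm that groups actually show up:

    # confirm the kernel saw an IOMMU at boot
    dmesg | grep -iE 'iommu|dmar|amd-vi'
    # if this directory is populated, IOMMU groups are active
    ls /sys/kernel/iommu_groups/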
vm-curator can also host and manage other GPU pass-through configurations, since the application supports editing each VM's launch script, but the profile above is what I'm planning to build into the creator system.
I have a TRX40 (Threadripper) motherboard, which will serve as an ample test-bed, but I still need to acquire a second GPU.
Btw, this feature is now available in v0.2.x! vm-curator supports single-GPU pass-through (tested locally) and multi-GPU pass-through via Looking Glass (experimental; needs testing).
Single-GPU pass-through relies on a script (run outside the app) to disconnect the GPU from the current X.org or Wayland session and attach it to the running VM; when the VM shuts down, the script reverses the process. This means you can only run one VM at a time with your main display and peripherals, and while that VM is running you can't use the host from its display and peripherals (though you can always SSH into it).
This is the common process for getting single-GPU pass-through to work. vm-curator helps prepare the system and generates the scripts automatically.
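For anyone curious what such a script typically does, here is a minimal sketch of the usual libvirt hook pattern. This is my own illustration, not vm-curator's generated script: the VM name, PCI addresses, and display-manager unit are assumptions.

    #!/bin/sh
    # /etc/libvirt/hooks/qemu -- libvirtd calls this on every VM lifecycle event:
    #   $1 = VM name, $2 = operation (prepare/release/...), $3 = sub-operation
    GPU=pci_0000_01_00_0        # hypothetical PCI address of the passed-through GPU
    GPU_AUDIO=pci_0000_01_00_1  # its HDMI audio function

    if [ "$1" = "win11" ] && [ "$2" = "prepare" ]; then
        systemctl stop display-manager    # tear down the X.org/Wayland session
        # real single-GPU scripts usually also unbind the framebuffer/VT consoles
        # and unload the host GPU driver (nvidia/amdgpu) at this point
        modprobe vfio-pci
        virsh nodedev-detach "$GPU"       # hand the GPU to vfio-pci
        virsh nodedev-detach "$GPU_AUDIO"
    elif [ "$1" = "win11" ] && [ "$2" = "release" ]; then
        virsh nodedev-reattach "$GPU_AUDIO"
        virsh nodedev-reattach "$GPU"     # give the GPU back to the host
        systemctl start display-manager
    fi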
Multi-GPU pass-through is designed to run with Looking Glass, but it can also support physical KVM switching if the user prefers.
Systems with an iGPU (which uses system RAM) + a dGPU (with its own dedicated RAM) support GPU passthrough, IIUC.
With the proprietary Nvidia Linux module, these environment variables cause processes to run on the Nvidia dGPU instead of the iGPU: https://download.nvidia.com/XFree86/Linux-x86_64/435.17/READ...
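As far as I recall, the variables that README describes for PRIME render offload are these (glxinfo and vkcube are just stand-in test programs from mesa-utils / vulkan-tools):

    # run one program on the Nvidia dGPU instead of the iGPU (GLX offload)
    __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL renderer"
    # Vulkan applications additionally honor the Optimus layer variable
    __NV_PRIME_RENDER_OFFLOAD=1 __VK_LAYER_NV_optimus=NVIDIA_only vkcube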
EnvyControl and supergfxctl support selecting between modes (integrated / hybrid / nvidia) to specify whether processes run on the iGPU or the dGPU(s). https://github.com/bayasdev/envycontrol#hybrid
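Typical invocations look roughly like this (from memory; flags and mode names differ a bit between versions, so treat it as a sketch and check each tool's --help):

    # EnvyControl: query the current mode, then switch
    envycontrol --query
    sudo envycontrol --switch hybrid     # or: integrated, nvidia

    # supergfxctl: same idea, different spelling of the modes
    supergfxctl --get
    supergfxctl --mode Hybrid            # or: Integrated, Vfio, ...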
Bazzite ships supergfxctl and the Nvidia modules in its OCI system images ("Native Containers"; ublue-os).
IIRC from trying (a while ago) to run a Windows VM with GPU passthrough to the dGPU, a device-selection GUI would've helped.
Arch wiki > Supergfxctl > 5.1 Using supergfxctl for GPU passthrough (VFIO) https://wiki.archlinux.org/title/Supergfxctl#Using_supergfxc...
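If I remember that section right, the point is that supergfxd can rebind the dGPU to vfio-pci itself, roughly like this (hedged; the config key and mode names may have changed between versions):

    # allow the Vfio mode first, e.g. "vfio_enable": true in /etc/supergfxd.conf
    supergfxctl --mode Vfio     # binds the dGPU to vfio-pci for passthrough
    supergfxctl --mode Hybrid   # hands it back to the host afterwards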
Linux for ROG notebooks > VFIO dGPU Passthrough Guide > VM Creation Walkthrough: https://asus-linux.org/guides/vfio-guide/#vm-creation-walkth...
> After running this, the terminal will display a list of all your PCI devices, listed by their IOMMU group. Skim through the list until you find the IOMMU group that contains your dGPU.
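The "this" being run is, I believe, the standard IOMMU-group listing script that also appears on the Arch wiki; roughly:

    #!/bin/bash
    # print every PCI device, grouped by IOMMU group
    shopt -s nullglob
    for g in /sys/kernel/iommu_groups/*; do
        echo "IOMMU Group ${g##*/}:"
        for d in "$g"/devices/*; do
            echo -e "\t$(lspci -nns "${d##*/}")"
        done
    done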
But then under "SELinux considerations" it says: https://asus-linux.org/guides/vfio-guide/#selinux-considerat... :
> /etc/libvirt/qemu.conf and find this line: