Comment by scottjg
1 day ago
two semi-interesting things to note about this:
1. Virtualization.framework seems to support some form of GPU passthrough from the host (granted, not eGPU - it's for the integrated GPU). I think the primary use case is having macOS guests get acceleration, while still sharing GPU time with the host. There is also a patch that recently hit QEMU mainline that supports using the "venus server" with virtio-gpu to support a similar functionality for Linux guests under Hypervisor.framework.
2. Apple internally has some kind of PCI passthrough support available in Virtualization.framework. It seems like the code is shipped to customers in the framework, but it relies on some kind of kext or kernel component that isn't shipped in retail macOS. I can't say if that's ever intended to be released to customers, but clearly someone at Apple has thought about this feature.
I experimented with booting Arm macOS 14-26 in QEMU a while back, building on the work of Alexander Graf for macOS 12-13, and reverse-engineered substantial parts of Hypervisor.framework, the in-kernel hypervisor, and a bit of Virtualization.framework. Got newer versions of Sequoia to boot past the login screen, with GPU acceleration too.
Unless there's another method I missed, the internal GPU "passthrough" of Virtualization.framework you're thinking of might actually just be paravirtualization, at least that's what the name suggests. It's implemented in the public ParavirtualizedGraphics framework [0], although the interfaces relevant to PG on Arm macOS are private [1]. I haven't looked that deep into it per se, but while fixing the bugs around it I've run into a few clues suggesting that it's just a command stream + shared memory being passed around. It also uses its own generic driver on the guest side.
Great job, by the way! Love how authors of pieces like this casually come here to comment :)
[0] https://developer.apple.com/documentation/paravirtualizedgra...
[1] https://github.com/qemu/qemu/blob/edcc429e9e41a8e0e415dcdab6...
FYI: https://patchew.org/QEMU/20260324204855.29759-1-mohamed@unpr...
There's some randomness around Tahoe with FileVault: it can crash because the Data volume is detected as not encrypted (which isn't OK on bare metal). If you hit that case, you might need to enable FileVault inside the VM (and remember to sync the aux storage afterwards if you haven't).
Looks like someone beat me to it! Thanks!
I see the author of this patch set has run into a few similar issues as me. Strangely, not all of them: I don't see patches for the new PCI MSI-X device introduced somewhere in Sequoia (IIRC), a source of kernel panics for me; and there's still a bug in the PG MMIO path that casts all writes to 32-bit, which caused me a lot of headaches (no errors, but no video adapter detected). I'm somewhat surprised to hear that this works!
There are two significant problems that we've both come across:
- LLVM now favoring pre-indexed loads/stores, which trap with ISV=0 on MMIO accesses and end up in the GIC initialization path of the newer macOS kernels (looks like there's a separate patch set for this [0]),
- Hypervisor.framework trapping PAC HVC calls.
It seems like the latter has been worked around here by signing QEMU with an Apple-private entitlement and running with SIP off, but there's actually a different way! While some HVCs are trapped right in the host kernel, the PAC trapping happens within Hypervisor.framework itself (at least in the host OSes I tested). It's possible to patch out this functionality without special privileges, or to talk to the in-kernel hypervisor directly. I originally tested with the former, and chose to implement the latter as a separate accel in the code I was planning to submit upstream, but the complexity of it exploded and, besides confirming that it would have worked, I haven't managed to finish my implementation.
Does the Tahoe crash you mentioned manifest itself in stage 2 iBoot panics? I must admit 26 was never quite my priority so I haven't looked into it, but if so, it might have been closer to booting than I thought :)
[0] https://lists.nongnu.org/archive/html/qemu-devel/2026-04/msg...
thanks!
there also appears to be a generic pci passthrough path. we were discussing it on the qemu-devel list: https://lore.kernel.org/qemu-devel/C35B5E97-73F2-4A60-951B-B...
Oh, thanks for letting me know, and for the upstreaming work too! I might join the party once I find some more time :)