Comment by pxc
18 hours ago
System container tech like Incus powers efficient Linux virtualization environments for developers, so that you can have only one VM but many "machines". OrbStack machines on macOS work like this, and the way WSL works is similar (one VM and one Linux kernel, many guests sharing that kernel via system containers).
Just in case -- I'm using LXD inside my WSL and it's working great. BTRFS-backed storage via a loop file saves $$$.
For others, here's why it may be useful in a regular sysadmin job:
* running Ansible scripts against the LOCAL network is vastly faster than against remote machines 300+ ms away
* spinning up a 3-node MariaDB test cluster is easy peasy
* multiple distros are available -- need to debug HAProxy on, say, Rocky Linux 8? Check!
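The workflow the list above describes can be sketched with the LXD CLI. This is illustrative only: it assumes an already-initialized LXD, and the image alias and container names (`db1`..`db3`) are placeholders, not anything from the comment.

```shell
# Launch three throwaway containers for a test cluster.
# (image alias is an assumption -- substitute whatever your LXD remote offers)
for i in 1 2 3; do
  lxc launch ubuntu:22.04 "db$i"
done

# They get local IPs on the LXD bridge -- low-latency targets for Ansible.
lxc list

# Run commands inside a container directly, no SSH needed.
lxc exec db1 -- apt-get install -y mariadb-server

# Tear it all down when the test is done.
for i in 1 2 3; do
  lxc delete --force "db$i"
done
```

Because the containers share the host kernel, launching and destroying them takes seconds, which is what makes this practical for quick debugging loops.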
Thanks -- though I'm not sure I fully grok how this is different from something like Firecracker?
Firecracker has some elements in common -- namely, images meant to run on Firecracker use a non-standard init system so they can boot more quickly than machines that have to deal with real hardware and a wider variety of use cases. The same trick is typically used for the guest VMs that host the containers in systems like WSL and OrbStack.
But Firecracker is fundamentally different because it has a different purpose: Firecracker is about offering VM-based isolation for systems that have container-like ephemerality in multitenant environments, especially the cloud. So when you use Firecracker, each system has its own kernel running under its own paravirtualized hardware.
With OrbStack and WSL, you have only one kernel for all of your "guests" (which are container guests, rather than hardware-paravirtualized guests). In exchange you're working with something that's simpler in some ways, more efficient, has less resource contention, etc. And it's easier to share resources between containers dynamically than across VMs, so it's very easy to run 10 "machines" but only allocate 4GB of RAM or whatever, and have it shared freely between them with little overhead. They can also share Unix sockets (like the socket for Docker or a Kubernetes runtime) directly as files, since they share a kernel -- no need for some kind of HTTP-based socket forwarding across virtualized network devices.
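The socket-sharing point rests on the fact that a Unix domain socket is addressed by a filesystem path, so anything that can see the file can connect. A minimal Python sketch of that property, with two plain processes-worth of endpoints on one machine standing in for two system containers sharing a mount:

```python
import os
import socket
import tempfile
import threading

# A Unix domain socket is just a path on the filesystem; no network
# stack or port forwarding is involved.
path = os.path.join(tempfile.mkdtemp(), "demo.sock")

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)
server.listen(1)

def serve():
    # Accept one connection and answer it, like a daemon behind
    # a shared socket (e.g. a container runtime) would.
    conn, _ = server.accept()
    conn.sendall(b"hello from the shared socket")
    conn.close()

t = threading.Thread(target=serve)
t.start()

# The "other container": it only needs read/write access to the path.
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(path)
msg = client.recv(1024).decode()
print(msg)  # hello from the shared socket
client.close()

t.join()
server.close()
os.unlink(path)
```

This is exactly why a single Docker or containerd socket can be handed to many system containers at once: with one shared kernel, sharing the socket is just sharing a file.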
I imagine this is nice for many use cases, but as you can imagine, it's especially nice for local development. :)