Comment by gavinray
20 hours ago
Can someone explain the use case for this?
Is this for people who want to run their own cloud provider, or who need to manage the infrastructure of org-owned VMs?
When would you use this over k8s or serverless container runtimes like Fargate/Cloud Run?
> Can someone explain the use case for this?
Use cases are almost the same as Proxmox's. You can orchestrate system containers or VMs. Proxmox runs LXC container images, while Incus is built on top of LXC and has its own container images.
System vs. application containers: both share the host kernel. Application containers usually run only a single application, like a web app (e.g. OCI containers). System containers are more like VMs, with systemd inside managing multiple apps. Note: this distinction is often ambiguous.
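To make that concrete, here's a rough sketch driving the incus CLI from Python. The instance names and image alias are placeholders I picked, not from the comment; check `incus image list images:` for what's actually available on your setup.

```python
#!/usr/bin/env python3
"""Rough sketch: launch the same image as a system container and as a VM.
Instance names ("web-ct", "web-vm") and the image alias are examples only."""
import subprocess

def incus(*args):
    # Thin wrapper around the incus CLI; raises if a command fails.
    subprocess.run(["incus", *args], check=True)

# A system container: shares the host kernel and boots systemd inside.
incus("launch", "images:debian/12", "web-ct")

# The same image as a real VM (its own kernel under QEMU/KVM): add --vm.
incus("launch", "images:debian/12", "web-vm", "--vm")

# Both are managed with the same commands afterwards; the container
# reports the host's kernel, the VM reports its own.
incus("exec", "web-ct", "--", "uname", "-r")
```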
> Is this for people who want to run their own cloud provider, or who need to manage the infrastructure of org-owned VMs?
Yes, you could build a private cloud with it.
> When would you use this over k8s or serverless container runtimes like Fargate/Cloud Run?
You would use it when you need traditional application setups inside VMs or system containers, instead of containerized microservices.
I actually use Incus containers as host nodes for testing full-fledged multi-node K8s setups.
I know a web hosting provider that used one VM for every user; they've now moved to this. First, low resource usage. Second, if you use ZFS or Btrfs you save storage, since common bits are not duplicated across system containers. Note that these are system containers, not traditional application containers: one can be rebooted and comes back with its previous state. It is not ephemeral.
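As a rough illustration of the storage angle (pool name, size and image alias are made up, and exact flags may vary by version), something like this gives you a ZFS pool where instances launched from one image start out as copy-on-write clones:

```python
#!/usr/bin/env python3
"""Rough sketch: a loop-file-backed ZFS storage pool plus a few
per-customer system containers cloned from the same image, so shared
blocks exist only once on disk. Names and sizes are made up; the host
needs ZFS tooling installed."""
import subprocess

def incus(*args):
    subprocess.run(["incus", *args], check=True)

# Loop-file backed ZFS pool (a real deployment would point at a disk/dataset).
incus("storage", "create", "tank", "zfs", "size=50GiB")

# Each "customer" gets its own system container on that pool; launching from
# the same image means the root filesystems start as cheap CoW clones.
for name in ("cust1", "cust2", "cust3"):
    incus("launch", "images:debian/12", name, "--storage", "tank")
```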
System container tech like Incus powers efficient Linux virtualization environments for developers, so that you can have only one VM but many "machines". OrbStack machines on macOS work like this, and the way WSL works is similar (one VM and one Linux kernel, many guests sharing that kernel via system containers).
Just in case: I'm using LXD inside my WSL and it's working great. Btrfs-backed storage via a loop file saves $$$.
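For reference, a loose sketch of that LXD-in-WSL setup (the pool name, size and image are mine, not from the comment above); when no source path is given, LXD backs the pool with a loop file:

```python
#!/usr/bin/env python3
"""Rough sketch: a Btrfs pool backed by a loop file, used as the root
disk for new LXD containers. Pool name and size are made up."""
import subprocess

def lxc(*args):
    subprocess.run(["lxc", *args], check=True)

# With no source given, LXD creates a loop file of this size and
# formats it as Btrfs.
lxc("storage", "create", "wslpool", "btrfs", "size=30GiB")

# Launch a container whose root disk lives on that pool.
lxc("launch", "ubuntu:22.04", "dev1", "--storage", "wslpool")
```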
For others, here's why it may be useful in a regular sysadmin job (see the sketch after this list):
* running Ansible against a LOCAL network is a hell of a lot faster than against remote machines 300+ ms away
* creating a 3-node MariaDB test cluster: easy peasy
* multiple distros available: need to debug HAProxy on, say, Rocky Linux 8? Check!
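A minimal sketch of that kind of workflow, driving the incus CLI from Python; node names, the image alias and the inventory group are invented, and in practice you'd wait for DHCP properly instead of sleeping:

```python
#!/usr/bin/env python3
"""Rough sketch: spin up three throwaway system containers as a local
MariaDB test "cluster" and print a minimal Ansible inventory for them.
Node names, image alias and group name are made up."""
import subprocess
import time

NODES = ["db1", "db2", "db3"]

def incus(*args, capture=False):
    return subprocess.run(["incus", *args], check=True,
                          capture_output=capture, text=True)

for name in NODES:
    incus("launch", "images:rockylinux/8", name)

time.sleep(5)  # crude: give the containers a moment to get a DHCP lease

# Name + IPv4 columns in CSV form, e.g. "db1,10.27.52.10 (eth0)"
out = incus("list", "-c", "n4", "-f", "csv", capture=True).stdout
addresses = dict(line.split(",", 1) for line in out.strip().splitlines() if line)

print("[mariadb]")
for name in NODES:
    ip = addresses.get(name, "").split(" ")[0]
    print(f"{name} ansible_host={ip}")
```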
Thanks -- though I'm not sure I fully grok how this is different than something like Firecracker?
Firecracker has some elements in common -- namely, images meant to run on Firecracker use a non-standard init system so they can boot more quickly than machines that have to deal with real hardware and a wider variety of use cases. The same approach is typically used for the guest VMs that host containers in systems like WSL and OrbStack.
But Firecracker is fundamentally different because it has a different purpose: Firecracker is about offering VM-based isolation for systems that have container-like ephemerality in multitenant environments, especially the cloud. So when you use Firecracker, each system has its own kernel running under its own paravirtualized hardware.
With OrbStack and WSL, you have only one kernel for all of your "guests" (which are container guests, rather than hardware paravirtualized guests). In exchange you're working with something that's simpler in some ways, more efficient, has less resource contention, etc. And it's easier to share resources between containers dynamically than across VMs, so it's very easy to run 10 "machines" but only allocate 4GB of RAM or whatever, and have it shared freely between them with little overhead. They can also share Unix sockets (like the socket for Docker or a Kubernetes runtime) directly as files, since they share a kernel -- no need for some kind of HTTP-based socket forwarding across virtualized network devices.
I imagine this is nice for many use cases, but as you can imagine, it's especially nice for local development. :)
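On the socket-sharing point above, here's a loose sketch of the equivalent on the Incus side: a disk device is just a bind mount, and because container and host share one kernel, a Unix socket placed in the shared directory works on both sides. The instance, device name and paths are made up, and the host directory is assumed to already exist.

```python
#!/usr/bin/env python3
"""Rough sketch: bind-mount a host directory into a system container so
processes on either side can talk over a Unix socket placed in it.
Instance, device and path names are made up."""
import subprocess

def incus(*args):
    subprocess.run(["incus", *args], check=True)

incus("launch", "images:debian/12", "dev1")

# A disk device is a bind mount; no network forwarding involved. Whether a
# given socket is usable also depends on permissions and uid mapping.
incus("config", "device", "add", "dev1", "shared", "disk",
      "source=/srv/shared-sockets", "path=/mnt/shared")
```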
There's no particular use case, though I do know of a company whose entire infrastructure is maintained within Incus.
I personally use it mostly for deploying a bunch of system containers and some OCI containers.
But anyone who uses LXC, LXD, docker, libvirt, qemu etc. could potentially be interested in Incus.
Incus is just an LXD fork btw, developed by Stephane Graber.
Who also developed LXD and contributed to LXC. I wouldn’t say it’s just a fork but a continuation of the project without Canonical.
You're right, I should've worded it differently.
>a continuation of the project without Canonical.
This being a big plus considering Canonical just outed itself as a safe space for pedophiles. Not being a pedo-bar is a good thing.