A little bit of context about where Incus came from:
https://lwn.net/Articles/940684/
So it looks like a Proxmox alternative, this [0] goes into some reasons to switch. Main selling point seems to be fully OSS and no enterprise version.
[0]: https://tadeubento.com/2024/replace-proxmox-with-incus-lxd/
It’s more like a Kubernetes alternative
Proxmox feels like a more apt comparison, as both act as a control plane for KVM virtual machines and LXC containers across one or more hosts.
If you are interested in running Kubernetes on top of Incus (that is, with your Kubernetes cluster nodes made up of KVM or LXC instances), I highly recommend the Cluster API provider for Incus: https://github.com/lxc/cluster-api-provider-incus
This provider is really well done and well maintained, including ClusterClass support and an array of pre-built machine images for both KVM and LXC. It also supports pivoting the management cluster onto a workload cluster, enabling the management cluster to upgrade itself, which is really cool.
I was surprised to come across this provider by chance; for some reason it's not listed in the CAPI documentation's provider list: https://cluster-api.sigs.k8s.io/reference/providers
Not really. Kubernetes does a lot of different things that are out of scope for Incus, or LXD, or Docker Compose for that matter, or any hypervisor, or …
Incus is great when developing Ansible playbooks. The main benefit for me over docker/podman is that systemd works out of the box in Incus containers.
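A rough sketch of that workflow (the image alias, instance name, and playbook are just placeholder examples, and the `community.general.incus` connection plugin assumes a recent community.general collection):

```sh
# Spin up a fresh system container with a working systemd inside
incus launch images:debian/12 ansible-target

# Confirm systemd is actually running (this is what docker/podman
# containers typically lack out of the box)
incus exec ansible-target -- systemctl is-system-running --wait

# Point ansible at it, e.g. via the incus connection plugin
ansible-playbook -i 'ansible-target,' -c community.general.incus playbook.yml

# Throw the instance away and start clean for the next iteration
incus delete --force ansible-target
```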
I have never actually tested it, but my understanding is that systemd also works out of the box inside of podman containers: https://docs.podman.io/en/latest/markdown/podman-run.1.html#...
Not to mention the easy to use web UI.
What makes it better than Vagrant for this use-case?
Doesn't Vagrant spin up full VMs? Incus/LXD/LXC is about system containers. So like Docker, but with a full distro, including an init system, running inside the container. They are much faster to spin up and offer the best possible resource sharing.
Vagrant is not the right comparison to Incus for this use case. Vagrant is used to spin up VM or system container instances configured for software development and testing, but Vagrant doesn't create those VMs or containers by itself. Instead, it depends on virtualization/container providers like VMware, VirtualBox, libvirt, or lxc. In fact, you could create a provider plugin for Vagrant to use Incus as its container/VM back end. (I couldn't find any, though.)
https://github.com/hashicorp/vagrant/blob/v2.4.7/LICENSE for one thing
The only tool I've found that lets you easily spin up pre-configured VMs without any GUI hassle.
Can someone explain the use case for this?
Is this for people who want to run their own cloud provider, or that need to manage the infrastructure of org-owned VM's?
When would you use this over k8s or serverless container runtimes like Fargate/Cloudrun?
> Can someone explain the use case for this?
Use cases are almost the same as Proxmox's. You can orchestrate system containers or VMs. Proxmox runs LXC container images, while Incus is built on top of lxc and has its own container images.
System vs. application containers: both share the host's kernel. Application containers usually run only a single application, like a web app (e.g. OCI containers). System containers are more like VMs, with systemd and multiple apps managed by it. Note: this distinction is often ambiguous.
> Is this for people who want to run their own cloud provider, or that need to manage the infrastructure of org-owned VM's?
Yes, you could build a private cloud with it.
> When would you use this over k8s or serverless container runtimes like Fargate/Cloudrun?
You would use it when you need traditional application setups inside VMs or system containers, instead of containerized microservices.
I actually use Incus containers as host nodes for testing full-fledged multi-node K8s setups.
I know a web hosting provider that used one VM for every user. They have now moved to this. The first benefit is low resource usage. If you use ZFS or btrfs, you can also save storage, since common bits are not duplicated across system containers. Note that this is a system container, not a traditional application container: it can be rebooted and keep its previous state. It is not ephemeral.
System container tech like Incus powers efficient Linux virtualization environments for developers, so that you can have only one VM but many "machines". OrbStack machines on macOS work like this, and the way WSL works is similar (one VM and one Linux kernel, many guests sharing that kernel via system containers).
Thanks -- though I'm not sure I fully grok how this is different from something like Firecracker?
There's no one particular use case, though I do know of a company whose entire infrastructure is maintained within Incus.
I personally use it mostly for deploying a bunch of system containers and some OCI containers.
But anyone who uses LXC, LXD, docker, libvirt, qemu etc. could potentially be interested in Incus.
Incus is just an LXD fork btw, developed by Stephane Graber.
Who also developed LXD and contributed to LXC. I wouldn’t say it’s just a fork but a continuation of the project without Canonical.
I went through the online tutorial, but I'm not really seeing how it's different from docker?
Instead of ephemeral containers, you have instances that are like VMs (and Incus can manage VMs via QEMU), so pretty much everything you would use a VM for, if you do not need the kernel separation. It's more similar to FreeBSD jails than to Docker.
It's a difference between system containers and application containers.
LXC containers used in Incus run their own init; they act more like a VM.
However, Incus can also run actual VMs via QEMU and, since recently, even OCI containers like Docker's.
I first learned about this because colima supports it: https://github.com/abiosoft/colima#incus
The features worth mentioning, IMHO, are the different storage backends and their capabilities. Using btrfs, LVM, or ZFS, there is some level of support for thin provisioning and snapshotting. I believe btrfs and ZFS have parity in terms of supported operations. Cheap snapshots and provisioning of both containers and VMs using the same tool is pretty awesome.
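As a sketch of what that looks like in practice (pool name, image, and snapshot name here are arbitrary examples; exact flags can vary by Incus version):

```sh
# Create a ZFS-backed storage pool (btrfs and lvm work similarly)
incus storage create fastpool zfs

# Launch a container on that pool
incus launch images:debian/12 c1 --storage fastpool

# Cheap, near-instant snapshot, then roll back to it
incus snapshot create c1 before-upgrade
incus snapshot restore c1 before-upgrade

# Thin copies: a clone initially shares unchanged blocks with c1
incus copy c1 c2
```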
I personally use LXD for running my homelab VMs and containers.
How do you handle updating the machine that Incus itself runs on? I imagine you have to be super careful not to introduce any breakage, because then all the VMs/containers go down.
What about kernel updates that require reboots? I have heard of ksplice/kexec, but I have never seen them used anywhere.
As with any such system, you need a spare box. Upgrade the spare, move the clients to it, upgrade the original.
But then the clients have downtime while they’re being moved.
What can this work with? It says "Containers and VMs" - I guess that's LXC containers and QEMU VMs?
Yes, it uses QEMU under the hood for VMs and runs LXC containers. Since recently, you can also run Docker images in it. Very handy, especially since it has first-class remote support: you can install only the incus client, and when doing `incus launch` or whatever, it will transparently start the container/VM on your remote host.
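A minimal sketch of that remote workflow (host and instance names are placeholders, and it assumes the server has already been exposed over HTTPS and trust has been established):

```sh
# One-time: register the remote incus server from the client machine
incus remote add homelab https://homelab.example.net:8443

# Launch instances on the remote by prefixing its name
incus launch images:debian/12 homelab:web1        # LXC system container
incus launch images:debian/12 homelab:vm1 --vm    # QEMU/KVM virtual machine

# OCI/docker images work too, via an OCI-protocol remote
incus remote add oci-docker https://docker.io --protocol=oci
incus launch oci-docker:nginx homelab:nginx1
```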
Nothing about resource (net, IO, disk, CPU) isolation, limits, priorities, or guarantees. Not the same as a type 1 hypervisor. These qualities are needed to run things safely and predictably in the real world™, at scale. Also, accounting and multitenancy, if it's going to be used as some sort of VAR or VPS offering.
Fun fact: Incus is used as the underlying infrastructure for the NorthSec CTF, i.e. in an "as hostile as it can get" environment. If you have close to a hundred teams of hackers on your systems trying to break stuff, I think that speaks for Incus and its capabilities regarding isolation and limits.
In case you are interested, Zabbly has some interesting behind-the-scenes videos on YouTube (not affiliated).
I would guess <https://www.youtube.com/watch?v=7A1yrLRNIp0> is a good starting point "Looking at the NorthSec infrastructure" from April, 2024
The YT description also points to https://github.com/zabbly/incus
Took a few seconds of googling to find this: https://linuxcontainers.org/incus/docs/main/reference/instan...
Incus supports QEMU/KVM VMs, and KVM is arguably a type 1 hypervisor since it's part of the Linux kernel. So I guess it qualifies?
Should lxc users migrate to Incus?
Short answer: No. Long answer: Depends upon what you use lxc for.
Incus is not a replacement for lxc. It's an alternative to LXD (the LXD project is still active). Both Incus and LXD are built upon liblxc (the library version of lxc) and provide a higher-level user interface than lxc (e.g. projects, cloud-init support, etc.). However, lxc gives you fine-grained control over container options (the relationship is a bit like that between flatpak and bubblewrap).
So, if you don't need lxc's fine-grained control, Incus may be a more ergonomic solution.
PS: Confusingly enough, LXD's CLI is also named lxc.
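To make the contrast concrete, here is roughly the same container in both tools (distro, release, and instance names are arbitrary examples):

```sh
# Classic lxc tooling: lower level, per-container config files
lxc-create -n c1 -t download -- -d debian -r bookworm -a amd64
lxc-start -n c1
lxc-attach -n c1

# Incus: image-based, higher-level workflow, same liblxc underneath
incus launch images:debian/12 c1
incus config set c1 limits.memory=1GiB
incus exec c1 -- bash
```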
An LXD user should.
LXD is actually pronounced “lex-d” by the community and similarly, LXC is “lex-c”.
Not to be confused with the cirrus7 incus[0], which are fanless PC models based on the ASRock DeskMini series that I'm using right now.
[0] https://www.cirrus7.com/produkte/cirrus7-incus/
Or the ear bone: https://en.m.wikipedia.org/wiki/Incus
Is there some kind of Terraform/Pulumi integration to make it easy to deploy stuff to some VM running Incus for my deployments? Or I'm missing the point of what Incus is for?
There is a Terraform provider that is actively maintained, in addition to Ansible integration. https://linuxcontainers.org/incus/docs/main/third_party/
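A minimal sketch with that provider (resource and attribute names follow the `lxc/incus` provider's documentation as I understand it; treat the details as assumptions to verify against the registry docs):

```hcl
terraform {
  required_providers {
    incus = {
      source = "lxc/incus"
    }
  }
}

# A single container instance; a VM would set type = "virtual-machine"
resource "incus_instance" "web" {
  name  = "web"
  image = "images:debian/12"

  config = {
    "limits.memory" = "1GiB"
  }
}
```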
I'm a Pulumi user myself, and I haven't seen a Pulumi provider for Incus yet. Once I get further into my Incus experiments, if someone hasn't made an Incus provider yet, I'll probably go through the TF provider conversion process.
Incus is like cloud management software, especially in cluster mode. It has a management API like many cloud services. So yes, there's a Terraform provider for Incus, which can be used to configure and provision instances. Guest setup can be managed using cloud-init; Ansible is another option for this.
Or even one of each, since cloud-init has an ansible hook: https://cloudinit.readthedocs.io/en/latest/reference/modules... for something like ansible-pull behavior
You could use cloud-init
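For example, via the `cloud-init.user-data` instance config key (the package list is arbitrary, and this assumes an image variant that ships cloud-init, such as the `/cloud` variants on the images: remote):

```sh
# Write a standard cloud-config document
cat > user-data.yaml <<'EOF'
#cloud-config
packages:
  - nginx
runcmd:
  - systemctl enable --now nginx
EOF

# Launch an instance with it applied on first boot
incus launch images:ubuntu/22.04/cloud web \
  --config=cloud-init.user-data="$(cat user-data.yaml)"
```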