Comment by jabl
3 days ago
If you're not wedded to docker-compose, with podman you can instead use the podman kube support, which provides roughly docker-compose equivalent features using a subset of the Kubernetes pod deployment syntax.
Additionally, podman has nice systemd integration for such kube services, you just need to write a short systemd config snippet and then you can manage the kube service just like any other systemd service.
Altogether a very nice combination for deploying containerized services if you don't want to go the whole hog to something like Kubernetes.
(I'm a big podman stan)
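To give a rough idea (the pod name, image, and port here are just placeholders), a minimal pod definition and the commands to run it look something like:

    # pod.yaml -- a small subset of the Kubernetes Pod syntax
    apiVersion: v1
    kind: Pod
    metadata:
      name: whoami
    spec:
      containers:
        - name: whoami
          image: docker.io/traefik/whoami:latest
          ports:
            - containerPort: 80
              hostPort: 8080

    # bring it up / tear it down
    podman kube play pod.yaml
    podman kube down pod.yaml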
Last I tried using the .kube files I ran into issues with specifying container networks (https://github.com/containers/podman/issues/12965).
This is sort of "fixed" by using a Quadlet ".kube" but IMO that's a pretty weak solution and removes the "here's your compose file, run it" aspect.
Recently (now that Debian 13 is out with Podman 5) I have started transitioning to Podman's Quadlet files, which have been quite smooth so far. As you say, it's great to run things without all the overhead of Kubernetes.
(I'm a bigger podman stan)
I agree about quadlets, amazing.
Docker has one of the most severe cases of not-invented-here. All solutions require a new DSL, a new protocol, a new encryption scheme, a new daemon, or some combination thereof. People are sleeping on using buildah directly, which OP alluded to with Bakah (but fell short of just using it directly).
Ever wish you could run multiple commands in a single layer? Buildah lets you do that. Ever wish you could loop or do some other branching in a Dockerfile? Buildah lets you do that. Why? Because they didn't invent something new, so the equivalent of a Dockerfile in buildah is just a script in whatever scripting language you want (probably sh, though).
This will probably give you the general idea: https://www.mankier.com/1/buildah-from
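As a rough sketch of what that looks like in practice (base image and packages are arbitrary here):

    #!/bin/sh
    set -eu

    # start a working container from a base image
    ctr=$(buildah from docker.io/library/alpine:3.20)

    # plain shell: loops, conditionals, multiple commands -- no new DSL
    for pkg in curl ca-certificates; do
        buildah run "$ctr" -- apk add --no-cache "$pkg"
    done

    buildah config --entrypoint '["/usr/bin/curl"]' "$ctr"

    # commit whenever you want an image; everything above lands in one layer
    buildah commit "$ctr" localhost/curl-demo:latest
    buildah rm "$ctr"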
I came across this when struggling and repeatedly failing to get multi-arch containers built in Circle CI a few years ago. You don't have access to an arm64 docker context on their x86 machines, so you are forced to orchestrate that manually (unless your arm64 build is fast enough under qemu). Things rapidly begin to fall apart once you are off the blessed Docker happy path because of their NIH obsession. That's when I discovered buildah, and it made the whole thing a cinch.
Running multiple commands in a single layer has been possible in a Dockerfile for a long time, since format 1.4(?), using a heredoc, which is just a script, netting you loops and branches etc.
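Something like this, if I remember the syntax right (needs BuildKit and the syntax directive):

    # syntax=docker/dockerfile:1.4
    FROM alpine:3.20
    RUN <<EOF
    set -eu
    for pkg in curl ca-certificates; do
        apk add --no-cache "$pkg"
    done
    EOF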
Buildah is elite tooling. It enables you to build with devices and caps and kernel modules. Buildx acts like you should sign a waiver, and has really weak documentation, if any at all, for what you are trying to do.
Aren't buildah and podman themselves a case of NIH too? ;) I mean, they work fine, but I don't think that's an issue with docker either.
On the QEMU thing... the only time I tried to cross-build arm containers from an x86 server was using whatever servers GitHub Actions supports... the x86_64 build was pretty normal for the project, but the qemu/buildx/arm64 build was about the same speed as an 8GB Raspberry Pi 4 building the same project... pretty disappointing.
"...removes the "here's your compose file, run it"
Claude recently hallucinated this for me:
For a brief moment in time I was happy but then:
Can you really use "ComposeService" in the systemd unit file? I can't find any reference to it
You're absolutely right to question that - I made an error. There is no ComposeService directive in systemd or Quadlet.
It would be a nice best of both worlds...
Many moons ago, the concept of the chaos monkey [1] was conceived.
An irrational part of deployment, meant to trigger corner cases and improve the product's stability.
Today, people who out-source thinking to a LLM get the chaos monkey for free.
The only problem seems to be that the LLM proponents are ahistoricists.
1: https://en.wikipedia.org/wiki/Chaos_engineering#Chaos_Monkey
It's exhausting. As someone who doesn't work with systemd, I would have a hard time using LLMs for this topic.
> you just need to write a short systemd config snippet and then you can manage the kube service just like any other systemd service.
Just FYI, `podman generate systemd --files --name mypod` will create all the systemd service files for you.
https://docs.podman.io/en/latest/markdown/podman-generate-sy...
`podman generate systemd` was created as a bandaid because it was so difficult to manually write systemd units.
Quadlets now make it much easier to create the units by hand, and `podman generate systemd` is deprecated.
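For anyone who hasn't run into Quadlets yet, a minimal unit looks roughly like this (image, port, and path are placeholders):

    # ~/.config/containers/systemd/whoami.container
    [Unit]
    Description=Example web container

    [Container]
    Image=docker.io/traefik/whoami:latest
    PublishPort=8080:80

    [Install]
    WantedBy=default.target

After a `systemctl --user daemon-reload`, Podman generates a whoami.service you can start, stop, and enable like any other unit.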
I appreciate the correction. It's been a while since I used podman + systemd. I will definitely be checking out quadlets next time.
Echoing the other comment that quadlet is the way to go here
I am curious about the performance difference between podman and Incus. I found Incus to be extremely flexible as well.
They both utilize all the Linux cgroup magic to containerize, so performance is roughly the same.
Incus is an LXD fork, and focuses on "system" containers. You basically get a full distro, complete with systemd, sshd, etc. etc. so it is easy to replace a VM with one of these.
podman and docker are focused on OCI containers which typically run a single application (think webserver, database, etc).
I actually use them together. My host machine runs both docker and incus. Docker runs my home server utilities (syncthing, vaultwarden, etc) and Incus runs a system container with my development environment in it. I have nested cgroups enabled so the Incus container actually runs another copy of docker _within itself_ for all my development needs (redis, postgres, etc).
What's nice about this is that the development environment can easily be backed up, or completely nuked without affecting my host. I use VS Code remote SSH to develop in it.
The host typically uses < 10GB RAM with all this stuff running.. about half what it did when I was using KVM instead of Incus.
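If anyone wants to replicate the nested setup, the relevant bit is just an instance config flag (distro image and name are whatever you prefer):

    # system container that is allowed to run its own containers inside
    incus launch images:debian/12 devbox -c security.nesting=true

    # then shell in and install docker/podman as you would on a VM
    incus exec devbox -- bash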
If you use the non-LTS branch of Incus it supports OCI containers. Have you tried that instead of running docker inside an LXC container?
These seem like two very different stacks designed to solve quite different problems (incus v podman)
If you are using podman "rootless" mode prior to 5.3 then typically you are going to be using the rootless networking, which is based around slirp4netns.
That is going to be slower and more limited compared to rootful solutions like incus. The easy workaround is to use 'host' networking.
If you are using rootful podman then normal Linux network stack gets used.
Otherwise they are all going to execute at native speed since they all use the same Linux facilities for creating containers.
Note that from Podman 5.3 (November 2024) onward they switched to "pasta" networking for rootless containers, which is a lot better performance-wise.
edit:
There are various other tricks you can use to improve podman "rootless" networking, like using systemd socket activation. That way, if you want to host services, you can set up a reverse proxy and similar things that run at native speed.
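Concretely, the per-container knobs look something like this (exact flags depend on the Podman version; the image is just an example):

    # share the host's network stack and skip user-mode networking entirely
    podman run --rm --network=host docker.io/library/caddy:latest

    # or pick the rootless backend explicitly
    podman run --rm -p 8080:80 --network=pasta docker.io/library/caddy:latest
    podman run --rm -p 8080:80 --network=slirp4netns docker.io/library/caddy:latest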
That is what I do as well. I'd rather not have to remember more than one way of doing things so 'podman play kube' allows me to use Kubernetes knowledge for local / smaller scale things as well.
Isn’t that limited to a single node?
How would you configure a cluster? I’m trying to explore lightweight alternatives to kubernetes, such as docker swarm, but I think that the options are limited if you must support clusters with equivalent of pods and services at least.
I've found you can get pretty far with a couple of fixed nodes and scaling vertically before bringing in k8s these days.
Right now I'm running,
- podman, with quadlet to orchestrate both single containers and `pods` using their k8s-compatible yaml definition
- systemd for other services - you can control and harden services via systemd pretty well
- pyinfra (https://pyinfra.com/) to manage and provision the VMs and services
- Fedora CoreOS as an immutable base OS with regular automatic updates
All seems to be working really well.
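For reference, the Quadlet half of the first bullet is just a small .kube unit pointing at the pod YAML (path is illustrative):

    # ~/.config/containers/systemd/myapp.kube
    [Kube]
    Yaml=/home/core/myapp/pod.yaml

    [Install]
    WantedBy=default.target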
> Isn’t that limited to a single node?
Yes. Though unless you have a very dynamic environment maybe statically assigning containers to hosts isn't an insurmountable burden?
> How would you configure a cluster?
So, unless you have a service that requires a fixed number of running instances that is not the same count as the number of servers, I would argue that maybe you don't need Kubernetes.
For example, I built up a Django web application and a set of Celery workers, and just have the same pod running on 8 servers, and I just use an Ansible playbook that creates the podman pod and runs the containers in the pod.
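Not my exact playbook, but the shape is roughly this, using the containers.podman collection (hostnames and images are made up):

    - hosts: appservers
      tasks:
        - name: Create the application pod
          containers.podman.podman_pod:
            name: myapp
            state: started
            publish:
              - "8000:8000"

        - name: Run the Django web container inside the pod
          containers.podman.podman_container:
            name: myapp-web
            pod: myapp
            image: registry.example.com/myapp/web:latest
            state: started

        - name: Run the Celery worker inside the pod
          containers.podman.podman_container:
            name: myapp-worker
            pod: myapp
            image: registry.example.com/myapp/web:latest
            command: celery -A myapp worker
            state: started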
On the off chance your search didn't expand to k3s, I can semi-recommend it.
My setup is a bit clunky (a Hetzner cloud instance as the controller and a local server as a node, connected through Tailscale). I get an occasional strange error where k3s pods fail to resolve another pod's domain until I re-create the DNS resolver system pod, and I have so far failed at getting Velero backups to work with k3s's local storage providers, but otherwise it is pretty decent.
K3s is light in terms of resources but heavy in operational complexity. I'm not looking for a smaller version of Kubernetes but for a simple way to run container-backed services when you're not Google but a small company: something with few moving parts that is very reliable and low maintenance.
I've been reading and watching videos about how you can use Ansible with Podman as a simpler alternative to Kubernetes. Basically Ansible just SSHs into each server and uses podman to start up the various pods / containers etc. that you specify. I have not tried this yet though so take this idea with a grain of salt.
whew, "alternative" is doing a lot of work there.
Contrast:
With
If you don't happen to have a cluster autoscaler available, feel free to replace the for loop with |head -1 or a break, but I mean to point out that the overall health and availability of the system is managed by kubernetes, but ansible is not that
HashiCorp Nomad is probably the only real alternative. It's what I'm using, and I like it better than the overcomplexity of k8s.
>> lightweight alternatives to kubernetes
microk8s seems exceedingly simple to setup and use. k3s is easy as well.
I once tried Nomad for a very brief moment. Not sure if it fits your bill.
Nomad is weird. Its OSS version is like a very limited trial of the paid version, at least as of the last time I tried it. To the point that it was more productive for me to install k3s instead.