Docker's what lets me spend more time using the software on my server than fiddling with it. I got the fiddling out of my system years ago, I just want shit to work now.
I don't really care about Docker per se, but having a cross-distro unified daemon config & supervisor, a package manager, and the ability to cram every single important file into a file tree (again, using the same interface for every daemon) that contains only those files makes backups and restores trivial. It's also easy to verify I actually got all the important files: destroy the image, re-create it, and if everything still looks good, I got everything. I no longer put off trying a new service until the weekend because it'll take an unknown amount of time that could end up being hours. Odds are I can have anything in the official Docker registry (which is approximately everything, these days) up in five minutes flat to try it out, and it may not even need any further modifications to be ready for (personal) "production".
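The destroy-and-recreate check works because all persistent state sits in bind mounts. A minimal compose sketch (the image name and paths here are hypothetical placeholders):

```yaml
# Hypothetical service: everything worth keeping lives under ./data,
# so a backup is just a copy of that one directory tree.
services:
  someapp:
    image: example/someapp:latest   # hypothetical image
    ports:
      - "8080:8080"
    volumes:
      - ./data:/var/lib/someapp     # the only state that matters
```

Then `docker compose down`, delete the container and image, `docker compose up -d` again: if the service comes back intact, `./data` really did hold everything.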
I use Debian but don't even care about the distro: I haven't had to touch systemd once (thank god), and the only Debian parts I actually use are its ZFS, SSH, and Docker. My ten or so user-facing services are all just pulled and managed via Docker, ready to transfer to any other distro seamlessly should I ever care to. Even Samba is under Docker (oh my god, it is so much easier to configure for common use-cases this way).
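For common use-cases, Samba-in-Docker can boil down to a compose fragment like the one below. This assumes a community image along the lines of dperson/samba, whose `-s` flag packs the share definition into one semicolon-separated string; check the image's own README for the exact field order before copying this:

```yaml
# Sketch of Samba under Docker (image and flag syntax assumed from
# the dperson/samba conventions; host path is hypothetical).
services:
  samba:
    image: dperson/samba
    ports:
      - "445:445"
    volumes:
      - /tank/media:/share                      # hypothetical host path
    command: '-s "media;/share;yes;no;yes"'     # name;path;browseable;readonly;guest
```

The whole smb.conf dance collapses into one line of compose instead of a hand-edited ini file.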
(I would definitely be using FreeBSD on my server if I cared about anything other than Docker, though—I haven't actually liked Linux for about fifteen years now)
I've fixed too many linux-isms in code manually over the years. pkg/ports does what it should, but since *nix tradition relies on hardcoded paths, I often wanted out-of-tree builds for various software.
And that's the thing, as I grow older I feel more and more that I just want the parts of computing that I don't want to _care_ about to be stupid simple.
If I'm doing a program needing a recent version of some language that doesn't have a FreeBSD port yet and some database behind it, I don't really want to configure it all manually because I don't particularly care for porting the runtime or managing the database (that isn't exposed to the outside world anyhow).
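That setup (recent runtime plus a database that never faces the outside world) is a two-service compose file. Image tags, paths, and the entrypoint below are placeholders; the key point is that the db service publishes no ports, so it's only reachable from the app over the compose network:

```yaml
# Hypothetical app on a recent runtime, with a private Postgres behind it.
services:
  app:
    image: node:22                  # whatever recent runtime the host OS lacks
    working_dir: /app
    volumes:
      - ./app:/app
    command: node server.js         # hypothetical entrypoint
    ports:
      - "3000:3000"                 # the only thing exposed to the host
    depends_on:
      - db
  db:
    image: postgres:17
    environment:
      POSTGRES_PASSWORD: example    # placeholder; use a secret in practice
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    # no "ports:" section -- the DB is invisible outside the compose network
```

No porting the runtime, no hand-managing the database, and upgrading either one is just bumping a tag.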
This is stuff where I don't want a big CI pipeline or other management overhead (or to have to remember to upgrade the packages whenever I upgrade the hosting OS).
Stuff like this is why "the clouds are winning": friction should scale with how much effort I actually want to put into managing something.
But going with real hardware or even a VPS places an "upgrade tax" on me, because I can't just let non-public services like an isolated DB ride along over major versions (maybe jails with isolated userlands could be an option, but those become painful instead when I need newer versions of the application behind the veil).
docker and flatpak/snap are _extremely_ different tools with very different purposes.