Comment by forrestthewoods
12 hours ago
False. The exact opposite of bad.
The “system” should provide the barest minimum of libraries. Programs should ship as many of their dependencies as is technically feasible.
Oh what’s that? Are you crying about security updates? Yeah well unfortunately you shipped everything in a Docker container so you need to rebuild and redeploy all of your hierarchical images anyways.
> Programs should ship as many of their dependencies as is technically feasible.
Shipping in a container just is "ship[ping] as many [...] dependencies as is technically feasible". It's "all of them except the kernel". The "barest minimum of libraries" is none.
Someone who's using Docker is already doing what you're describing anyway. So why are you scolding them as if they aren't?
> False. The exact opposite of bad.
I don't mind stable base systems, I don't mind slow and well tested updates, I actively like holding stable ABIs, but if you haven't updated anything in 4 years, then you are missing bug and security fixes. Not everything needs to be Arch, but this opposite extreme is also bad.
> The “system” should provide the barest minimum of libraries. Programs should ship as many of their dependencies as is technically feasible.
And then application developers fail to update their vendored dependencies, and thereby leave their users exposed to vulnerabilities. (This isn't hypothetical, it's a thing that has happened.) No, thank you.
> Oh what’s that? Are you crying about security updates? Yeah well unfortunately you shipped everything in a Docker container so you need to rebuild and redeploy all of your hierarchical images anyways.
So... are you arguing that we do need to ship everything vendored in so that it can't be updated, or that we need to actually break out packages to be managed independently (like every major Linux distribution does)? Because you appear to have advocated for vendoring everything, and then immediately turned around to criticize the situation where things get vendored in.
> I don't mind stable base systems, I don't mind slow and well tested updates, I actively like holding stable ABIs, but if you haven't updated anything in 4 years, then you are missing bug and security fixes.
I'm not sure GP's claim here about the runtime not changing in 4 years is actually true. There hasn't been a version number bump, but files in the runtime have certainly changed since its initial release in 2021, right? See: https://steamdb.info/app/1628350/patchnotes/
It looks to me like it gets updated all the time, but they just don't change the version number because the updates don't affect compatibility. It's kinda opaque though, so I'm not totally sure.
> So... are you arguing that we do need to ship everything vendored in so that it can't be updated,
I’m arguing that the prevalence of Docker is strong evidence that the “Linux model” has fundamentally failed.
Many people disagree with that claim and think that TheLinuxModel is good actually. However, I'd point out that these people almost certainly make extensive use of Docker, and that Docker (or something similar) is actually necessary to reliably run programs on Linux because TheLinuxModel is so bad and has failed so badly.
If you believe in TheLinuxModel and also do not use Docker to deploy your software then you are, in the year 2025, a very rare outlier.
Personally, I am very pro ShipYourFuckingDependencies. But I also don't think that deploying a program should be much more complicated than sharing an uncompressed zip file. Docker adds a lot of cruft. Packaging images/zips/deployments should be near instantaneous.
> Many people disagree with that claim and think that TheLinuxModel is good actually. However, I'd point out that these people almost certainly make extensive use of Docker
You've got the wrong audience here. Nix people are neither big fans of "the Linux model" (because Nix is founded in part on a critique of the FHS, a core part and source of major problems with "the Linux model") nor rely heavily on Docker to ship dependencies. But if by "the Linux model" you just mean not promising a stable kernel ABI, pulling an OS together from disparate open-source projects, and key libraries not promising eternal API stability, it might have some relevance to Nixers...
> I also don't think that deploying a program should be much more complicated than sharing an uncompressed zip file. Docker adds a lot of cruft. Packaging images/zips/deployments should be near instantaneous.
Your sense of "packaging" conflates two different things. One aspect of packaging is specifying dependencies and how the software gets built in the first place, in a very general way. This is the hard part of packaging for cohesive software distributions that have package managers. (This is generally not done on platforms like Windows, at least not in a unified or easily inspectable format.) This is what an RPM spec does, what the definition of a Nix package does, etc.
The other part is getting built artifacts, in whatever format you have them, into a deployable format. I would call this something like "packing" (like packing an archive) rather than "packaging" (which involves writing some kind of code specifying dependencies and build steps).
If you've done the first step well— by, for instance, writing and building a Nix package— the second step is indeed trivial and "near instantaneous". This is true whether you're deploying with `nix-copy-closure`/`nix copy`, which literally just copy files[1][2], or creating a Docker image, where you can just stream the same files to an archive in seconds[3].
And the same packaging model which enables hermetic deployments, like Docker but without requiring the use of containers at all, does still allow keeping only a single copy of common dependencies and patching them in place.[4]
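To make that second step concrete, here's a rough sketch assuming a package that has already been built with Nix (the host and the `.#helloImage` attribute are made up for illustration):

    # Copy the built closure (the package plus everything it links against)
    # straight into another machine's /nix/store over SSH:
    nix-copy-closure --to user@example-host ./result
    # or, with the newer CLI:
    nix copy --to ssh://user@example-host ./result

    # Or stream the same store paths into a Docker image without a Dockerfile,
    # e.g. via dockerTools.streamLayeredImage (attribute name is illustrative):
    nix build .#helloImage && ./result | docker load

No rebuild of a base image hierarchy, no Dockerfile; the closure already describes exactly what needs to ship.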
--
1: https://nix.dev/manual/nix/2.30/command-ref/nix-copy-closure...
2: https://nix.dev/manual/nix/2.30/command-ref/new-cli/nix3-cop...
3: https://github.com/nlewo/nix2container
4: https://guix.gnu.org/blog/2020/grafts-continued/
> I’m arguing that the prevalence of Docker is strong evidence that the “Linux model” has fundamentally failed.
That is a very silly argument considering that Docker is built on primitives that Linux exposes. All Docker does is make them accessible via a friendly UI, and adds some nice abstractions on top such as images.
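For what it's worth, those primitives are things like namespaces, cgroups, and union filesystems, and you can poke at them directly without Docker. A rough sketch using util-linux's unshare (needs root, or unprivileged user namespaces enabled):

    # Start a shell in its own PID, mount, and hostname (UTS) namespaces,
    # roughly the isolation Docker sets up (minus images and cgroup limits):
    sudo unshare --pid --fork --mount-proc --uts /bin/bash

    # Inside it, this shell is PID 1 and hostname changes stay local:
    hostname container-demo && ps -ef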
It's also silly because there is no single "Linux model". There are many different ways of running applications on Linux, depending on the environment, security requirements, user preference, and so on. The user is free to simply compile software on their own if they wish. This versatility is a strength, not a weakness.
Your argument seems to be against package managers as a whole, so I'm not sure why you're attacking Linux. There are many ecosystems where dependencies are not vendored and a package manager is useful, others where the reverse is true, and some where both approaches coexist.
There are very few objectively bad design decisions in computing. They're mostly tradeoffs. Choosing a package manager vs vendoring is one such scenario. So we can argue endlessly about it, or we can save ourselves some time and agree that both approaches have their merits and detriments.