Comment by imiric

> I’m arguing that the prevalence of Docker is strong evidence that the “Linux model” has fundamentally failed.

That is a very silly argument, considering that Docker is built on primitives that Linux itself exposes: namespaces, cgroups, and union filesystems. All Docker does is make them accessible through a friendly interface and add some nice abstractions on top, such as images.
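
For instance, here's a minimal sketch (C, assuming a Linux host; it needs root or an unprivileged user namespace, and error handling is trimmed) of the kind of kernel primitive Docker wraps:

    /* Minimal sketch: the isolation Docker builds on is a plain Linux
       syscall. unshare() moves this process into new mount and UTS
       (hostname) namespaces; Docker layers images, networking, and a
       nicer interface on top of primitives like this. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        if (unshare(CLONE_NEWNS | CLONE_NEWUTS) != 0) {
            perror("unshare");          /* typically needs root */
            return 1;
        }
        sethostname("container", 9);    /* invisible outside the new namespace */
        execlp("/bin/sh", "sh", (char *)NULL);
        perror("execlp");
        return 1;
    }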

It's also silly because there is no single "Linux model". There are many different ways of running applications on Linux, depending on the environment, security requirements, user preference, and so on. The user is free to simply compile software on their own if they wish. This versatility is a strength, not a weakness.

Your argument seems to be against package managers as a whole, so I'm not sure why you're attacking Linux. There are many ecosystems where dependencies are not vendored and a package manager is useful, others where the reverse is true, and some where both approaches coexist.

There are very few objectively bad design decisions in computing. They're mostly tradeoffs. Choosing a package manager vs. vendoring is one such tradeoff. So we can argue endlessly about it, or we can save ourselves some time and agree that both approaches have their merits and drawbacks.

> That is a very silly argument considering that Docker is built on primitives that Linux exposes

No.

I am specifically talking about the Linuxism where systems have a global pool of shared libraries in one of several common locations (that ever so slightly differs across distros because fuck you).

Windows and macOS don’t do this. I don’t pollute system32 with a kajillion random ass DLLs. A Windows PATH is relatively free of random shit (less so when Linux-first software is involved). Stuffing a million libraries into /usr/lib or other shared library search paths is a Linuxism. I think this Linuxism is bad, and that it’s so bad everyone now has to use Docker just to reliably run a computer program.
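
To be concrete about the global pool I mean, here is a rough sketch (assuming glibc’s dynamic loader and that a common library such as libz happens to be installed; on older glibc you may need to link with -ldl):

    /* A bare soname handed to dlopen() is resolved from the system-wide
       search path (DT_RUNPATH, LD_LIBRARY_PATH, the ld.so.conf dirs,
       then /lib and /usr/lib): the shared global pool in question.
       libz.so.1 is just an example of a library most distros ship. */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        void *h = dlopen("libz.so.1", RTLD_NOW);
        printf("libz.so.1 was %s via the global search path\n",
               h ? "found" : "not found");
        if (h) dlclose(h);
        return 0;
    }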

Package managers for the libraries used to compile programs are a different scenario, one I’ve not talked about in this thread. Although, since you’ve got me ranting, the Linuxisms that GCC and Clang follow are also fucking terrible. Linking against whatever random ass version of glibc is on the system is fucking awful software engineering. This is why people also make Docker images of their build environment! Womp womp, sad trombone, everyone is fired.
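
If you want to see that binding in action, a tiny sketch (glibc-specific; gnu_get_libc_version() does not exist on musl or other libcs):

    /* A dynamically linked binary binds to whatever glibc the host
       ships. This prints the version it is actually running against.
       Build the same binary on a newer glibc and run it on an older
       one, and it usually refuses to start with "GLIBC_2.xx not found". */
    #include <gnu/libc-version.h>
    #include <stdio.h>

    int main(void) {
        printf("running against glibc %s\n", gnu_get_libc_version());
        return 0;
    }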

I don’t blame Linux for making bad decisions. It was the 80s and no one knew better. But it is indeed an extremely bad set of design decisions. We all live with historical artifacts and cruft. Not everything is a tradeoff.

  • > I am specifically talking about the Linuxism where systems have a global pool of shared libraries in one of several common locations (that ever so slightly differs across distros because fuck you).

    > Windows and macOS don’t do this.

    macOS does in fact have a /usr/lib. It's treated as off-limits to third parties, but there's always /usr/local/lib and the like for distributing software that isn't bundled with macOS, just like on any other Unix operating system. The problem you're naming is just as relevant to FreeBSD Ports as it is to Debian.

    And regardless, a global library pool isn't a commitment Nix shares, and its problems are not problems Nix suffers from. It's not at all inherent to package management, including on Linux. See Nix, Guix, and Spack for significant, general-purpose, working examples that don't fundamentally rely on abstractions like containers for deployment.

    I totally agree with this, though, and so does everyone who's into Nix:

    > Stuffing a million libraries into /usr/lib [...] is bad.

    > I don’t blame Linux for making bad decisions. It was the 80s and no one knew better. But it is indeed an extremely bad set of design decisions. We all live with historical artifacts and cruft. Not everything is a tradeoff.

  • > Windows and macOS don’t do this. I don’t pollute system32 with a kajillion random ass DLLs.

    You can't be serious. Are you not familiar with the phrase "DLL hell"? Windows applications do indeed dump random ass DLLs into system32, and depend on them, to this day. Install any game and it will scatter random DLLs all over the system. Want to run an app built with Visual C++, or one that depends on C++ libraries? Good luck tracking down whatever version of the MSVC runtime you need to install...

    Microsoft and the community realized this is a problem, which is why most Windows apps are now deployed via Chocolatey, Scoop, WinGet, or the MS Store.

    So, again, your argument is nonsensical when focused on Linux. If anything, Linux does this better than other operating systems since it gives the user the choice of how they want to manage applications. You're not obligated to use any specific package manager.

    • > which is why most Windows apps are now deployed via Chocolatey, Scoop, WinGet, or the MS Store

      rofl. <insert meme of Inglourious Basterds three fingers>

      > Good luck tracking down whatever version of the MSVC runtime you need to install...

      Perhaps back in 2004 this was an issue. That was a long time ago.

      You use a lot of relevant buzzwords. But it’s kinda obvious you don’t know what you’re talking about. Sorry.

      > Linux does this better than other operating systems since it gives the user the choice of how they want to manage applications

      I would like all Linux programs to reliably run when I try to run them. I never want to track down or manually install any dependency, ever. I would like installing new programs to never, under any circumstances, break any previously installed program.

      I would also like a program compiled for Linux to just work on all POSIX-compliant distros. Recompiling for different distros is dumb and unnecessary.

      I’d also like to be able to trivially cross-compile for any Linux target from any machine (Linux, macOS, or Windows). glibc devs should be ashamed of what they’ve done.
