Comment by DrScientist

4 months ago

And what was the reason for dependency hell?

Was it always so hard to build the software you needed on a single system?

Because our computers have global state all over the place, and people like it, as it simplifies a lot of things.

You could see that history repeat itself in Python: "pip install something" is way easier than messing with virtualenvs, and it even works pretty well as long as the number of packages is small, so it was the recommendation for a long time. Over time, as the number of Python apps on the same PC grew, and as libraries gained incompatible versions, people realized it's a much better idea to keep each app isolated in its own virtualenv, and now there are tools (like "uv" and "pipx") that make it trivial to do.
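For concreteness, here is roughly what that shift looks like (a sketch; the package names are arbitrary examples):

    # The old habit: install into the global interpreter (easy, but
    # every app on the machine shares one set of package versions).
    pip install requests

    # Per-project isolation with the standard library's venv module.
    python -m venv .venv
    . .venv/bin/activate
    pip install requests

    # Newer tools make per-app isolation trivial:
    pipx install httpie               # each CLI app gets its own venv
    uv venv && uv pip install requests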

But there are no default "virtualenvs" for a regular OS. Containers come closest. nix tries hard, but it is fighting an uphill battle: it goes very much "against the grain" of *nix systems, so the build script of every app it packages needs to be updated to work with it. Docker is just so much easier to use.
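Roughly the difference in effort (a sketch; the image and package names are arbitrary examples):

    # Docker: clone an entire userland; the app's existing build
    # scripts run unmodified inside it.
    docker run --rm -it debian:bookworm bash

    # nix: per-package isolation without a container, but the app
    # must first be packaged nix's way.
    nix-shell -p python3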

Golang has no dynamic code loading, so a lot of the time it can be used without containers. But there is still global state (/etc/pki, /etc/timezone, mime.types, /usr/share/, random Linux tools the app might call, etc.), so some people still package it in Docker.
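One way to see that global state leaking into a supposedly self-contained binary (a sketch; ./myapp is a hypothetical Go binary, and the flags are standard ldd/strace usage):

    # Check whether the binary links against shared libraries at all;
    # a pure-Go build usually doesn't.
    ldd ./myapp

    # Even a fully static binary still opens host files at runtime.
    strace -e trace=openat ./myapp 2>&1 | grep -E '/etc|/usr/share'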

No. Back before dynamic objects, for instance, it was easier; of course, there were other challenges at the time.

  • So perhaps the Linux choice of dynamic linking by default is partly to blame for dependency hell, and thus for the rise of cloning entire systems to isolate a single program?

    Ironically, one of the arguments for dynamic linking is memory efficiency and small executable size (the other is the ease of updating centrally, say if you needed to eliminate a security bug).
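    To make the tradeoff concrete (a sketch; hello.c stands for any trivial C program, and exact sizes vary by system):

        # Dynamic linking (the default): small binary, and libc is
        # shared by every running process and can be patched centrally.
        cc -o hello-dyn hello.c

        # Static linking: self-contained, but each program carries its
        # own copy of libc and must be rebuilt to pick up a fix.
        cc -static -o hello-static hello.c

        # Compare sizes; the static binary is typically much larger.
        ls -l hello-dyn hello-static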