
Comment by linksnapzz

4 months ago

No. Back before dynamic objects, for instance, it was easier; of course, there were other challenges at the time.

So perhaps the Linux choice of dynamic by default is partly to blame for dependency hell, and thus the rise of cloning entire systems to isolate a single program?

Ironically, one of the arguments for dynamic linking is memory efficiency and small executable size (the other is the ease of centrally updating, say if you needed to eliminate a security bug).

  • See...there's the thing; dynamic linking was originally done by Unixen in the '80s, way before Linux, as a way to cope w/ original X11 on machines that had only 2-4MB of RAM.

    X was (in)famous for memory use (see the chapter in 'The Unix-Haters Handbook'); and shared libs were the consensus on how to make the best of a difficult situation, see:

    http://harmful.cat-v.org/software/dynamic-linking/

    • According to your link (great link, BTW), Rob Pike said dynamic linking for X was a net negative on memory and speed, and only had a tiny advantage in disk space.

      My preference is to bring dependencies in at the source code level and compile them into the app. That stops the massive library-level dependency trees: A needs part of B, but because some other part of B needs C, the dependency tool brings in C, and then D, and so on.
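      A minimal sketch of why source/object-level linking avoids this (assumes a Unix-ish system with `cc`, `ar`, and `nm`; all file and symbol names here are made up for illustration): the static linker only extracts archive members that resolve symbols the program actually references, so the unused "part of B" and its transitive dependency never enter the binary.

      ```shell
      set -e
      dir=$(mktemp -d); cd "$dir"

      printf 'int a(void){return 1;}\n' > a.c                              # the part of "B" we need
      printf 'int helper(void); int b(void){return helper();}\n' > b.c    # the part we never call
      printf 'int helper(void){return 2;}\n' > helper.c                   # b's transitive dependency
      printf 'int a(void); int main(void){return !a();}\n' > main.c       # app only calls a()

      cc -c a.c b.c helper.c
      ar rcs libdemo.a a.o b.o helper.o   # the archive holds all three members
      cc -o app main.c libdemo.a          # linker extracts only a.o to satisfy a()

      nm app                              # symbol table has a, but neither b nor helper
      ```

      A shared `libdemo.so`, by contrast, is mapped as a unit at run time, so whatever `helper` in turn depends on must also be present on the target system.
      
      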
