Comment by flohofwoe

14 hours ago

Dynamic libraries make a lot of sense as an operating system interface when they guarantee a stable API and ABI (see Windows for how to do that). The other scenario where DLLs make sense is plugin systems. But that's pretty much it; for anything else, static linking is superior because it doesn't present an optimization barrier (especially for dead code elimination).

No idea why glibc can't provide API+ABI stability, but on Linux it always comes down to glibc-related "DLL hell" problems (e.g. not being able to run an executable built on a more recent Linux system on an older one, even when the program doesn't touch any new glibc entry points). The usually advised solution is to link against an older glibc version, but that's also not trivial unless you use the Zig toolchain.

TL;DR: It's not static vs dynamic linking, just glibc being an exceptionally shitty solution as an operating system interface.

Static linking is also an optimization barrier.

LTO is really a different thing, where you recompile when you link. You could technically do that as part of the dynamic linker too, but I don't think anyone is doing it.

There is a surprisingly high number of software development houses that don't (or can't) use LTO, either because of secrecy, scalability issues, or simply not having good enough build processes to ensure they don't violate the ODR.

> (e.g. not being able to run an executable that was created on a more recent Linux system on an older Linux system even when the program doesn't access any new glibc entry points - the usually advised solution is to link with an older glibc version, but that's also not trivial, unless you use the Zig toolchain).

In the era of containers, I do not understand why this is "not trivial". I could even do it with a chroot.

  • Linking against an older glibc means setting up an older distribution and accepting all the outdated toolchains and libraries that come with it. Need to upgrade? Get ready to compile everything from source and possibly bootstrap a toolchain. I wouldn't call this trivial.

    The fact that you need to use a container/chroot on Linux in the first place makes the process non-trivial, when all you have to do on Windows is click a button or two.

    • Wouldn't you target whatever is the minimum "supported" glibc you want to run in the first place? What is that you need to recompile?

      Chroot _is_ trivial. I actually use it for convenience; I could just as well install the older toolchains directly on the newer system, but chroot is plain easier. Maybe VS has a button where you can target whatever version MS fancies today ("for a limited time offer"), but what about _any other_ Windows toolchain?

Genuine question - are there examples (research? old systems?) of the interface to the operating system being exposed differently than a library? How might that work exactly?

I do not think it is difficult to compile against older glibc versions by using a container.