Comment by flohofwoe
12 hours ago
Dynamic libraries make a lot of sense as an operating system interface when they guarantee a stable API and ABI (see Windows for how to do that) - the other scenario where DLLs make sense is plugin systems. But that's pretty much it; for anything else static linking is superior because it doesn't present an optimization barrier (especially for dead code elimination).
No idea why glibc can't provide API+ABI stability, but on Linux it always comes down to glibc-related "DLL hell" problems (e.g. not being able to run an executable that was created on a more recent Linux system on an older Linux system, even when the program doesn't access any new glibc entry points - the usually advised solution is to link against an older glibc version, but that's also not trivial, unless you use the Zig toolchain).
TL;DR: It's not static vs dynamic linking, just glibc being an exceptionally shitty solution as an operating system interface.
Static linking is also an optimization barrier.
LTO is really a different thing, where you recompile when you link. You could technically do that as part of the dynamic linker too, but I don't think anyone is doing it.
There is a surprisingly high number of software development houses that don't (or can't) use LTO, whether because of secrecy, scalability issues, or simply not having good enough build processes to ensure they don't breach the ODR (one-definition rule).
> (e.g. not being able to run an executable that was created on a more recent Linux system on an older Linux system even when the program doesn't access any new glibc entry points - the usually advised solution is to link against an older glibc version, but that's also not trivial, unless you use the Zig toolchain).
In the era of containers, I do not understand why this is "not trivial". I could even do it with a chroot.
Linking against an older glibc means setting up an older distribution and accepting all the outdated toolchains and libraries that come with it. Need to upgrade? Get ready to compile everything from source and possibly bootstrap a toolchain. I wouldn't call this trivial.
The fact that you need to use a container/chroot on Linux in the first place makes the process non-trivial, when all you have to do on Windows is click a button or two.
Genuine question - are there examples (research? old systems?) of the interface to the operating system being exposed differently than a library? How might that work exactly?
Strictly speaking, Linux's operating system APIs are exposed via the syscall ABI. That nearly everyone uses glibc is mostly a historical artifact of GNU being the most ubiquitous userspace. The kernel ships a nolibc library that is basically the Linux syscall interface without any of the libc.
I do not think it is difficult to compile against older glibc versions by using a container.