Comment by Jeaye
2 days ago
I don't understand why they don't just statically link their binaries. First, they said this:
> Even if you managed to statically link GLIBC—or used an alternative like musl—your application would be unable to load any dynamic libraries at runtime.
But then they immediately said they actually statically link all of their deps aside from libc.
> Instead, we take a different approach: statically linking everything we can.
If they're statically linking everything other than libc, then using musl or statically linking glibc would finish the job. Unless they need to load shared libs at runtime that aren't already linked into their binary (i.e. manual dlopen), this solves the portability problem on Linux.
What am I missing? (I'm aware of the security implications of statically linked binaries, which they didn't mention as a concern.)
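For concreteness, here's a minimal sketch of the dlopen limitation in question. The build line assumes the musl-gcc wrapper is installed; the behavior noted in the comments reflects musl's documented static-linking behavior, and glibc's warnings about static dlopen.

```c
/* Sketch: what dlopen() does in a fully static binary.
 * Build (assuming the musl-gcc wrapper):
 *   musl-gcc -static -o demo demo.c
 * With a static musl link, dlopen() always fails; with a static glibc
 * link it is possible but fragile, since it needs a matching glibc on
 * the target system. */
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    void *h = dlopen("libm.so.6", RTLD_NOW);
    if (!h) {
        /* In a fully static binary, this branch is the expected outcome. */
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    printf("loaded shared lib at %p\n", h);
    dlclose(h);
    return 0;
}
```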
And please, statically linking everything is NOT a solution -- the only reason I can still run some 20-year-old games on my recent Linux is that they didn't stupidly statically link everything, so I at least _can_ replace the libraries with hooks that make the games work with newer versions.
As long as the library is available.
Neither static nor dynamic linking aims to solve the 20-year-old-binary problem, so each has its own issues.
But I think it's easier for me to find a 20-year-old Red Hat/Slackware ISO where I can simply run the statically linked binary. Dependency hell on older distros becomes really difficult when the old packages are no longer archived anywhere.
I've recently had to do this (to bisect when a change introduced a superficial bug into a 20-year-old program). I think "simply run" is viewing Linux of that era through rose-tinted glasses.
Even for simple 2D "Super VGA" you need to choose the correct XFree86 implementation and still tweak your Xorg configuration. The emulated hardware also has bugs, since most of the focus is now on virtio drivers.
(The 20-year-old program was linked against libsdl, which amusingly means on my modern system it supports Wayland with no issues.)
It's interesting to think that a 20-year-old OS plus one program is probably a smaller bundle than many modern Electron apps ostensibly built "for cross-platform compatibility". Maybe microkernels are the way.
Debian archives all of our binaries (and source) here:
https://snapshot.debian.org/
Some things built on top of that:
https://manpages.debian.org/man/debsnap
https://manpages.debian.org/man/debbisect
https://wiki.debian.org/BisectDebian
https://metasnap.debian.net/
https://reproduce.debian.net/
Software running for 20 years is not always a reasonable requirement.
But sometimes it is. And even if it's not a requirement, it might be nice to have.
How do you troubleshoot and figure that out?
Dynamic linking obviously has benefits, or there would be no incentive to build dynamic libraries or provide the capacity for them.
The problem is that they also have drawbacks, which motivate people to statically link.
I remember back in the Amiga days when there were multiple libraries that provided file requesters. At one point I saw a unifying file requester library that implemented the interfaces of multiple others so that all requesters had the same look.
As far as I'm aware, that hasn't been done on Linux, partly because of the problems with Linux dynamic libraries.
I think the answer isn't just static linking.
I think the solution is a commitment.
If you are going to make a dynamic library, commit to backwards compatibility. If you can't provide that, that's ok, but please statically link.
Perhaps making a library with a base-level, forever-backwards-compatible interface, plus a static version for breaking changes, would help. That might allow for a blend of bug support and adding future features.
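For what it's worth, GNU symbol versioning is one existing mechanism for that kind of commitment (it's how glibc keeps old binaries working). A minimal sketch -- the FOO_1.0/FOO_2.0 version names and do_thing are invented for illustration, and a matching linker version script (passed via -Wl,--version-script) is also required:

```c
/* Two implementations live in the same .so: old binaries keep the
 * v1 behavior they were linked against, new builds get v2. */

/* Frozen old behavior: programs linked against FOO_1.0 keep this. */
int do_thing_v1(int x) { return x + 1; }
__asm__(".symver do_thing_v1, do_thing@FOO_1.0");

/* Current behavior: @@ marks the default for newly linked programs. */
int do_thing_v2(int x) { return 2 * x + 1; }
__asm__(".symver do_thing_v2, do_thing@@FOO_2.0");
```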
At least for some apps, perhaps Wine and the Win32 API are the answer.
OpenGL and Vulkan are provided by the GPU vendor; you can't statically link them.
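Right -- the usual pattern is to resolve the vendor-provided API at runtime instead, which itself depends on a working dlopen(), i.e. a dynamic libc. A minimal sketch; libvulkan.so.1 and vkGetInstanceProcAddr are the standard loader SONAME and entry point, everything else is trimmed:

```c
#include <dlfcn.h>
#include <stdio.h>

typedef void (*PFN_vkVoidFunction)(void);
typedef PFN_vkVoidFunction (*PFN_vkGetInstanceProcAddr)(void *instance,
                                                        const char *name);

int main(void) {
    /* Load the system's Vulkan loader, which dispatches to the
     * vendor's installable client driver. */
    void *loader = dlopen("libvulkan.so.1", RTLD_NOW | RTLD_LOCAL);
    if (!loader) {
        fprintf(stderr, "no Vulkan loader: %s\n", dlerror());
        return 1;
    }
    PFN_vkGetInstanceProcAddr get_proc =
        (PFN_vkGetInstanceProcAddr)dlsym(loader, "vkGetInstanceProcAddr");
    printf("vkGetInstanceProcAddr at %p\n", (void *)get_proc);
    return 0;
}
```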
Static linking makes it impossible for a library to evolve external to applications. That’s not a great outcome for a lot of reasons.
musl and glibc static links are their own Pandora’s box of pain and suffering. They don’t “just work” like you’d hope and dream.
This blows my mind: that in 2025 we still struggle with a task as simple as "read in a string, parse it, and query a cascade of resolvers to discover its IP". I just can't fathom how that is a difficult problem, or why DNS is still notorious for causing so much pain and suffering, especially compared to the advancements in hardware, graphics, and so many other areas.
There are resolvers not just for DNS but for users and other lookups. The list of resolvers is dynamic; they are configured in /etc/nsswitch.conf. The /etc/hosts lookup is part of the system.
Where do the resolvers come from? It needs to be possible to install resolvers separately and dynamically load them, unless you want NIS always installed. Better to install LDAP only for those who need it.
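Concretely, glibc loads those resolvers as one shared object per service named in nsswitch.conf, following the documented libnss_<service>.so.2 / _nss_<service>_* naming convention. A rough illustration (not glibc's actual code; error handling omitted):

```c
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    const char *service = "dns";  /* e.g. from "hosts: files dns" */
    char soname[64], symbol[80];
    snprintf(soname, sizeof soname, "libnss_%s.so.2", service);
    snprintf(symbol, sizeof symbol, "_nss_%s_gethostbyname2_r", service);

    void *mod = dlopen(soname, RTLD_LAZY);       /* fails in a static binary */
    void *fn  = mod ? dlsym(mod, symbol) : NULL; /* resolver entry point */
    printf("%s -> %p, %s -> %p\n", soname, mod, symbol, fn);
    return 0;
}
```

This is exactly why a fully static binary can't respect the system's resolver configuration: the modules are only discoverable via dlopen.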
Various things including name (DNS) resolution rely on dynamic linking.
Are you saying that a statically linked binary cannot make an HTTP request to `google.com` because it would be unable to resolve the domain name?
There are entire distros, like Alpine, built on musl. I find this very hard to believe.
All versions of musl prior to 1.2.4 (released less than two years ago) would indeed fail to perform DNS lookups in many common cases, and a lot of programs could not run on musl as a result. (I'm not aware of what specific deficiencies remain in musl, but given that history, even where there are explicit standards, I am confident there are more.) This wasn't related to dynamic linking, though.
Glibc's NSS is mostly relevant for LANs, which covers a lot of corporate and home networks.
You have to bundle your own resolver into your application. But here's the rub: users expect your application to respect nsswitch, which requires loading shared libs that execute arbitrary code. How Go handles this is somewhat awkward: it parses /etc/nsswitch.conf and decides whether it can cheat and use its own resolver based on what modules it sees[1]. Otherwise it farms out to cgo to go through glibc.
[1] They're playing with fire here, because you can't really assume to know for sure how the 'dns' module behaves. A user could replace the lib that backs it with their own that resolves everything to zombo.com. It would be one thing if nsswitch described well-defined behavior that could be emulated, but it doesn't; it specifies a specific implementation.
The configuration of DNS resolution on Linux is quite complicated [1]. Musl just ignores all that. You can build a distro that works with musl, but a static musl binary dropped into an arbitrary Linux system won't necessarily work correctly.
[1]: https://news.ycombinator.com/item?id=43451861
The easy and conforming way to do that would be to call "getent hosts google.com" and use the answer. But this only works for simple use cases where you just need some IPv4/IPv6 address; you can't get other kinds of DNS records, like MX or TLSA, this way.
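A minimal sketch of that approach from C, assuming getent(1) is on PATH. Resolution goes through NSS in a separate process, so the calling binary can stay fully static:

```c
#include <stdio.h>

int main(void) {
    /* Let getent(1) do the NSS-aware lookup in its own process. */
    FILE *p = popen("getent hosts google.com", "r");
    if (!p) return 1;
    char line[512];
    while (fgets(line, sizeof line, p))
        fputs(line, stdout);  /* lines look like "<address> google.com" */
    return pclose(p);
}
```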