
Comment by iberator

12 hours ago

Yup. Just compile it as a static executable. Static binaries are very undervalued imo.

The "just" is doing a lot of heavylifting here (as detailed in the article), especially for anything that's not a trivial cmdline tool.

  • In my experience it's usually an issue with legacy code that relied on dlopen, either to implement a plugin system or to help with startup, since plugins could be lazy-loaded on demand and the program could start faster (a minimal dlopen sketch follows this sub-thread).

    If you forgo the requirement of a runtime plugin system, is there anything realistically preventing greenfield projects from just being fully statically linked, assuming their dependencies don't rely on dlopen?

    • It becomes tricky when you need system DLLs like X11 or GL/Vulkan, which requires the 'hacks' described in the article to work around. The problem is that those system DLLs then bring a dynamically linked glibc into the process, so suddenly you have two C stdlibs running side by side, and the question is whether that works just fine or causes subtle breakage under the hood (this is e.g. why MUSL doesn't support dlopen from statically linked binaries).

      E.g. in my experience, command-line tools are fine to link statically with MUSL, but as soon as you need a window and 3D rendering it's not worth the hassle.

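  For reference, a minimal sketch of the dlopen/dlsym plugin pattern discussed in this sub-thread ("./plugin.so" and "plugin_init" are made-up names):

      /* plugin_host.c -- minimal runtime plugin loader.
       * Build: cc -o host plugin_host.c -ldl
       *        (-ldl is only needed before glibc 2.34)
       * Note: building this with `cc -static` makes glibc warn that dlopen
       * still needs the shared glibc at runtime, one reason "just link
       * statically" gets tricky.
       */
      #include <dlfcn.h>
      #include <stdio.h>

      int main(void) {
          void *handle = dlopen("./plugin.so", RTLD_NOW | RTLD_LOCAL);
          if (!handle) {
              fprintf(stderr, "dlopen: %s\n", dlerror());
              return 1;
          }
          /* Look up and call the plugin's entry point. */
          int (*plugin_init)(void) = (int (*)(void))dlsym(handle, "plugin_init");
          if (!plugin_init) {
              fprintf(stderr, "dlsym: %s\n", dlerror());
              dlclose(handle);
              return 1;
          }
          int rc = plugin_init();
          dlclose(handle);
          return rc;
      }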

We had a time when static binaries were pretty much the only thing we had available.

Here is an idea: let's go back to pure UNIX distros using static binaries, with OS IPC for any kind of application dynamism. I bet it will work out great; after all, it did for several years.

Got to put that RAM to use.

  • The thing with static linking is that it enables aggressive dead-code elimination (DLLs, by contrast, are a hard optimization barrier).

    Even with multiple processes sharing the same DLL, I would be surprised if the alternative (those processes containing only the code they actually need) increased RAM usage dramatically, especially since most processes that run in the background on a typical Linux system wouldn't even need to go through glibc but could talk directly to the syscall interface (sketch below).

    DLLs are fine as an operating system interface as long as they are stable (e.g. Windows does this right, glibc doesn't). But apart from operating system interfaces and plugins, overusing dynamic linking just doesn't make a lot of sense (as on most Linux systems with their package managers).
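
    To illustrate "talk directly to the syscall interface", a minimal Linux-only sketch using the syscall(2) wrapper (syscall() itself is still a tiny libc shim, but no other libc machinery is involved):

        /* raw_write.c -- write(2) and exit_group(2) via raw syscall numbers.
         * Build: cc -static -ffunction-sections -Wl,--gc-sections raw_write.c
         * (static linking plus --gc-sections is the aggressive dead-code
         *  elimination mentioned above).
         */
        #include <sys/syscall.h>
        #include <unistd.h>

        int main(void) {
            static const char msg[] = "hello via raw syscalls\n";
            syscall(SYS_write, 1, msg, sizeof msg - 1); /* fd 1 = stdout */
            syscall(SYS_exit_group, 0);                 /* end the process */
        }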

    • While at the same time it prevents extending applications; the alternatives (multiple processes using OS IPC) are all much slower and heavier on resources than an indirect call into a dynamic library.

      We started there in computing history and, outside of Linux where this desire to return to the past prevails, moved on to better approaches, including on other UNIX systems.

  • I've been statically linking my executables for years. The downside, that you might end up with an outdated library, is no match for the upside: just take the binary and run it. As long as you're the only user of the system and the code is your own, you're going to be just fine.

  • I don't think dynamic libraries fail at "utilizing" any available RAM.

    • Think of any program that uses dynamic libraries as an extension mechanism, and now replace that with standard UNIX processes, each using some form of UNIX IPC to talk to the host process instead (a minimal sketch follows).

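      A minimal sketch of that replacement: the "plugin" becomes a child process behind a pipe, so each call into it costs a write, a read, and context switches instead of one indirect function call:

          /* ipc_plugin.c -- "plugin" as a separate process over a pipe. */
          #include <stdio.h>
          #include <sys/wait.h>
          #include <unistd.h>

          int main(void) {
              int to_child[2], from_child[2];
              if (pipe(to_child) != 0 || pipe(from_child) != 0) return 1;

              if (fork() == 0) {                 /* the "plugin" process */
                  char buf[64];
                  ssize_t n = read(to_child[0], buf, sizeof buf);
                  if (n > 0) write(from_child[1], buf, (size_t)n); /* echo */
                  _exit(0);
              }

              /* Host side: one "call" into the plugin is a write plus a
               * read, vs. one indirect call with a dynamic library. */
              const char req[] = "request";
              write(to_child[1], req, sizeof req - 1);
              char reply[64];
              ssize_t n = read(from_child[0], reply, sizeof reply);
              if (n > 0) printf("plugin replied: %.*s\n", (int)n, reply);
              wait(NULL);
              return 0;
          }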