Comment by einpoklum
13 hours ago
This seems interesting even regardless of Go. Is it realistic to create an executable which would work on very different kinds of Linux distros, e.g. 32-bit and 64-bit? Or maybe some general framework/library for building an arbitrary program at least for "any libc"?
Cosmopolitan goes one further: [binaries] that run natively on Linux + Mac + Windows + FreeBSD + OpenBSD + NetBSD + BIOS on AMD64 and ARM64
https://justine.lol/cosmopolitan/
>Linux
if you configure binfmt_misc
>Windows
if you disable Windows Defender
>OpenBSD
only older versions
Yeah, while APE is a technically impressive trick, these issues far outweigh the minor convenience of having a single binary.
For most cases, a single Windows exe that targets the oldest Windows version you want to support, plus a single glibc binary that dynamically links against the oldest glibc you want to support, and so on, is still the best option.
>> Linux
> if you configure binfmt_misc
I don't think that's a requirement, it'll just fall back to the shell script bootstrap without it.
Clearly a joke if it uses the .lol tld.
It's his personal website lol.
AppImage exists; it packs Linux applications into a single executable file that you just download and run. It works on most Linux distros.
I vaguely remember that AppImage-based programs would fail for me because of FUSE and glibc symbol version incompatibilities.
Gave up on them afterwards. If I need to tweak dependencies, I might as well deal with the package manager of my distro.
Yup. Just compile it as a static executable. Static binaries are very undervalued imo.
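A minimal sketch of what that looks like for Go (assuming nothing in the dependency graph pulls in cgo):

    // hello.go: with CGO_ENABLED=0 this builds to a fully static binary
    // that depends on no libc at all.
    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        fmt.Printf("hello from a static binary on %s/%s\n", runtime.GOOS, runtime.GOARCH)
    }

Build it with CGO_ENABLED=0 go build -o hello . and `file hello` should report it as statically linked; setting GOOS/GOARCH on the same command cross-compiles it for other targets.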
As TFA points out at the beginning, it's not so simple if you want to use the GPU.
The "just" is doing a lot of heavylifting here (as detailed in the article), especially for anything that's not a trivial cmdline tool.
In my experience it seems to be an issue caused by optimizations in legacy code that relied on dlopen to implement a plugin system, or to help with startup, since you could lazy-load said plugins on demand and start faster.
If you forego the requirement of a runtime plugin system, is there anything realistically preventing greenfield projects from just being fully statically linked, assuming their dependencies don't rely on dlopen?
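E.g. (a purely illustrative sketch, names not from the article) the plugin set can be fixed at compile time instead of loaded via dlopen:

    // registry.go: a compile-time "plugin" registry. Each plugin package
    // calls Register from its init(), so plugins are selected by importing
    // packages (or via build tags) and the binary stays fully static.
    package registry

    var plugins = map[string]func() string{}

    // Register is called from each plugin package's init().
    func Register(name string, fn func() string) { plugins[name] = fn }

    // Lookup returns a plugin only if it was compiled in.
    func Lookup(name string) (func() string, bool) {
        fn, ok := plugins[name]
        return fn, ok
    }

You lose installing plugins at runtime, but you keep a single self-contained binary.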
Ack. I went down that rabbit hole to "just" build a static Python: https://beza1e1.tuxen.de/python_bazel.html
We had a time when static binaries were pretty much the only thing we had available.
Here is an idea: let's go back to pure UNIX distros using static binaries, with OS IPC for any kind of application dynamism. I bet it will work out great; after all, it did for several years.
Got to put that RAM to use.
The thing with static linking is that it enables aggressive dead code elimination (DLLs, by contrast, are a hard optimization barrier).
Even with multiple processes sharing the same DLL, I would be surprised if the alternative, those processes only containing the code they actually need, would increase RAM usage dramatically, especially since most processes that run in the background on a typical Linux system wouldn't even need to go through glibc but could talk directly to the syscall interface (see the sketch below).
DLLs are fine as an operating system interface as long as they are stable (e.g. Windows does it right, glibc doesn't). But apart from operating system interfaces and plugins, overusing dynamic linking just doesn't make a lot of sense (like on most Linux systems with their package managers).
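On the "talk directly to the syscall interface" point, a minimal sketch (Go, since that's the context here; the Go runtime already bypasses libc on Linux when cgo is disabled):

    // raw_write.go: write to stdout via the write(2) syscall, no libc involved.
    package main

    import "syscall"

    func main() {
        msg := []byte("written via a raw syscall\n")
        syscall.Write(1, msg) // fd 1 = stdout
    }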
I've been statically linking my executables for years. The downside, that you might end up with an outdated library, is no match for the upside: just take the binary and run it. As long as you're the only user of the system and the code is your own, you're going to be just fine.
I don't think dynamic libraries fail at "utilizing" any available RAM.