Comment by amelius

13 hours ago

Is there a tool that takes an executable, collects all the required .so files and produces either a static executable, or a package that runs everywhere?

There are things like this.

The things I know of and can think of off the top of my head are:

1. appimage https://appimage.org/

2. nix-bundle https://github.com/nix-community/nix-bundle

3. guix via guix pack

4. A small collection of random small projects, which hardly anyone uses, that do this for Docker (e.g. https://github.com/NilsIrl/dockerc )

5. A docker image (a package that runs everywhere, assuming a docker runtime is available)

6. https://flatpak.org/

7. https://en.wikipedia.org/wiki/Snap_(software)

AppImage is the closest to what you want I think.

  • It should be noted that AppImages tend to be noticeably slower at runtime than other packaging methods, and also very large on typical systems, which already ship most of the bundled libraries. They're good as a "compile once, run everywhere" approach, but you're really accommodating edge cases here.

    A "works in most cases" build should also be available for that that it would benefit. And if you can, why not provide specialized packages for the edge cases?

    Of course, don't take my advice as-is, you should always thoroughly benchmark your software on real systems and choose the tradeoffs you're willing to make.

    • IMO one of the best features of AppImage is that it makes it easy to extract without needing external tools. It's usually pretty easy for me to look at an AppImage and write a PKGBUILD to make a native Arch package; the format already encodes what needs to be installed where, so it's only a question of whether the libraries it contains are the same versions as what I can pull in as dependencies (either from the main repos or the AUR). If they are, my job is basically already done, and if they aren't, I can either choose to include them in the package itself assuming I don't have anything conflicting (which is fine for local use even if it's not something that's usually tolerated when publishing a package) or stick with using the AppImage.
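
      For reference, that extraction step looks roughly like this (Some-App.AppImage is a hypothetical file name; --appimage-extract is a flag built into the AppImage runtime):

        # make the AppImage runnable, then unpack its embedded squashfs image
        chmod +x Some-App.AppImage
        ./Some-App.AppImage --appimage-extract
        # everything lands in ./squashfs-root/, laid out like an install prefix
        ls squashfs-root/usr/lib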

    • > It should be noted that AppImages tend to be noticeably slower at runtime than other packaging methods

      'Noticeably slower' at what? I've run, e.g., xemu (the original Xbox emulator) both built manually from source and from AppImage-based releases, and I never noticed any difference in performance. Same with other AppImage-based apps I've been using.

      Do you mean launching the app or something like that? TBH I cannot think of any other way an AppImage would be "slower".

      Also, from my experience, applications released as AppImages have been by far the most consistent at "just working" on my distro.

  • I wish AppImage was slightly more user friendly and did not require the user to specifically make it executable.

    • We fix this issue by distributing ours in a tar file with the executable bit set. Linux novices can just double click on the tar to extract it and double click again on the actual AppImage.

      Been doing it this way for years now, so it's well battle tested.

  • AppImage looks like what I need, thanks.

    I wonder though, if I package say a .so file from nVidia, is that allowed by the license?

    • AppImage is not what you need. It's just an executable wrapper for the archive. To make the software cross-distro, you need to compile it manually on an old distro with old glibc, make sure all the dependencies are there, and so on.

      https://docs.appimage.org/reference/best-practices.html#bina...

      There are several automation tools for making AppImages, but they won't magically let you compile on the latest Fedora and expect your executable to work on Debian Stable. It still requires quite a lot of manual labor.

    • Typically appimage packaging excludes the .so files that are expected to be provided by the base distro.

      Any .so from Nvidia is supposed to be one of those, because it also depends on the drivers, etc., provided by Nvidia.

      Also, as a side note, a lot of .so files also depend on other files in /usr/share, /etc, and so on.

      I recommend using an AppImage only for the happy-path application frameworks they support (e.g. Qt, Electron, etc.). Otherwise you'd have to manually verify that all the libraries you're bundling will work on your users' distros.

    • >I wonder though, if I package say a .so file from nVidia, is that allowed by the license?

      It won't work: drivers usually require an exact (or more or less the same) kernel module version. That's why you need to explicitly exclude graphics libraries from being packaged into the AppImage. Similarly, an AppImage built against glibc won't run on musl.

      https://github.com/Zaraka/pkg2appimage/blob/master/excludeli...

    • Don't forget - AppImage won't work if you package something with glibc, but run on musl/uclibc.

    • Depends on the license and the specific piece of software. Redistribution of commercial software may be restricted or require explicit approval.

      You generally still have to abide by license obligations for OSS too, e.g. the GPL.

      To be specific for the example, Nvidia has historically been quite restrictive here (redistribution only on approval). The firmware has only recently been opened up a bit, and the drivers continue to be an issue, IIRC.

15-30 years ago I managed a lot of commercial chip design EDA software that ran on Solaris and Linux. We had wrapper shell scripts for so many programs that used LD_LIBRARY_PATH and LD_PRELOAD to point to the specific versions of various libraries that each program needed. I used "ldd" which prints out the shared libraries a program uses.
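
A minimal sketch of what one of those wrapper scripts looked like (the tool name and paths are made up for illustration):

  #!/bin/sh
  # Hypothetical wrapper: point the dynamic linker at this tool's private
  # copies of its libraries before starting the real binary.
  TOOL_HOME=/opt/eda/sometool-3.2
  LD_LIBRARY_PATH="${TOOL_HOME}/lib:${LD_LIBRARY_PATH}"
  export LD_LIBRARY_PATH
  # LD_PRELOAD can additionally force one specific library ahead of all others:
  # export LD_PRELOAD="${TOOL_HOME}/lib/libcompat.so.1"
  exec "${TOOL_HOME}/bin/sometool.real" "$@"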

I don't think it's as simple as "run this one thing to package it", so if the process rather than the format is what you're looking for, this won't work, but that sounds a lot like how AppImages work from the user's perspective. My understanding is that an AppImage is basically a static binary paired with a small filesystem image containing the "root" for the application (including the expected libraries under /usr/lib or wherever they belong). I don't like everything about the format, but overall it feels a lot less prescriptive than other "universal" packages like flatpak or snap, and the fact that you can easily extract it and pick out the pieces you want to repackage without needing any external tools (there are built-in flags on the binary like --appimage-extract) helps a lot.

  # ${executable} is the absolute path of the program to bundle
  mkdir chroot
  cd chroot
  # ldd resolves the full dependency graph; copy each library into the
  # same path under ./ (this includes the dynamic linker itself)
  for lib in $(ldd "${executable}" | grep -oE '/\S+'); do
    tgt="$(dirname "${lib}")"
    mkdir -p ".${tgt}"
    cp "${lib}" ".${tgt}"
  done
  # copy the executable itself into the matching path
  mkdir -p ".$(dirname "${executable}")"
  cp "${executable}" ".${executable}"
  tar czf ../chroot-run-anywhere.tgz .
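
On the target machine, one rough way to use the result (not part of the snippet above; assumes root and that the program doesn't need /proc, /dev, or data files outside the copied paths):

  mkdir app
  tar xzf chroot-run-anywhere.tgz -C app
  # run the program with the bundled tree as its root filesystem
  sudo chroot app "${executable}"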

  • You're supposed to do this recursively for all the libs no?

    E.g. your app might just depend on libqt5gui.so, but that libqt5gui.so might depend on some libxml, etc.

    Not to mention all the files from /usr/share, etc., that your application might indirectly depend on.

You can "package" all .so files you need into one file, there are many tools which do this (like a zip file).

But you can't take .so files and make one "static" binary out of them.

  • > But you can't take .so files and make one "static" binary out of them.

    Yes you can!

    This is more-or-less what unexec does

    - https://news.ycombinator.com/item?id=21394916

    For some reason nobody seems to like this sorcery, probably because it combines the worst of all worlds.

    But there's almost[1] nothing special about what the dynamic linker does to get those .so files into memory that couldn't be done ahead of time by arranging them in one big file!

    [1]: ASLR would be one of those things...

    • What if the library you use calls dlopen later? That’ll fail.

      There is no universal, working way to do it. Only some hacks which work in some special cases.

  • Well not a static binary in the sense that's commonly meant when speaking about static linking. But you can pack .so files into the executable as binary data and then dlopen the relevant memory ranges.

    • Yes, that's true.

      But I'm always a bit sceptical about such approaches. They are not universal: you still need glibc/musl to be the same on the target system. Also, if you compile against a new glibc version but try to run on an old glibc version, it might not work (a quick way to check for this is sketched below).

      These are just strange and confusing from the end users' perspective.
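
      Not from the thread, but one rough way to check the glibc floor of a binary (./myapp is a placeholder): list the symbol versions it requires; if the target system's glibc is older than the highest one printed, the binary won't load there.

        # highest GLIBC_x.y symbol version the binary depends on
        objdump -T ./myapp | grep -oE 'GLIBC_[0-9.]+' | sort -Vu | tail -n 1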

I don't think you can link shared objects into a static binary, because you'd have to find every place where the code reads the PLT/GOT and turn those accesses back into relocations for the linker to resolve, and the optimizer can mangle them arbitrarily.

You can change the rpath though, which is sort of like an LD_LIBRARY_PATH baked into the object, which makes it relatively easy to bundle everything but libc with your binary.
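
A minimal sketch of that approach, assuming patchelf is available (./myapp and lib/ are placeholder names):

  # ship the non-libc libraries next to the binary...
  mkdir -p lib
  cp /usr/lib/libfoo.so.1 lib/   # libfoo is a stand-in for a real dependency
  # ...and bake a relative rpath into the binary so ld.so searches there first
  patchelf --set-rpath '$ORIGIN/lib' ./myapp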

edit: Mild correction, there is this: https://sourceforge.net/projects/statifier/ But the way this works is that it has the dynamic linker load everything (without ASLR / in a compact layout, presumably) and then dumps an image of the process. Everything else is just increasingly fancy ways of copying shared objects around and making ld.so prefer the bundled libraries.