
Comment by crote

16 hours ago

I don't think this makes a meaningful difference. The installation is a `curl | sh` that downloads a tarball and extracts it into some directory in `$PATH`.

The tarball currently contains two executables, but having it contain two executables plus a bunch of .so libraries would be a fairly trivial change. It only gets messy when you want to use system-provided versions of the libraries rather than simply vendoring them all yourself.
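
For illustration, a vendored layout could look something like this (a sketch; the names are hypothetical, not what the project actually ships):

    # Contents of a hypothetical mytool-1.0.tar.gz:
    #
    #   mytool-1.0/bin/mytool         # dynamically linked executable
    #   mytool-1.0/bin/mytool-helper
    #   mytool-1.0/lib/libfoo.so.1    # vendored shared objects
    #
    # The install step stays the same: unpack and put bin/ on $PATH.
    tar -xzf mytool-1.0.tar.gz -C "$HOME/.local/opt"
    export PATH="$HOME/.local/opt/mytool-1.0/bin:$PATH"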

It gets messy not just in that way: a user can also have a weird LD_LIBRARY_PATH that starts causing problems. Statically linking drastically simplifies distribution, and you'd have to have shipped zero software to end users to believe otherwise. The only platform where this isn't the case is Apple's, because it natively supports app bundles. I don't know if Flatpak solves the distribution problem, because I haven't seen much of it in the ecosystem: most people still seem to rely on the system package manager, and commercial entities don't really seem to target Flatpak.
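
For what it's worth, going fully static can be a one-liner; a sketch assuming a Linux box with musl-gcc installed and a hypothetical mytool.c:

    # glibc discourages fully static binaries (NSS/dlopen caveats),
    # so musl is a common choice for this on Linux:
    musl-gcc -static -O2 -o mytool mytool.c
    ldd ./mytool    # should print "not a dynamic executable"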

  • When you're shipping software, you have full control over LD_LIBRARY_PATH. Your entry point can be e.g. a shell script that sets it (see the sketch below).

    There is not much difference between shipping a statically linked binary and a dynamically linked binary that brings its own shared object files.

    But if they are equivalent, static linking has the benefit of simplicity: why create and ship N files that load each other in fancy ways when you can ship one file that doesn't have this complexity?
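
    For concreteness, a minimal sketch of such an entry-point script (the mytool-bin name and directory layout are hypothetical):

        #!/bin/sh
        # Launcher installed on $PATH; the real binary and its vendored
        # .so files live in the same install prefix (hypothetical names).
        here="$(cd "$(dirname "$0")" && pwd)"
        export LD_LIBRARY_PATH="$here/../lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
        exec "$here/mytool-bin" "$@"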

    • That's precisely my point. It's insanely weird to need a shell script to set up the path for an executable binary that can't do it for itself. I guess you could go the RPATH route, but boy have I only experienced pain from that.
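
      For the record, the RPATH route is roughly this (a sketch assuming gcc and GNU ld on Linux; libfoo is hypothetical):

          # Embed a relative search path so the binary finds vendored
          # libs next to itself; quote so $ORIGIN reaches the linker
          # literally. Modern linkers actually emit DT_RUNPATH here,
          # which LD_LIBRARY_PATH can still override.
          gcc -o mytool main.c -L./lib -lfoo -Wl,-rpath,'$ORIGIN/../lib'
          readelf -d mytool | grep -iE 'r(un)?path'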
