Comment by akdev1l

23 days ago

People always say this to shit on glibc, meanwhile those guys bend over backwards to provide strong API compatibility. It rubs me the wrong way.

What glibc does not provide is forward compatibility. An application built with glibc 2.12 will not necessarily work with any older version.

Such an application could be rebuilt to work with an older glibc, as the API is stable. The ABI is not, which is why the application would need to be rebuilt.

glibc does not provide ABI compatibility because, from their perspective, the software should be rebuilt for newer/older versions as needed. Maintaining a stable ABI mostly helps proprietary software, where the source is not available for recompilation. Naturally the GNU guys building glibc don’t care about that use case much.

I guess you didn’t mention glibc in your comment but I already typed this out

> What glibc does not provide is forward compatibility. An application built with glibc 2.12 will not necessarily work with any older version.

Is this correct? I think you perhaps have it backward? If I compile something against the glibc on my system (Debian testing), it may fail to run on older Debian releases that have older glibc versions. But I don't see why an app built against glibc 2.12 wouldn't run on Debian testing. glibc actually does a good job of using symbol versioning, and IIRC they haven't removed any public functions, so I don't see why this wouldn't work.

More at issue would be the availability of other dependencies. If that old binary compiled against glibc 2.12 was also linked with, say, OpenSSL 0.9.7, I'd have to go out and build a copy of that myself, as Debian no longer provides it, and OpenSSL 3.x is not ABI-compatible.

> glibc does not provide ABI compatibility because from their perspective the software should be rebuilt for newer/older versions as needed.

If true (I don't think it is), that is a hard showstopper for most companies that want to develop for Linux. And I wouldn't blame them.

  • I don't know what the official policy is, but glibc uses versioned symbols and certainly provides enough ABI backward-compatibility that the Python package ecosystem is able to define a "manylinux" target for prebuilt binaries (against an older version of glibc, natch) that continues to work even as glibc is updated.

  • Sorry, I am not sure if 2.12 is a recent release or an older one; I made this number up

    If the application is built against 2.12 it may link against symbols which are versioned 2.12 and may not work against 2.11; the opposite (building against 2.11 and running on 2.12) will work
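    The versioned-symbol mechanism is easy to check for yourself. A sketch, assuming a Linux system with binutils installed; /bin/ls is just a convenient example binary:

```shell
# List the glibc symbol versions a binary requires. The dynamic
# loader refuses to start the program if any listed version is
# newer than the glibc installed on the system.
objdump -T /bin/ls | grep -o 'GLIBC_[0-9.]*' | sort -u
```

    If this prints, say, GLIBC_2.12, the binary needs glibc 2.12 or newer at runtime, regardless of which headers it was compiled against.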

    >If true (I don't think it is), that is a hard showstopper for most companies that want to develop for Linux.

    Not really a showstopper; vendors just do what vendors do and bundle all their dependencies in. Similar to Windows when you use anything outside of the Win32 API.

    The only problem with this approach is that glibc cannot have multiple versions running at once. We have “fixed” this with process namespaces and hence containers/flatpak where you can bundle everything including your own glibc.

    Naturally the downside is that each app bundles their own libraries.
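    The mechanism that makes per-app glibc bundling possible is visible even without containers: every dynamic executable records which loader (and hence which glibc) starts it, and you can invoke a loader explicitly. A sketch assuming x86-64 Linux paths; a bundled glibc would live in some app-private directory instead of /lib64:

```shell
# The recorded "interpreter" of a dynamic executable is the glibc loader:
file /bin/true    # "... interpreter /lib64/ld-linux-x86-64.so.2 ..."

# Invoking a loader directly runs the program against that loader's
# glibc, and --library-path points it at bundled libraries. This is
# roughly what a container or flatpak runtime arranges for you.
/lib64/ld-linux-x86-64.so.2 --library-path /lib64:/usr/lib64 /bin/true && echo ok
```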

    • The only problem with this approach is that glibc cannot have multiple versions running at once

      that's not correct. libraries have versions for a reason. the only thing preventing the installation of multiple glibc versions is the package manager or the package versioning.

      this makes building against an older version of glibc non-trivial, because there isn't a ready made package that you can just install. the workarounds take effort:

      https://stackoverflow.com/questions/2856438/how-can-i-link-t...

      the problem for companies developing on linux is that it is not trivial


  • MUSL is a better libc for companies making proprietary binaries. They can either statically link it, or provide a .so with the musl version they want their programs to use & dynamically link that.

No other operating system works like this. Supporting older versions of an OS or runtime with a current compiler toolchain is a standard expectation of developers.

  • Plenty of operating systems work like this. Just not highly commercial ones because proprietary software is the norm on those.

    From a bit of research it looks like FreeBSD, for example, only guarantees a stable ABI within a major version, and I imagine if you build something for FreeBSD 14 it won’t work on 13.

    Stable ABI literally only benefits software where the user doesn’t have the source. Any operating system which assumes you have the source will not prioritize it.

    (Edit: actually, thinking harder, macOS/iOS is much worse on binary compatibility; for example, Intel binaries will eventually stop working entirely due to the Apple silicon transition. Apple just hits developers with a stick to rebuild their apps.)

    • Yes, and this is a great reason why FreeBSD isn't a popular gaming platform, or for proprietary software in general. I'm not saying this is a bad thing, but... that's why.

      > Stable ABI literally only benefits software where the user doesn’t have the source.

      It also benefits people who don't want to have to do busywork every time the OS updates.


    • > Stable ABI literally only benefits software where the user doesn’t have the source

      Stable ABI benefits everyone. If I need to recompile a hundred packages with every OS update instead of doing real work then there's something seriously wrong with my OS.

  • what about mac os?

    • macOS doesn't require developers to rebuild apps with each major OS release, as long as they link with system libraries and don't try to (for example) directly make syscalls.

      Apple may require rebuilds at some point for their Mac Store (or whatever they call it), but it's not required from a technical perspective.

      The one exception here is CPU architecture changes, and even then, Apple has provided seamless emulation/translation layers that they keep around for quite a few years before dropping support.


I am sorry, I did not mean to imply anyone else is doing something poorly. I believe glibc's (and the rest of the ecosystem of libraries that are probably more limiting) policies and principled stance are quite correct and overall "good for humanity". But as you mentioned, they are inconvenient for a gamer that just wants to run an executable from 10 years ago (for which the source was lost when the game studio was bought).

  • that 10 year old binary should run, unless it links against a library that no longer exists.

    for example here is a 20 year old binary of the game mirrormagic that runs just fine on my modern fedora machine:

        ~/Downloads/mirrormagic-2.0.2> ldd mirrormagic
            linux-gate.so.1 (0xf7f38000)
            libX11.so.6 => /lib/libX11.so.6 (0xf7db5000)
            libm.so.6 => /lib/libm.so.6 (0xf7cd0000)
            libc.so.6 => /lib/libc.so.6 (0xf7ad5000)
            libxcb.so.1 => /lib/libxcb.so.1 (0xf7aa9000)
            /lib/ld-linux.so.2 (0xf7f3b000)
            libXau.so.6 => /lib/libXau.so.6 (0xf7aa4000)
        ~/Downloads/mirrormagic-2.0.2> ls -la mirrormagic
        -rwxr-xr-x. 1 em-bee em-bee 203633 Jun  7  2003 mirrormagic
    

    ok, there are some issues: the sound is not working, and the resolution does not scale. but there are no issues with linked libraries.

This is a toolchain issue rather than an OS issue. This wouldn't have been a problem if gcc/clang just took a --stdlib-version option and built executables linking against that version of glibc or equivalent.