Comment by captainmuon
3 years ago
The ABI of the Linux kernel seems reasonably stable. Somebody should write a new dynamic linker that lets you easily have multiple versions of libraries - even libc - around. Then it's just like Windows, where you have to install some weird MSVC runtimes to play old games.
Or, GNU could just recognise their extremely central position in the GNU/Linux ecosystem and just not. break. everything. all. the. time.
It honestly really shouldn't be this hard, but GNU seems to have an intense aversion towards stability. Maybe moving to LLVM's replacements will be the long-term solution. GNU is certainly positioning itself to become more and more irrelevant with time, seemingly intentionally.
The issue is more subtle than that. The GNU and glibc people believe that they provide a very high level of backwards compatibility. They don't have an aversion towards stability and in fact, go far beyond most libraries by e.g. providing old versions of symbols.
The issue here is actually that app compatibility is something that's hard to do purely via theory. The GNU guys do compatibility on a per function level by looking at a change, and saying "this is a technical ABI break so we will version a symbol". This is not what it takes to keep apps working. What it actually takes is what the commercial OS vendors do (or used to do): have large libraries of important apps that they drive through a mix of automated and manual testing to discover quickly when they broke something. And then if they broke important apps they roll the change back or find a workaround regardless of whether it's an incompatible change in theory or not, because it is in practice.
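For context, a hedged sketch of what "versioning a symbol" means here, which is the mechanism glibc relies on: a library keeps the old implementation around and exports both implementations under the same name at different version nodes. All names below (compat_foo, new_foo, foo, LIBDEMO_1.0/2.0) are made up for illustration, and building this as a shared object would also need a matching linker version script defining those nodes.

    /* Illustrative only: GNU-style symbol versioning inside a library.
     * compat_foo/new_foo/LIBDEMO_* are invented names; a linker version
     * script with LIBDEMO_1.0 and LIBDEMO_2.0 nodes is also required. */

    /* Old behaviour, kept so binaries linked against the 1.0 ABI keep working. */
    int compat_foo(int x) { return x + 1; }

    /* New behaviour, the default for anything linked from now on. */
    int new_foo(int x) { return x + 2; }

    /* '@' binds a compatibility version, '@@' marks the default version. */
    __asm__(".symver compat_foo, foo@LIBDEMO_1.0");
    __asm__(".symver new_foo, foo@@LIBDEMO_2.0");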
Linux is really hurt here by the total lack of any unit testing or UI scripting standards. It'd be very hard to mass test software on the scale needed to find regressions. And, the Linux/GNU world never had a commercial "customer is always right" culture on this topic. As can be seen from the threads, the typical response to being told an app broke is to blame the app developers, rather than fix the problem. Actual users don't count for much. It's probably inevitable in any system that isn't driven by a profit motive.
I think part of the problem is that by default you build against the newest version of the symbols available on your system. So it's really easy, when you're working with code, to commit yourself to symbols you may not even need; there's nothing like Microsoft's "target a specific version of the runtime".
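The closest thing I know of is pinning an individual symbol to an older version from the application side with .symver. A rough sketch, assuming an x86-64 build where GLIBC_2.2.5 is the baseline version tag (the right tag depends on the architecture and symbol; objdump -T on libc.so.6 shows what is available):

    /* Sketch: force memcpy to resolve against the old x86-64 baseline version
     * instead of the newest one exported by the build machine's glibc
     * (e.g. memcpy@GLIBC_2.14). The version tag is architecture-specific. */
    #include <stdio.h>
    #include <string.h>

    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(void) {
        char dst[16];
        memcpy(dst, "pinned", 7);   /* binds to the pinned symbol version */
        puts(dst);
        return 0;
    }

Doing this per symbol by hand is obviously nothing like a "target runtime version" switch, which is the point.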
> What it actually takes is what the commercial OS vendors do (or used to do): have large libraries of important apps that they drive through a mix of automated and manual testing to discover quickly when they broke something.
There are already sophisticated binary analysis tools for detecting ABI breakages, not to mention extensive guidelines.
> And, the Linux/GNU world never had a commercial "customer is always right" culture on this topic.
Vendors like Red Hat are extremely attentive towards their customers. But if you're not paying, then you only deserve whatever attention they choose to give you.
> As can be seen from the threads, the typical response to being told an app broke is to blame the app developers, rather than fix the problem.
This is false. Actual problems get fixed, and very quickly at that.
Normally the issues are with proprietary applications that were buggy to begin with, whose developers never bothered to read the documentation. I'd say even to a paying customer that if a behaviour is documented, it's their problem.
Well, you talk about Windows; that was true in the pre-Windows 8 era. Have you used Windows recently?
I bought a new laptop and decided to give Windows a second chance. With Windows 11 installed, there were a ton of things that didn't work. To me that was not acceptable on a $3000 laptop. Problems with drivers, blue screens of death, applications that just didn't run properly (and commonly used applications, not something obscure). I never had these problems with Linux.
I mean, we say Windows is stable mostly because we use Windows versions after they have been out for five years and most of the problems have already been fixed. Right now companies are finishing the transition to Windows 10, not Windows 11, after staying on Windows 7 for years. In another ten years they will probably move to Windows 11, once most of its bugs are fixed.
If you use a rolling-release Linux distro such as Arch Linux, some problems with new software are expected. It's the equivalent of using an Insider build of Windows, with the difference that Arch Linux is mostly usable as a daily OS (it requires some knowledge to solve the problems that inevitably arise, but I used it for years). If you use, say, Ubuntu LTS, you don't have these kinds of problems, and it mostly runs without any issues (fewer issues than Windows, for sure).
By the way, maintaining compatibility has a cost: have you ever wondered why a full installation of Ubuntu, which is a complete system with all the programs you use, an office suite, drivers for all the hardware, multimedia players, etc., is less than 5 GB, while a fresh install of Windows is at least 30 GB, and I think nowadays even more?
> And then if they broke important apps they roll the change back or find a workaround regardless of whether it's an incompatible change in theory or not, because it is in practice.
I never saw Microsoft do that: they will simply say that it's not compatible and the software vendor has to update. That is not a problem, by the way... an OS developer has to move along and can't maintain backward compatibility forever.
> The GNU and glibc people believe that they provide a very high level of backwards compatibility.
That is true. It's mostly backward compatible; 100% backward compatibility is not possible. Problems are fixed as they are detected.
> What it actually takes is what the commercial OS vendors do (or used to do): have large libraries of important apps that they drive through a mix of automated and manual testing to discover quickly when they broke something.
There is one issue: GNU can't test non-free software, for obvious licensing and policy reasons (i.e. an association that endorses free software can't buy licenses of proprietary software to test it). So a third party would have to test it and report problems whenever backward compatibility breaks.
Keep in mind that binary compatibility is not fundamental on Linux, since it's assumed that you have the source code of everything and can recompile the software if needed. GNU/Linux was born as a FOSS operating system and was never designed to run proprietary software. There are edge cases where you need to run a binary for other reasons (you lost the source code, or compiling it is complicated or takes a lot of time), but they are surely edge cases, and not a lot of time should be spent addressing them.
Besides, glibc is only one of the possible libcs you can use on Linux: if you are developing proprietary software, in my opinion you should use musl libc; it has an MIT license (so you can statically link it into your proprietary binary) and it's 100% POSIX compliant. Surely glibc has more features, but your software probably doesn't use them.
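As a minimal sketch of that option, assuming the musl-gcc wrapper that ships with musl is installed: a fully static binary carries its libc with it, so there is no glibc symbol-version dance at all on the target machine.

    /* hello_static.c
     * Build with e.g.: musl-gcc -static -o hello hello_static.c
     * The resulting binary has no dynamic libc dependency, so it keeps
     * running regardless of which libc the target distro ships. */
    #include <stdio.h>

    int main(void) {
        printf("statically linked against musl\n");
        return 0;
    }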
Another viable option is to distribute your software with one of the new packaging formats that are in reality containers: Snap, Flatpak, AppImage. That lets you ship the software along with all its dependencies and not worry about ABI incompatibilities.
GNU / glibc is _hardly_ the problem regarding ABI stability. TFA is about a library trying to parse executable files, so it's kind of a corner case; hardly representative.
The problem when you try to run a binary from the 90s on Linux is not glibc. Think e.g. one of Loki games like SimCity. The audio will not work (and this will be a kernel ABI problem...). The graphics will not work. There will be no desktop integration whatsoever.
> Think e.g. one of Loki games like SimCity. The audio will not work (and this will be a kernel ABI problem...). The graphics will not work. There will be no desktop integration whatsoever.
I have it running on an up-to-date system. There is definitely an issue in that it's a pain to get working, especially for people not familiar with the CLI or ldd and such, as it wants a few things that are not there by default. But once you give it the few libs it needs, plus ossp to emulate the missing OSS support in the kernel, there is no issue with gameplay, graphics, or audio, aside from the intro video, which doesn't play.
So I guess the issue is that the compatibility is not user-friendly? Not sure how that should be fixed, though. Even if Loki had shipped all the needed libs with the program, it would still be an issue not to have sound, because distros have made the choice not to build OSS anymore.
Windows has installed those MSVC runtimes via Windows Update for the last decade.
With Linux, every revision of GCC has its own GLIBCXX (libstdc++) version, but distros don't keep those up to date. So you'll find that code built with even an old compiler (like GCC 10) isn't supported out of the box.
I read "old compiler" and thought you meant something like GCC 4.8.5, not something released in 2020!
The Linux kernel ABIs are explicitly documented as stable. If they change and user space programs break, it's a bug in the kernel.
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...
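To make concrete what "stable kernel ABI" means here, a small sketch: the contract is at the syscall layer, which you can hit directly via syscall(2) without going through the libc wrappers. This is just getpid as a neutral example, nothing specific to the article.

    /* Sketch: invoking the kernel through the raw syscall interface.
     * The syscall numbers and semantics are the ABI the kernel promises
     * not to break for user space. */
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void) {
        long pid = syscall(SYS_getpid);   /* bypasses the glibc getpid() wrapper */
        printf("pid via raw syscall: %ld\n", pid);
        return 0;
    }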
Someone should invent a command to change root… we should call it chroot!
The article seems to document ways in which it isn't. I have no idea personally; are these just not really practical problems?
The article is talking about userland, not the kernel's ABI.
Sounds like you want Flatpak, Docker or Snap :)
Just use nix.