Comment by emaginniss

3 years ago

Right, but Linux (the OS) doesn't have unit tests to ensure that changes to the underlying system don't break the software on top. Imagine if MS released a new version of Windows and tons of applications stopped functioning. Everyone would blame MS. The Linux community does it all the time and just says that it's the price of progress.

I think the problem is that there isn't really a thing like "Linux the OS"; there's Debian, Ubuntu, Gentoo, Red Hat, and more than I can remember, and they all do things differently: sometimes subtly, sometimes not so subtly. This is quite different from the Windows situation, where you have one Windows (multiple editions, but still one Windows) and that's it.

This is why a lot of games now just say "tested on Ubuntu XX LTS" and call it a day. I believe Steam just ships with half an Ubuntu system for their Linux games and uses that, even if you're running on Arch Linux or whatnot.

This has long been both a strong and a weak point of the Linux ecosystem. On one hand, you can say "I don't want no stinkin' systemd, GNU libc, and Xorg!" and go with runit, musl, and Wayland if you want and most things still work (well, mostly anyway), but on the other hand you run into all sorts of cases where something works and then doesn't, or works on one Linux distro and not another, etc.

I don't think there's a clean solution to any of these issues. Compatibility is one of the hard problems in computing because there is no solution that will satisfy everyone, and there are multiple reasonable positions, all with their own trade-offs.

So, I very much agree with mike_hearn: their description of how glibc is backwards compatible in theory, thanks to symbol versioning, matches my understanding of how glibc works, and the lack of effort to verify that glibc stays backwards compatible in practice seems evident. The glibc developers certainly don't seem to run automated UI tests against a suite of representative precompiled binaries to ensure compatibility.

However, I don't understand where unit testing comes in. Testing that whole applications keep working with new glibc versions sounds a lot like integration testing. What's the "unit" that's being tested when ensuring that the software on top of glibc doesn't break?

The Linux kernel does have tests, and many of the apps on top have unit tests too.

> Imagine if MS released a new version of Windows and tons of applications stopped functioning. Everyone would blame MS.

I don't have to imagine; this literally happens with every Windows release.