Many moons ago, one of the things I did was to port the Windows version of Google Earth to both Mac and Linux. I did the Mac first, which was onerous because of all the work involved in abstracting away system-specific APIs, but once that was done, I thought Linux would be a lesser task, and we hired a great Linux guy to help with that.
Turns out, while getting it running on Linux was totally doable, getting it distributed was a completely different story. Due to IP reasons, this can't ship as code, so we need to ship binaries. How do you do that? Do you maintain a few separate versions for a few popular distributions? Do you target the Linux Standard Base? The first approach is a lot of work, and suffers from breakages from time to time, and you alienate users not on your list of supported distros. The second approach, using the LSB, was worse, as it specifies ancient libraries and things like OpenGL aren't handled properly.
End result: management canned the Linux version because too much ongoing support work was required, and no matter what you did, you got hate mail from Gentoo users.
> Due to IP reasons, this can't ship as code, so we need to ship binaries. How do you do that?
I build on a distro with an old enough glibc, following this table: https://gist.github.com/wagenet/35adca1a032cec2999d47b6c40aa... (right now rockylinux:8, which is equivalent to centos:8 and good enough for Debian stable and anything more recent; last year I was still on centos:7). I use dlopen as much as possible instead of "normal" linking, and then it works on the more recent ones without issues.
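(For illustration, a minimal sketch of that dlopen pattern; libfoo.so.1 and foo_init are hypothetical stand-ins for an optional dependency. The point is that nothing about libfoo gets baked into the executable's dynamic section, so a machine without it still runs the program:)

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* resolve the optional library at runtime instead of link time */
        void *handle = dlopen("libfoo.so.1", RTLD_NOW | RTLD_LOCAL);
        if (!handle) {
            fprintf(stderr, "optional feature disabled: %s\n", dlerror());
            return 0; /* degrade gracefully rather than fail to start */
        }

        int (*foo_init)(void) = (int (*)(void))dlsym(handle, "foo_init");
        if (foo_init)
            foo_init();

        dlclose(handle);
        return 0;
    }

(Build with something like `gcc demo.c -ldl`; on glibc 2.34+ the dlopen machinery lives in libc itself and -ldl is a no-op.)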
I worked on a product that shipped as a closed source binary .so (across four OSes and two architectures) for almost seven years, and that's exactly what we did too — build on the oldest libc and kernel any of your supported distros (or OS versions) support, statically link as much as you can, and be defensive about _any_ runtime dependencies you have.
That's the trick. AppImage has a pretty good list of other best practices too: https://docs.appimage.org/reference/best-practices.html (applies even if you don't use AppImages).
If what you're doing works for you, great, but in case it stops working at some point (or if for some reason you need to build on a current-gen distro version), you could also consider using this:
https://github.com/wheybags/glibc_version_header
It's a set of autogenerated headers that use symbol aliasing to allow you to build against your current version of glibc, but link to the proper older versioned symbols such that it will run on whatever oldest version of glibc you select.
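(The underlying mechanism is glibc's symbol versioning; a hand-rolled miniature of what those headers generate, assuming an x86-64 glibc target where GLIBC_2.2.5 is the oldest symbol set:)

    #include <string.h>

    /* Bind memcpy to the old versioned symbol even when building against
       a modern glibc, so the binary doesn't pick up memcpy@GLIBC_2.14
       (the split that famously broke build-new-run-old binaries). */
    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(void)
    {
        char dst[16];
        memcpy(dst, "hello", 6); /* now resolves to memcpy@GLIBC_2.2.5 */
        return 0;
    }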
> We need to ship binaries. How do you do that? Do you maintain a few separate versions for a few popular distributions? Do you target the Linux Standard Base?
When I worked on mod_pagespeed we went with the first approach, building an RPM and a DEB. As long as we built on the oldest still-supported CentOS and Ubuntu LTS, 32-bit and 64-bit, we found that our packages worked reliably on all RPM- and DEB-based distros. Building four packages was annoying, but we automated it.
(We also distributed source, so it may be that it didn't work for some people and they instead built from source. But people would usually ask questions on https://groups.google.com/g/mod-pagespeed-discuss before resorting to that, and I don't think I saw this issue.)
FWIW, these days Valve tries to solve the same problems with their Steam Runtime[0][1]. Still doesn't seem easy, but it looks like an almost workable solution.

[0] https://github.com/ValveSoftware/steam-runtime

[1] https://archive.fosdem.org/2020/schedule/event/containers_st...

A multi billion dollar company with massive investments in Linux making an almost workable solution means everyone else is screwed.

Was static linking not enough?
I feel like the problem most people run into today is glibc vs. musl differences. They develop on Ubuntu, then think they can just copy their binaries into a "FROM alpine:latest" container, which doesn't actually work.
It is possible, though, that whatever you statically link doesn't work with the running kernel, of course. And there are a lot of variants out there; every distribution has their own patch cadence. (A past example of this was the Go memory corruption issue from 1.13 on certain kernels. 1.14 added various checks for distribution + kernel version to warn people of the issue, and still got it wrong in several cases. Live on the bleeding edge, die on the bleeding edge.)
> I feel like the problem most people run into today is glibc vs. musl differences. They develop on Ubuntu, then think they can just copy their binaries into a "FROM alpine:latest" container, which doesn't actually work.
Could it work with gcompat? Alpine has it in the community repo.
https://git.adelielinux.org/adelie/gcompat
Static linking against MUSL only makes sense for relatively simple command line tools. As soon as 'system DLLs' like X11 or GL are involved it's back to 'DLLs all the way down'.
> Was static linking not enough?

It is a GPL violation when non-GPL software does it.
How do Firefox and Blender do it? They just provide compressed archives, which you uncompress into a folder and run the binary, no problem. I myself once had to write a small CLI program in Rust, where I statically linked musl. I know, can't compare that with OpenGL stuff, but Firefox and Blender do use OpenGL (and perhaps even Vulkan these days?).
Firefox maintains a Flatpak package on Flathub. Flatpak uses runtimes to provide a base layer of libraries that are the same regardless of which distro you use.
https://beta.flathub.org/apps/details/org.mozilla.firefox
with difficulty, and not that well. for example, firefox binaries require gtk built with support for X, despite only actually using wayland at runtime if configured. the reason why people generally don't complain about it is because if you have this sort of weird configuration, you can usually compile firefox yourself, or have it compiled by your distro. with binary-only releases, all complaints (IMO justifiably) go to the proprietary software vendors.
Firefox has a binary they ship in a zip which is broken, but they also officially ship a Flatpak which is excellent.
> The second approach, using the LSB, was worse, as it specifies ancient libraries and things like OpenGL aren't handled properly.
That was a shame. There was a lot of hope for LSB, but in the end the execution flopped. I don't know if it would have been possible to make it succeed.
So this sort of bleeds in to the Init Wars, but there's a lot of back and forth about whether LSB flopped or was deliberately strangled by a particular player in the Linux ecosystem.
I guess this is another instance of: Windows and macOS are operating systems, while "Linux" is a kernel, powering multiple different operating systems.

It is important to note that this comment is from a time before snaps, flatpaks, and AppImages.
Yesterday I tried to install an Inkscape plugin I have been using for a long time. I upgraded my system and the plugin went away. So I download the zip, open Inkscape, open the plugin manager, go to add the new plugin by file manager… and the opened file manager is unable to see my home directory (weird, because when opening Inkscape files it can see home, but when installing extensions it cannot). It took some time to figure out how to get the downloaded file into a folder the Inkscape snap could see. Somehow, though, I still could not get it installed. Eventually I uninstalled the snap and installed the .deb version. That worked!
Recently I downloaded an AppImage for digiKam. It immediately crashed when I tried to open it; I believe its glibc did not work with my system's version (a recent stable Ubuntu).
Last week I needed to install a gnome extension. The standard and seemingly only supported way of doing this is to open a web page, install a Firefox extension, and then click a button on the web page to install it. The page told me to install the Firefox extension and that worked properly. Then it said Firefox didn’t have access to the necessary parts of my file system. It turns out FF is a snap and file system access is limited, so the official way of installing the gnome extension doesn’t work. I ended up having to download and install chrome and install the gnome extension from there.
These new “solutions” have their own problems.

If snaps or flatpaks are the only future for Linux desktop software distribution, then I'm switching to Windows+WSL.

Yeah. Now people just statically link the dynamic libraries.

> The first approach is a lot of work, and suffers from breakages from time to time

Are there any distros that treat their public APIs as an unbreakable contract with developers, like what MS does?
RedHat claims or at least claimed that for EL. I think it’s limited to within minor releases though, with majors being API.
That’s fine if you’re OK relying on their packages and 3rd party “enterprise” software that’s “certified” for the release. No one in their right mind would run RHEL on a desktop.
The most annoying to me was that RHEL6 was still under support and had an ancient kernel that excluded running Go, GraalVM, etc. static binaries. No epoll() IIRC.
Oftentimes you find yourself having to pull more and more libraries into a build. It all starts with wanting a current Python, and before you know it you're bringing in your own OpenSSL.
And they have no problem changing their system management software in patch releases. They’ve changed priority of config files too many times. But that’s another rant for another day.
This is a place where I wish some BSD had won out. With all the chunks of the base userspace + kernel each moving in their own direction, it's impossible to get out of this place. Then add in every permutation of those pieces from the distros.
No, no one does. It's a lot more work to maintain all public APIs and their behavior for all time; it can often prevent even fixing bugs, if some apps come to depend on the buggy behavior. Microsoft would occasionally add API parameters/options to let clients opt in to bug fixes, or auto-detect known-popular apps and apply special bug-fix behaviors just for them.
Even Apple doesn't make that level of "unbreakable contract" commitment. Apple will announce deprecations of APIs with two or three years of opportunity to fix them. If apps don't upgrade within the timeframe, they just stop working in newer versions of macOS.
In that context, cannot the issue be sidestepped entirely by statically linking[1] everything you need?
AFAIK the LGPL even allows you to statically link glibc, as long as you provide a way for your users to load their own version of the libs by themselves if that's what they want.
[1]: (or dlopening libs you bundle with your executable)
Another approach might be a hybrid, with a closed-source binary "core", and open-source code and linkage glue between that and OS/other libraries. And an open-source project with one-or-few officially-supported distributions, but welcoming forks or community support of others.
A large-surface-area app (like Google Earth?) could be less than ideal for this. But I've seen a closed-source library, already developed internally on Linux, with a small API and potential for community, where more open availability got quagmired on this seemingly false choice of "which distributions would we support?"
> Due to IP reasons, this can't ship as code, so we need to ship binaries.
Good, it should be as difficult as possible, if not illegal, to ship proprietary crap to Linux. The operating system was always intended to be Free Software. If I cannot audit the code, it’s spyware crap and doesn’t belong in the Linux world anyway.
Loki managed to release binaries for Linux long before Google Earth was a thing. I'm not going to claim that things are/were perfect, but you never needed to support each distro individually: just ship your damned dependencies, except for base-system stuff like libc and OpenGL, which does provide pretty good backwards compatibility, so you only need to target the oldest version you want to support and it will work on newer ones as well.
They probably mean the old desktop one that has been re-branded to "Google Earth Pro". The UI looks a decade old but it's still useful for doing more advanced things like taking measurements.
Flatpak solved this issue. You use a "runtime" as the base layer, similar to the initial `FROM` in Dockerfiles. Flatpak then runs the app in a containerized environment.
Agree. Had a few games on Steam crap out with the native version, forced it to use proton with the Windows version, everything worked flawlessly. Developers natively porting to linux seem to be wasting their time.
Funnily enough, with Wine we've kinda recreated the model of modern Windows, where Win32 is a personality on top of the NT API, which then interfaces with the kernel. Wine sits between the application and the zoo of libraries, including libc, that change all the time.
> Developers natively porting to linux seem to be wasting their time.
Factorio runs so much better than any of this emulationware; it's one of the reasons I love the game so much and gifted licenses to friends using Windows.
Some software claims to support Linux but uses some tricks to avoid recompiling, and it's always noticeable: either as lag or UI quirks, or some features plainly don't work because all the testers were Windows users.
Emulating as a quick workaround is all fair game, but don't ship that as a Linux release. I appreciate native software (so long as it's not Java), and I'm also interested in buying your game if you advertise it as compatible with WINE (then I'm confident that it'll work okay and that you're interested in fixing bugs under emulation); just don't mislead and pretend, and then use a compatibility layer.
Have you actually tried to run the Windows version of Factorio through Proton and experienced slowdowns? In my experience, WINE doesn't result in a noticeable slowdown compared to running on Windows natively (without "emulationware" as you call it), unless there are issues related to graphics API translation which is a separate topic.
I've been using wine and glibc for almost 20 years now and wine is waaaay more unstable than glibc.
Wine is nice until you try to play The Sims 3 after updating Wine. Every new release of Wine breaks it.
Please use wine for more than a few months before commenting on how good it is.
It's normal that with every new release some game stops working, which is why Steam offers the option to choose which Proton version to use. If they all worked great, one could just stick to the latest.
As someone who's been gaming on Proton or Lutris + Raw Wine, I'm not sure I agree. I regularly update Proton or Wine without seeing major issues or regressions. It certainly happens sometimes, but I'm not sure it's any worse of a "version binding" problem than a lot of stuff in Linux is. Sure, sometimes you have to specifically use an older version, but getting "native" linux games to work on different GPU architectures or distros is a mess as well, and often involves pinning drivers or dependencies. I've had games not run on my Fedora laptop that run fine on my Ubuntu desktop, but for the most part, Wine or Proton installed things work the same across Linux installs, and often with better performance somehow.
Absolute opposite experience for me. The native versions of Half-Life, Cities: Skylines and a bunch of other games refuse to start up at all for me for a few years now. Meanwhile I've been on the bleeding edge of Proton and I can count the number of breakages with my sizeable collection of working Windows games within the last couple of years on one hand. It's been a fantastic experience for me with Proton.
> Please use wine for more than a few months before commenting on how good it is.
I’ve used it for several years, and even to play Sims 2 from time to time, and while I’ve had issues the experience only gets better over time. It’s gotten to the point where I can confidently install any game on my Steam library and expect it to run. And be right most of the time.
Isn’t part of the original point not just that Wine is a perfect (dubious, imo) compatibility layer, but that distributing a native port is cumbersome on the Linux ecosystem?
There are plenty of examples of things being the other way around. For example, heavily modding Kerbal Space Program basically necessitated running Linux because that's the only platform that had a native 64-bit build that was even remotely stable (this has since been fixed, but for the longest time the 64-bit Windows version was horrendously broken) and therefore the only platform wherein a long mod list wouldn't rapidly blow through the 32-bit application RAM ceiling.
This wasn't a problem with the game itself. It's their anti-cheat malware that stopped working. On Windows these things are implemented as kernel modules designed to own our computers and take away our control so we can't cheat at video games.
It's always great when stuff like that breaks on Linux. I'm of the opinion it should be impossible for them to even implement this stuff on Linux but broken and ineffective is good too.
Coincidentally, Win32 is also the only stable API on Windows.
WinForms and WPF are still half-broken on .NET 5+, WinRT is out, basically any other desktop development framework since the Win32 era that is older than a couple of years is deprecated. Microsoft is famous for introducing new frameworks and deprecating them a few years later. Win32 is the only exception I can think of.
>"Coincidentally, Win32 is also the only stable API on Windows"
And this is what I use for my Windows apps. In the end I have self-contained binaries that can run on anything from Vista up to the most up-to-date OS.
Honest question: do you get HiDPI support if you write a raw win32 app nowadays? I haven’t developed for windows in over a decade so I’ve been out of the loop, and I used to think of win32 as the only “true” API for windows apps, but it’s been so long that I’m not sure if that opinion has gotten stale.
As a sometimes windows user, I occasionally see apps that render absolutely tiny when the resolution is scaled to 200% on my 4k monitor, and I often wonder to myself whether those are raw win32 apps that are getting left behind and showing their age, or if something else is going on.
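(For what it's worth, a raw Win32 process is DPI-unaware unless it opts in, preferably through its application manifest; a minimal programmatic sketch, assuming Windows 10 1703 or later and a matching SDK:)

    #define _WIN32_WINNT 0x0A00
    #include <windows.h>

    int WINAPI WinMain(HINSTANCE inst, HINSTANCE prev, LPSTR cmd, int show)
    {
        /* Without this (or the manifest equivalent) the system scales
           the window for you, which is where blurry or oddly sized
           windows on 200% monitors tend to come from. */
        SetProcessDpiAwarenessContext(DPI_AWARENESS_CONTEXT_PER_MONITOR_AWARE_V2);

        /* Once aware, GetDpiForWindow() gives the factor to apply when
           sizing fonts and layout yourself. */
        MessageBoxW(NULL, L"DPI-aware raw Win32", L"demo", MB_OK);
        return 0;
    }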
WinForms is mostly just a managed Win32 wrapper, so unsurprisingly it's very stable on the OS frameworks (.NET Framework 4.x).
Building for .NET Framework using any of its APIs is extremely stable, as development has mostly ceased. You pick a target framework depending on the oldest Windows version you must support.
MS has decided to deprecate .NET Framework, making .NET 5+ the recommended basis for C# desktop development going forward. Yes, you will still be able to run your old apps for many decades, but you can never move to a newer version of the C# language, and maintaining them is going to be an increasing pain as the years go by. I've already been down this road with VB6.
I recently experienced this in a critical situation. Long story short, something went very wrong during a big live event and I needed some tool to fix it.
I downloaded the 2 year old Linux binary, but it didn't run. I tried running it from an old Ubuntu Docker container, but there were dependencies missing and repos were long gone. Luckily it was open source, but compiling was taking ages. So in a case of "no way this works, but it doesn't hurt to try" I downloaded the Windows executable and ran it under Wine. Worked like a charm and everything was fixed before GCC was done compiling (I have a slow laptop).
I have personally used containers for this reason, to set up my gaming environment. If something breaks, all I need to do is run an older image and everything works.
"EAC" is Easy Anti Cheat, sold by Epic.[1] Not EarthCoin.
"EOS", in this context, is probably Epic Online Services, not one of the 103 other known uses of that acronym.[2]
Here's a list of the games using those features.[3]
So, many of these issues are for people building games with Epic's Unreal Engine on Linux. The last time I tried UE5, after the three-hour build, it complained I had an NVidia driver it didn't like. I don't use UE5, but I've tried it out of curiosity. They do support Linux, but, as is typical, it's not the first platform they get working. Epic does have support forums, and if this is some Epic problem encountered by a developer, it can probably be fixed or worked around.
Wine is impressive. It's amazing that they can run full 3D games effectively. Mostly. Getting Wine bugs fixed is somewhat difficult. The Wine people want bugs reported against the current dev version. Wine isn't set up to support multiple installed versions of itself. There's a thing called PlayOnLinux which does Wine version switching, but the Wine team does not accept bug reports if that's in use.[4] So you may need a spare machine with the dev version of Wine for bug reproduction.
> Wine isn't set up to support multiple installed versions of itself.
huh? the official wine packages for ubuntu, debian, and i believe fedora provide separate wine-devel and wine-staging packages, which can be installed in parallel with each other and with distro packages. in fact, debian (and ubuntu) as well as arch provide separate wine and wine-staging packages as part of the distro itself, no separate repo required.
wine has no special support for relocated installations, but no more or less so than any large Unix program; you can install as many copies as you want, but they must be compiled with different --prefixes, and you cannot use different versions of wine simultaneously with the same WINEPREFIX.
Without getting into spoilers, I'll say that playing "Inscryption" really got me thinking about how Docker's continued development could help consumers in the gaming industry.
I would love to see games virtualized and isolated from the default userspace, with passthrough for graphics and input to mitigate latency concerns. Abandonware could become a relic of the past! Being able to play what you buy on any device you have access to would be amazing.
I won't hold my breath, though. The industry pretty loudly rejected Nvidia's attempt to let us play games on their cloud without having to buy them all again. Todd needs the ability to sell us 15 versions of Skyrim to buy another house.
Doesn't the Steam Linux Runtime have a problem in the other direction though? Games are using libraries which are so old that they have bugs which are long since fixed or don't work properly in modern contexts. Apparently a lot of issues with Steam + Wayland comes from the ancient libraries in the Steam Linux Runtime from what I have been able to find out from googling issues I've experienced under Wayland.
Flatpak is basically Docker for Linux; there are layers and everything. What you're saying should be possible: if you make an AppImage/Flatpak out of the Steam Runtime + Proton (if needed) + game, it should run anywhere with the right drivers.
Glibc is not Linux, and they have different backwards compatibility policies, but everyone should still read Linus Torvalds' classic 2012 email about ABI compatibility: https://lkml.org/lkml/2012/12/23/75 Teaser: It begins with "Mauro, SHUT THE FUCK UP!"
man it's always a trip to see how much of a jerk torvalds could be, even if exasperation is warranted in this context (i have no idea), by god, this is not how you build consensus or a high functioning team
> Only an application that handles video should be using those controls, and as far as I know, pulseaudio is not a such application. Or are it trying to do world domination? So, on a first glance, this doesn't sound like a regression, but, instead, it looks tha pulseaudio/tumbleweed has some serious bugs and/or regressions.
Style and culture are certainly open for debate (I wouldn’t be as harsh as Linus), but correcting a maintainer who was behaving this way towards a large number of affected users was warranted. The kernel broke the API contract, a user reported it, and Mauro blamed the user for it.
When this comes up in conversation it is worthwhile remembering that Linux was built by the team of volunteers centered around Torvalds, who was famous for not acting like a jerk. Really. The perception among hackers of him being a good guy you could work with, who acknowledged when Linux had bugs, accepted patches, and was pretty self-effacing, is probably the thing that most made that project at that time take off to the stratosphere. Linus was a massive contrast to traditional bearded unix-assholery.
The nature of the work changes. The pressures change. The requirements change. We age. Also the times change too.
But yeah, it is possible to act like a jerk sometime without actually being a jerk in all things. It is also possible to be a lovely person who makes the odd mistake. Assholes can have good points. Life is nuanced.
Of the bajillion emails Linus has sent to LKML, how many can you find that you believe show evidence of him being a jerk?
Compare to Theo de Raadt at OpenBSD, who has also built a pretty useful thing with his community. Compare also to Larry Wall and Guido van Rossum.
None of us is above reasoned, productive criticism. Linus has done ok.
It’s not my personal style, but there are plenty of high-functioning teams in different domains headed by leaders who communicate like Torvalds, from Amy Klobuchar throwing binders (https://www.businessinsider.com/amy-klobuchar-throwing-binde...) to tons of high-level folks in banking, law firms, etc.
Put differently, you can construct a high functioning team composed of certain personalities who can dish out and take this sort of communication style without burning out on it.
Speaking about consensus: there is another thread on HN where people complain about the Android 13 UI. I guess that was built with a healthy dose of consensus.
The point is: sometimes you need a jerk with a vision so that the thing you're building doesn't turn into an amorphous blob.
I think if you take it out of context (which most people do), it looks a lot worse than it is.
A very senior guy who should've known better was trying, fairly persistently, to break a very simple rule everybody had agreed to, for a very bad reason. Linus told him to shut the fuck up.
I wouldn't say that Linus's reaction was anything to look up to, but I wouldn't say that calling the tone police is at all justified either.
> by god, this is not how you build a consensus or a high functioning team
I beg to differ. Linus Torvalds is an example for us all, and I’d argue he has one of the most, if not the most, highly functioning open source teams in the world. The beauty in open source is you’re not stuck with the people you do not want to work with. You can “pick” your “boss”. Plus, different people communicate differently. Linus is abrasive. That is Okay because it works for him. What is not okay is having other people policing the tone in a conversation. Linus had this same conversation with Sarah Sharp, I’ll post the relevant quote below:
Because if you want me to "act professional", I can tell you that I'm not interested. I'm sitting in my home office wearign a bathrobe. The same way I'm not going to start wearing ties, I'm also not going to buy into the fake politeness, the lying, the office politics and backstabbing, the passive aggressiveness, and the buzzwords. Because THAT is what "acting professionally" results in: people resort to all kinds of really nasty things because they are forced to act out their normal urges in unnatural ways.
> man it's always a trip to see how much of a jerk torvalds could be, even if exasperation is warranted in this context (i have no idea), by god, this is not how you build consensus or a high functioning team
True. I think Linux could've been pretty successful if someone with good management practices had been in charge from the start.
People often think that because jerks work at successful companies, you need to be a jerk to be successful. It’s more the other way around: a successful firm can carry many people who don’t add value, like parasites.
Glibc is GNU/Linux, though, and cannot be avoided when distributing packages to end users. If you want to interact with the userspace to do things like get users, groups, netgroups, or DNS queries, you have to use glibc functions, or your users will hit weird edge cases like being able to resolve hosts in cURL but not in your app.
Now, do I think it would make total sense for syscall wrappers and NSS to be split into their own libs (or dbus interfaces, maybe) with stable ABIs to enable other libcs? Absolutely! But we're not really there. This is something the BSDs got absolutely right.
There are other libc implementations that work on Linux with various tradeoffs. Alpine famously uses musl as a lightweight libc for containers. These alternate libc implementations implement user/group/network lookups via well-known files like /etc/shadow, /etc/passwd, etc. You could fully statically link one of these into your app and just rely on the extremely stable kernel ABI if you're so inclined.
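(A sketch of the lookup edge case from upthread; "someuser" is a hypothetical account. A fully static musl binary consults /etc/passwd only, while dynamic glibc may route the same call through NSS plugins per /etc/nsswitch.conf, so the two can disagree on the same host:)

    #include <pwd.h>
    #include <stdio.h>

    int main(void)
    {
        struct passwd *pw = getpwnam("someuser"); /* hypothetical account */
        if (pw)
            printf("uid=%u home=%s\n", (unsigned)pw->pw_uid, pw->pw_dir);
        else
            printf("no such user, as far as this libc can see\n");
        return 0;
    }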
> Now, do I think it would make total sense for syscall wrappers and NSS to be split into their own libs (or dbus interfaces, maybe) with stable ABIs to enable other libcs? Absolutely!
But there are other "Linux"es that are not GNU/Linux, which I think was the point. Like Android, which doesn't use glibc and doesn't have this mess. I think that was one of the things people used to complain about, that Android didn't use glibc, but since glibc seems to break ABI compatibility kinda on the regular, that was probably the right call.
Solaris had separate libc, libnss, libsocket, and libpthread, I think?
Unlike many languages, Go doesn't use any libc on Linux. It uses the raw kernel API/ABI: system calls. Which is why a Go 1.18 binary is specified to be compatible with kernel version 2.6.32 (from December 2009) or later.
There are trade-offs here. But the application developer does have choices, they're just not no-cost.
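(For illustration, the closest C equivalent is the generic syscall(2) wrapper; a sketch, noting that Go itself emits the SYSCALL/SVC instruction directly with no libc involved at all:)

    #define _GNU_SOURCE
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        /* Talk to the (very stable) kernel ABI by syscall number rather
           than through the usual write()/exit() libc wrappers. */
        static const char msg[] = "hello from the raw kernel interface\n";
        syscall(SYS_write, 1, msg, sizeof msg - 1);
        syscall(SYS_exit_group, 0);
    }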
If in distribution discussions "Linux" is the name for the operating system and shell, downplaying the role of GNU, then it is also fair game to say here: Linux does not have a stable ABI, because glibc changed.
The ABI of the Linux kernel seems reasonably stable. Somebody should write a new dynamic linker that lets you easily have multiple versions of libraries, even libc, around. Then it's just like Windows, where you have to install some weird MSVC runtimes to play old games.
Or, GNU could just recognise their extremely central position in the GNU/Linux ecosystem and just not. break. everything. all. the. time.
It honestly really shouldn't be this hard, but GNU seems to have an intense aversion towards stability. Maybe moving to LLVM's replacements will be the long-term solution. GNU is certainly positioning itself to become more and more irrelevant with time, seemingly intentionally.
The issue is more subtle than that. The GNU and glibc people believe that they provide a very high level of backwards compatibility. They don't have an aversion towards stability and in fact, go far beyond most libraries by e.g. providing old versions of symbols.
The issue here is actually that app compatibility is something that's hard to do purely via theory. The GNU guys do compatibility on a per function level by looking at a change, and saying "this is a technical ABI break so we will version a symbol". This is not what it takes to keep apps working. What it actually takes is what the commercial OS vendors do (or used to do): have large libraries of important apps that they drive through a mix of automated and manual testing to discover quickly when they broke something. And then if they broke important apps they roll the change back or find a workaround regardless of whether it's an incompatible change in theory or not, because it is in practice.
Linux is really hurt here by the total lack of any unit testing or UI scripting standards. It'd be very hard to mass test software on the scale needed to find regressions. And, the Linux/GNU world never had a commercial "customer is always right" culture on this topic. As can be seen from the threads, the typical response to being told an app broke is to blame the app developers, rather than fix the problem. Actual users don't count for much. It's probably inevitable in any system that isn't driven by a profit motive.
GNU / glibc is _hardly_ the problem regarding ABI stability. TFA is about a library trying to parse executable files, so it's kind of a corner case; hardly representative.
The problem when you try to run a binary from the 90s on Linux is not glibc. Think of, e.g., one of the Loki games, like SimCity. The audio will not work (and this will be a kernel ABI problem...). The graphics will not work. There will be no desktop integration whatsoever.
Windows has installed those MSVC runtimes via Windows Update for the last decade.
With Linux, every revision of GCC has its own GLIBCXX version, but distros don't keep those up to date, so you'll find that code built with even an old compiler (like GCC 10) isn't supported out of the box.
It's what the kernel strives for. They're remarkably consistent in their refrain of "we never break userspace."
I think it would be reasonable for glibc and similar to have similar goals, but I also don't run those projects and don't know what the competing interests are.
> I think it would be reasonable for glibc and similar to have similar goals
I don’t think userspace ever had this goal. The current consensus appears to be containers, as storage is cheap and maintaining backwards compatibility is expensive
Does that not miss the point of the above poster? This does not show that Linux has good binary compatibility, but that C is a very stable language. Would it run fine if you compiled it on a 27-year-old compiler and then tried to run it on Linux? That is the question that should be asked, if I am not mistaken.
YSK, this code will likely fail in weird ways on platforms where plain char is unsigned by default, like ARM, because it makes the classic mistake of assuming that the getc return value is compatible with the char type, despite getc returning int and not char. EOF is -1, and assigning it to a char on ARM turns it into 255, so you'll read past the end of some buffers and then crash.
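(The bug in miniature, with the fix alongside:)

    #include <stdio.h>

    /* WRONG: where plain char is unsigned (the ARM ABI default),
       (char)EOF becomes 255, the comparison is never true, and the
       loop spins forever once the stream is exhausted. */
    void broken(FILE *f)
    {
        char c;
        while ((c = getc(f)) != EOF)
            putchar(c);
    }

    /* RIGHT: keep the full int so EOF (-1) stays distinguishable
       from a legitimate 0xFF byte. */
    void fixed(FILE *f)
    {
        int c;
        while ((c = getc(f)) != EOF)
            putchar(c);
    }

    int main(void)
    {
        fixed(stdin);
        return 0;
    }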
This is a long-standing question and has nothing to do with Linux or Windows. It's a design philosophy.
Yes, the Win32 ABI is very stable. It's also a very inflexible piece of code, and it drags its 20-year-old context around with it. If you want to add something to it, you are going to work, and work hard, to ensure that your feature plays nicely with 20-year-old code; and if what you want to do is ambitious... say, refactoring it to improve its performance... you are eternally fighting a large chunk of the codebase implementation that can't be changed.
Linux isn't about that and it never has been; it's about making the best monolithic kernel possible, with high-level Unix concepts that don't always have to have faithful implementations. The upside here is that you can build large and ambitious features that refactor large parts of how core components work if you like, but you might only compile those features against a somewhat recent glibc.
This is a choice. You, the developer, can link against whatever version you want. If you want to build in broad support, then just use features that already existed 10 years ago and you'll get similar compatibility to Win32. If not, then you are free to explore new features and performance you don't have to implement or track yourself, provided you consider it a sensible requirement that someone has to be running a somewhat recent version of glibc.
The pros and cons are for you to decide, but it's not as simple as saying that Windows is better because its focus is backwards compatibility. There is an ocean of context hidden behind that seemingly magical backwards support...
According to Wikipedia, "Win32 is the 32-bit application programming interface (API) for versions of Windows from 95 onwards.".
Also from there: "The initial design and planning of Windows 95 can be traced back to around March 1992", and it was released in '95. So arguably, the design decisions are closer to 30 years old than 20 :)
The main structure is from Win16, although adding support for paging and process isolation was a pretty big improvement in Win32. IMO it's held up extremely well considering it's 40 years old.
Surprisingly, that seems correct—a Flatpak bundle includes a glibc; though that only leaves me with more questions:
- On one hand, only one version of ld.so can exist in a single address space (duh). Glibc requires carnal knowledge of ld.so, thus only one version of glibc can exist in a single address space. In a Flatpak you have (?) to assume the system glibc is incompatible with the bundled one either way, thus you can’t assume you can load host libraries.
- On the other hand, a number of system services on linux-gnu depend on loading host libraries. Even if we ignore NSS (or exile it into a separate server process as it should have been in the first place), that leaves accelerated graphics: whether you use Wayland or X, ultimately an accelerated graphics driver amounts to a change in libGL and libEGL / libGLX (directly or through some sort of dispatch mechanism). These libraries require carnal knowledge of the kernel-space driver, thus emphatically cannot be bundled; but the previous point means that you can’t load them from the host system either.
- Modern toolkits basically live on accelerated graphics. Flatpak was created to distribute graphical applications built on modern toolkits.
There is no 'system' glibc. Linux doesn't care. The Linux kernel loads the ELF interpreter specified in the ELF file, based on the existing file namespace. If that ELF interpreter is the system one, then Linux will likely remap it from the existing page cache. If it's something else, Linux will load it, and then it will parse the remaining ELF sections. The Linux kernel is incredibly stable ABI-wise. You can have any number of dynamic linkers happily coexisting on the machine. With Linux-based operating systems like NixOS, this is a normal day-to-day thing. The kernel doesn't care.
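(That interpreter is just a path stored in the binary's PT_INTERP program header, the thing `readelf -l` shows as "Requesting program interpreter". A small sketch that prints it for a 64-bit ELF:)

    #include <elf.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) return 1;
        FILE *f = fopen(argv[1], "rb");
        if (!f) return 1;

        Elf64_Ehdr eh;
        if (fread(&eh, sizeof eh, 1, f) != 1) return 1;

        for (int i = 0; i < eh.e_phnum; i++) {
            Elf64_Phdr ph;
            fseek(f, (long)(eh.e_phoff + i * sizeof ph), SEEK_SET);
            if (fread(&ph, sizeof ph, 1, f) != 1) break;
            if (ph.p_type == PT_INTERP) { /* the dynamic linker's path */
                char *interp = malloc(ph.p_filesz);
                fseek(f, (long)ph.p_offset, SEEK_SET);
                /* p_filesz includes the terminating NUL byte */
                if (interp && fread(interp, 1, ph.p_filesz, f) == ph.p_filesz)
                    printf("%s\n", interp);
                free(interp);
            }
        }
        fclose(f);
        return 0;
    }

(On a typical distro /bin/ls reports /lib64/ld-linux-x86-64.so.2; on NixOS you'll see a store path instead, which is exactly the point.)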
> These libraries require carnal knowledge of the kernel-space driver, thus emphatically cannot be bundled; but the previous point means that you can’t load them from the system either.
No they don't. The Linux kernel ABI doesn't really ever break. Any open-source driver shouldn't require any knowledge of internals from user-space. User-space may use an older version of the API, but it will still work.
> whether you use Wayland or X, ultimately an accelerated graphics driver amounts to a change in libGL and libEGL / libGLX (directly or through some sort of dispatch mechanism)
OpenGL is even more straightforward because it is typically consumed as a dynamically loaded API; thus, as long as the symbols match, it's fairly straightforward to replace the system libGL.
> On one hand, only one version of ld.so can exist in a single address space (duh). Glibc requires carnal knowledge of ld.so, thus only one version of glibc can exist in a single address space.
Yes
> In a Flatpak you have (?) to assume the system glibc is incompatible with the bundled one either way, thus you can’t assume you can load host libraries.
Not exactly. You must assume that the host glibc is incompatible with the bundled one, that's right.
But that does not mean you cannot load host libraries. You can load them (provided you got them somehow inside the container namespace, including their dependencies) using the linker inside the container.
> whether you use Wayland or X, ultimately an accelerated graphics driver amounts to a change in libGL and libEGL / libGLX (directly or through some sort of dispatch mechanism).
In Wayland, your app hands the server a bitmap to display. How you got that bitmap rendered is up to you.
The optimization is that you send a dma-buf handle instead of a bitmap. This is a kernel construct, not a userspace-driver one. This also allows cross-API app/compositor setups (i.e. a Vulkan compositor and an OpenGL app, or vice versa). It also means you can use a different version of the userspace driver with the compositor than inside the container, while they share the kernel driver.
> These libraries require carnal knowledge of the kernel-space driver, thus emphatically cannot be bundled; but the previous point means that you can’t load them from the host system either.
Yes and no; Intel and AMD userspace drivers have to work with a variety of kernel versions, so they cannot be too tightly coupled. The Nvidia driver has tightly coupled userspace and kernel space, but with the recent open-sourcing of the kernel part, this will also change.
> but the previous point means that you can’t load them from the host system either.
You actually can -- bind-mount that single binary into the container. You will use the binary from the host, but load it using the ld.so from inside the container.
Does graphics on Linux work by loading the driver into your process? I assumed it works by writing a protocol to shared memory in the case of Wayland, or over a socket (or some byzantine shared-memory stuff that is only defined in the Xorg source) in the case of X11.
From my experience, if you have the kernel headers and all the required options compiled into your kernel, you can go really far back, build a modern glibc and GTK+ stack, and use a modern application on an old system. If you do some tricks with rpath, everything is self-contained. I think it should work the other way around, with old apps on a new kernel + display server, as well.
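(The rpath trick is usually DT_RUNPATH with $ORIGIN, i.e. linking with -Wl,-rpath,'$ORIGIN/lib' so libraries resolve relative to the executable. A runtime sketch of the same idea; libbundled.so is a hypothetical library shipped alongside the binary:)

    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <libgen.h>
    #include <limits.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char exe[PATH_MAX];
        ssize_t n = readlink("/proc/self/exe", exe, sizeof exe - 1);
        if (n < 0) return 1;
        exe[n] = '\0';

        /* load a bundled lib from lib/ next to the executable,
           ignoring whatever the system may have installed */
        char path[PATH_MAX];
        snprintf(path, sizeof path, "%s/lib/libbundled.so", dirname(exe));

        void *h = dlopen(path, RTLD_NOW);
        printf("%s: %s\n", path, h ? "loaded" : dlerror());
        return h ? 0 : 1;
    }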
Static linking -- always the ready-to-go response for anything ABI-related. But does it really help? What use is a statically linked glibc+Xlib when your desktop no longer sets resolv.conf in the usual place and no longer speaks the X11 protocol (perhaps in the name of security)?
I guess that kind of proves the point that there is no "stable", well, anything on Linux. Something like /etc/resolv.conf is part of the user-visible API on Linux; if you change that, you're going to break applications.
/etc/sysctl.conf is a good example; on some systems it just works, on some systems you need to enable a systemd service thingy for it, but on some systems the systemd thingy doesn't read /etc/sysctl.conf and only /etc/sysctl.d.
So a simple "if you're running Linux, edit /etc/sysctl.conf to make these changes persist" has now become a much more complicated story. Writing a script to work on all Linux distros is much harder than it needs to be.
Even statically linked, the problems you just described remain. The issue is X11 isn’t holding up and no one wants to change. Wayland was that promised change, but it has taken 15+ years to develop (and is still developing).
The Linux desktop is a distro concern now, not an ecosystem concern. It long ago left the realm of a Linux-wide concern, when macOS went free (with paid hardware, of course) and Windows was giving away free Windows 10 licenses to anyone who asked.
Deepin desktop and elementary are at the top of my list for elegance and ease of use. Apps and games need a solid ABI, and this back-and-forth between GNOME and KDE doesn’t help.
With so many different WMs and desktop environments, X11 is still the only method of getting a window with an OpenGL context in any kind of standard way. Wayland, X12, whatever it is: we need a universal ABI for window dressing for Linux to be taken seriously on the desktop.
Would you rather run a statically linked Go or Rust binary with the language's native TLS/SSL implementation, or a dynamically linked OpenSSL that is easier to upgrade? (Honest question)
The glibc libs should be ELF clean. Namely, a pure and simple ELF64 dynamic exe should be able to "libdl"/dynamically load any glibc lib. It may be fixed and possible in the latest glibc.
The tricky part is the SysV ABI for TLS-ized system variables: __tls_get_addr(). For instance, errno. It seems the pure and simple ELF64 dynamic exe would have to parse the ELF headers of the dynamically loaded shared libs in order to get the "offsets" of the variables. Never actually got into this for good, though.
And in the game realm, you have C++ games (I have an extremely negative opinion of this language), and the static libstdc++ from GCC does not "libdl"/dynamically load what it requires from glibc; it seems even worse, namely it would depend on glibc-internal symbols.
Then, if I got it right, for TLS-ized variables dlsym should do the trick. Namely, dlsym will return the address of a variable for the calling thread. Then this pointer can be cached however the thread wants. On x86_64, one can "optimize" the dlsym calls by reusing the same address for all threads, namely one call is enough.
Now the real pain is this static libstdc++ not libdl-ing anything, or worse, expecting internal glibc symbols (C++...).
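(For errno specifically there's an escape hatch worth noting: glibc exports __errno_location(), a function returning the calling thread's errno slot, so a hand-rolled loader can dodge the __tls_get_addr machinery for that one. A sketch:)

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        void *libc = dlopen("libc.so.6", RTLD_NOW);
        if (!libc) return 1;

        /* each thread gets its own slot; cache the pointer per thread */
        int *(*errno_loc)(void) = (int *(*)(void))dlsym(libc, "__errno_location");
        if (errno_loc)
            printf("this thread's errno lives at %p (value %d)\n",
                   (void *)errno_loc(), *errno_loc());
        return 0;
    }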
Windows has switched from app-redistributed MSVC runtimes to an OS-distributed "universal CRT" as of Windows 10 (2015). Unlike MSVCRT, the UCRT is ABI-stable.
I was going to make a joke about a.out support (and all the crazy stuff that enables, like old SCO binaries) but apparently a.out was removed in May as an option in the Linux kernel.
One way to achieve similar results on Linux might be for the Linux kernel team to start taking control of core libraries like X11 and Wayland, and to extend the same "don't break userspace" philosophy to them as well. That isn't going to happen, but I can dream!
My understanding is that the Linux devs like having "Linux" be only the kernel; if they wanted to run things BSD-style (whole base system developed as a single unit), I assume they would have done that by now (it's been almost 30 years).
I'm not sure about taking over the entire GUI ecosystem but I certainly do want more functionality in the kernel instead of user space precisely because of how stable and universal the kernel is. I want system calls, not C libraries.
I used to disagree with this browser-as-OS mentality, but seeing as it's sandboxed and supports WebGL, wasm, WebRTC, etc., I find it pretty convenient (if I'm forced to run Zoom, for example, I can just keep it in the browser). Just as long as website/application vendors test their stuff across different browsers.
At this point I'm pretty convinced that no one at Microsoft ever did a better job of keeping people on Windows than what the maintainers of glibc are doing …
Well, the WSL team did a lot, I think (including the new Terminal). WSL, WSL2, WSLg, WSA: I almost never use full Linux VMs now; my pretty simple needs are covered by them.
The change that caused the break would be equivalent to the PE file format changing in an incompatible way on Windows, to give an idea of how severe it is.
Dynamically-linked userspace C is a smouldering trash heap of bad decisions.
You'd be better off distributing anything--anything--else than dynamically-linked binaries. Jar files, statically-linked binaries, C source code, Python source code, Brainfuck code ffs...
The "./configure and recompile from source" model of Linux is just too deeply entrenched. Pity.
Personal experience: Office 2021 and Office 97 do not paginate a DOC file created (by Microsoft employees) in Office 97 the same way, so the table of contents ends up different.
As a gamedev who tried shipping on Linux: we really need some standardized minimal image to target, with an ancient glibc and such, and some guarantee that if it runs on the image, it runs on future Linux versions.
Just target Flatpak. You get a standardised runtime, and glibc is included in the container. If it works on your machine, it'll work on my machine, since the only difference will be the kernel, and Linus is pretty adamant about retaining compatibility at the syscall level.
Sidenote: I remember when Warcraft 3 ran better in Wine+Debian than in Windows.
An Athlon II X2 CPU and an Nvidia GeForce 6600 GT with a whopping 256MB of VRAM. That was one hot machine. Poor coolers.
Yeah, Linus's "we don't break user space" is a joke.
Great, the kernel syscall API is stable. Who cares, if you can't run a 7-year-old binary because everything from the vDSO to libc to libstdc++ to ld-linux.so is incompatible?
Good luck. No, it's not just a matter of LD_LIBRARY_PATH and shipping a binary with a copy. That only helps with third-party libs, and only if the vDSO and ld-linux are compatible.
My 28-year experience running Linux is that its API (source code) is unbroken, but absolutely not its ABI.
Linus does provide a stable ABI with Linux, it's GNU who drops the ball and doesn't. You're criticizing Linus for something he has nothing to do with. What's the point in that?
Linus limited his scope to something that doesn't matter for users.
I think this is a valid criticism.
It's admirable to do the dishes, but the house is also on fire, so nobody will be able to enjoy the clean plates. So what's even the point of doing the dishes?
In fact, in this analogy he could have saved the kitten instead of doing the dishes.
Err, back from analogy land: ABI stability makes it harder to make things better, improving and replacing APIs. This is expected. But here we are in the worst of both worlds: thanks to the kernel we are slowed down in improvements, and thanks to kinda-userspace (i.e. the vDSO & ld-linux) and userspace infra (libc, libstdc++, libm) we don't have ABI compatibility either.
I wrote a game for my master's thesis in 2008. I wrote it in C++ and targeted Linux. Recently I tried to run it, and not only did the binaries not work (that's a given), but even making it compile was a struggle, because I had used some GUI libraries that were abandoned, with no version working with a modern libc. It was easier to port the game to Windows than to make it compile on Linux again...
In my opinion, Valve plus the distros should fork glibc and do a glibc distribution that focuses on absolute stability.
Didn't the glibc devs say that distros have the freedom to choose what to maintain so as not to break their applications? This would just be a collaboration between the distros to maintain that stability.
I suppose that Win32 can be helpful if you want to make programs that run on both Windows and Linux (and also ReactOS, I suppose), but it might not work as well for programs with Linux-specific capabilities.
(Also, I have problems installing Wine, due to package manager conflicts.)
There are other possibilities, such as .NET (although some comments in here say it is less stable, some say it works), which also can be used on Windows and on Linux. There is also HTML, which has too many of its own problems, and I do not know how stable HTML really is, either. And then, also Java. Or, you can write a program for DOS, the NES/Famicom, or something else, and it can be emulated on many systems. (A program written for the NES/Famicom might well run better on many systems than native code does, especially if you do not do something too tricky in the code, in which case some implementations might not be compatible.) Of course, the different ways have different advantages and disadvantages, with compatibility, efficiency, capability, and other features.
I laughed so hard... with tears! But, to be fair, Unreal 2004 still works almost perfectly on a not-too-obsolete Ubuntu. Or did I have to do some glibc trickery? Can't remember for sure.
If web browsers had any kind of stable interface we wouldn't have https://caniuse.com/, polyfills, vendor CSS prefixes, and the rest of the crutches. WASM isn't binary. But that's all irrelevant when we talk about ABI.
ABI specifically means the binary interface between binary modules. For example: my hello_world binary and glibc, or glibc and the Linux kernel, or some binary and libsqlite3.
The kernel <> userspace API is stable, famously so. Dynamic linking against glibc is a terrible idea; statically link your binaries against musl and they'll still work in 100 years.
oh, no, not again: kids working for big tech constantly, but randomly, deprecating, removing, and breaking APIs/ABIs/features in the kernel/libraries/everywhere. I honestly believe that all relationships between big tech companies and open source are toxic and follow the Microsoft principle of embrace, extend, and extinguish.
This is by design, and everybody should be aware of that. I don't know about glibc, but as far as the kernel is concerned, Linus has never guaranteed ABI stability. API, on the other hand, is extremely stable, and there are good reasons for that.
In Windows, software is normally distributed in a binary form, so having ABI compatibility is a must.
Uh, the kernel ABI is extremely stable. You could take a binary that was statically compiled in the '90s and run it on the latest release of the kernel. "Don't break userspace" is Linus's whole schtick, and he's talking about ABI when he says that.
This is about the ABIs of userspace "system-level" libraries, glibc in particular.
Many moons ago, one of the things I did was to port the Windows version of Google Earth to both Mac and Linux. I did the mac first, which was onerous, because of all the work involved in abstracting away system specific API's, but once that was done, I thought Linux would be a lesser task, and we hired a great linux guy to help with that.
Turns out, while getting it running on linux was totally doable, getting it distributed was a completely different story. Due to IP reasons, this can't ship as code, so we need to ship binaries. How do you do that? Do you maintain a few separate versions for a few popular distributions? Do you target the Linux Standard Base? The first approach is a lot of work, and suffers from breakages from time to time, and you alienate users not on your list of supported distros. The second version, using LSB, was worse, as they specify ancient libraries and things like OpenGL aren't handled properly.
End result; management canned the Linux version because too much ongoing support work was required, and no matter what you did, you got hate mail from Gentoo users.
> Due to IP reasons, this can't ship as code, so we need to ship binaries. How do you do that?
I build on an distro with an old enough glibc following this table: https://gist.github.com/wagenet/35adca1a032cec2999d47b6c40aa... (right now rockylinux:8 which is equivalent to centos:8 and good enough for debian stable and anything more recent than that ; last year I was still on centos:7), use dlopen as much as possible instead of "normal" linking and then it works on the more recent ones without issues.
I worked on a product that shipped as a closed source binary .so (across four OSes and two architectures) for almost seven years, and that's exactly what we did too — build on the oldest libc and kernel any of your supported distros (or OS versions) support, statically link as much as you can, and be defensive about _any_ runtime dependencies you have.
That's the trick. AppImage has a pretty good list of other best practices too: https://docs.appimage.org/reference/best-practices.html (applies even if you don't use AppImages).
If what you're doing works for you, great, but in case it stops working at some point (or if for some reason you need to build on a current-gen distro version), you could also consider using this:
https://github.com/wheybags/glibc_version_header
It's a set of autogenerated headers that use symbol aliasing to allow you to build against your current version of glibc, but link to the proper older versioned symbols such that it will run on whatever oldest version of glibc you select.
1 reply →
> We need to ship binaries. How do you do that? Do you maintain a few separate versions for a few popular distributions? Do you target the Linux Standard Base?
When I worked on mod_pagespeed we went with the first approach, building an RPM and a DEB. As long as we built on the oldest still-supported CentOS and Ubuntu LTS, 32-bit and 64-bit, we found that our packages worked reliably on all RPM- and DEB-based distros. Building four packages was annoying, but we automated it.
(We also distributed source, so it may be that it didn't work for some people and they instead built from source. But people would usually ask questions on https://groups.google.com/g/mod-pagespeed-discuss before resorting to that I don't think I saw this issue.)
FWIW, these days Valve tries to solve same problems with their steam runtime[0][1]. Still doesn't seem easy, but looks like almost workable solution.
[0] https://github.com/ValveSoftware/steam-runtime
[1] https://archive.fosdem.org/2020/schedule/event/containers_st...
A multi billion dollar company with massive investments in Linux making an almost workable solution means everyone else is screwed
26 replies →
Was static linking not enough?
I feel like the problem most people run into today is glibc vs. musl differences. They develop on Ubuntu, then think they can just copy their binaries into a "FROM alpine:latest" container, which doesn't actually work.
It is possible, though, that whatever you statically link doesn't work with the running kernel, of course. And there are a lot of variants out there; every distribution has their own patch cadence. (A past example of this was the Go memory corruption issue from 1.13 on certain kernels. 1.14 added various checks for distribution + kernel version to warn people of the issue, and still got it wrong in several cases. Live on the bleeding edge, die on the bleeding edge.)
> I feel like the problem most people run into today is glibc vs. musl differences. They develop on Ubuntu, then think they can just copy their binaries into a "FROM alpine:latest" container, which doesn't actually work.
Could it work with gcompat? Alpine has it in the community repo.
https://git.adelielinux.org/adelie/gcompat
2 replies →
Static linking against MUSL only makes sense for relatively simple command line tools. As soon as 'system DLLs' like X11 or GL are involved it's back to 'DLLs all the way down'.
> Was static linking not enough?
It is a GPL violation when non-GPL software does it.
1 reply →
How do Firefox and Blender do it? They just provide compressed archives, which you uncompress into a folder and run the binary, no problem. I myself once had to write a small CLI program in Rust, where I statically linked musl. I know, can't compare that with OpenGL stuff, but Firefox and Blender do use OpenGL (and perhaps even Vulkan these days?).
Firefox maintains a Flatpak package on Flathub. Flatpak uses runtimes to provide a base layer of libraries that are the same regardless of which distro you use.
https://beta.flathub.org/apps/details/org.mozilla.firefox
with difficulty, and not that well. for example, firefox binaries require gtk built with support for X, despite only actually using wayland at runtime if configured. the reason why people generally don't complain about it is because if you have this sort of weird configuration, you can usually compile firefox yourself, or have it compiled by your distro. with binary-only releases, all complaints (IMO justifiably) go to the proprietary software vendors.
Firefox has a binary they ship in a zip which is broken but they also officially ship a Flatpak which is excellent.
> The second version, using LSB, was worse, as they specify ancient libraries and things like OpenGL aren't handled properly.
That was a shame. There was a lot of hope for LSB, but in the end the execution flopped. I don't know if it would have been possible to make it succeed.
So this sort of bleeds into the Init Wars, but there's a lot of back and forth about whether LSB flopped or was deliberately strangled by a particular player in the Linux ecosystem.
I guess this is another instance of: Windows and macOS are operating systems; "Linux" is a kernel, powering multiple different operating systems.
It is important to note that this comment is from a time before snaps, flatpaks and AppImages.
Yesterday I tried to install an Inkscape plugin I have been using for a long time. I upgraded my system and the plugin went away. So I downloaded the zip, opened Inkscape, opened the plugin manager, went to add the new plugin via the file manager… and the opened file manager was unable to see my home directory (weird, because when opening Inkscape files it can see home, but when installing extensions it cannot). It took some time to figure out how to get the downloaded file into a folder the Inkscape snap could see. Somehow, though, I still could not get it installed. Eventually I uninstalled the snap and installed the .deb version. That worked!
Recently I downloaded an AppImage for digiKam. It immediately crashed when I tried to open it, I believe because its glibc expectations did not match my system's version (a recent stable Ubuntu).
Last week I needed to install a gnome extension. The standard and seemingly only supported way of doing this is to open a web page, install a Firefox extension, and then click a button on the web page to install it. The page told me to install the Firefox extension and that worked properly. Then it said Firefox didn’t have access to the necessary parts of my file system. It turns out FF is a snap and file system access is limited, so the official way of installing the gnome extension doesn’t work. I ended up having to download and install chrome and install the gnome extension from there.
These new “solutions” have their own problems.
If snaps or flatpaks are the only future for Linux desktop software distribution then I'm switching to windows+wsl
Yeah. Now people just statically link the dynamic libraries.
>The first approach is a lot of work, and suffers from breakages from time to time
Are there any distros that treat their public APIs as an unbreakable contract with developers like what MS does?
Red Hat claims, or at least claimed, that for EL. I think it's limited to within minor releases though, with majors only keeping API compatibility.
That’s fine if you’re OK relying on their packages and 3rd party “enterprise” software that’s “certified” for the release. No one in their right mind would run RHEL on a desktop.
The most annoying thing to me was that RHEL 6 was still under support and had an ancient kernel that excluded running Go, GraalVM, etc. static binaries. No epoll() IIRC.
Oftentimes you find yourself having to pull more and more libraries into a build. It all starts with wanting a current Python, and before you know it you're bringing in your own OpenSSL.
And they have no problem changing their system management software in patch releases. They’ve changed priority of config files too many times. But that’s another rant for another day.
This is a place where I wish some BSD won out. With all the chunks of the base userspace + kernel each moving in their own direction it’s impossible to get out of this place. Then add in every permutation of those pieces from the distros.
Multiple kernel versions * multiple libc implementations * multiple inits * …
I’d never try to make binary-only software for Linux. Dealing with packaging OSS is bad enough.
No, no one does. It's a lot more work to maintain all public APIs and their behavior for all time; it can often prevent even fixing bugs, if some apps come to depend on the buggy behavior. Microsoft would occasionally add API parameters/options to let clients opt in to bug fixes, or auto-detect known-popular apps and apply special bug-fix behaviors just for them.
Even Apple doesn't make that level of "unbreakable contract" commitment. Apple will announce deprecations of APIs with two or three years of opportunity to fix them. If apps don't upgrade within the timeframe, they just stop working in newer versions of macOS.
Are these Linux app distribution problems solved by using Flatpak?
Most of them are, yes. AppImage also solves this, but doesn't have as robust of an update/package management system
Yeah, their Linux guy obviously didn't know what he was doing.
In that context, cannot the issue be sidestepped entirely by statically linking[1] everything you need?
AFAIK the LGPL license even allows you to statically link glibc, as long as you provide a way for your users to load their own version of the libs themselves if that's what they want.
[1]: (or dlopening libs you bundle with your executable)
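A minimal sketch of that dlopen pattern, with a hypothetical libfoo standing in for the real dependency: prefer a user-supplied copy, then fall back to the one bundled next to the executable, which is one way to keep the LGPL's replaceability requirement satisfied:

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        /* honor a user/system copy first, then fall back to the bundle
           (link with -ldl on older glibc) */
        void *h = dlopen("libfoo.so.1", RTLD_NOW);
        if (!h)
            h = dlopen("./lib/libfoo.so.1", RTLD_NOW);
        if (!h) {
            fprintf(stderr, "libfoo: %s\n", dlerror());
            return 1;
        }
        int (*foo_init)(void) = (int (*)(void))dlsym(h, "foo_init");
        if (foo_init)
            foo_init();
        dlclose(h);
        return 0;
    }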
Ricers wanna rice! Can we spin the globe so fast that it breaks apart?
Would the hate mails also have included 'internal' users, say from Chromium-OS?
Another approach might be a hybrid, with a closed-source binary "core", and open-source code and linkage glue between that and OS/other libraries. And an open-source project with one-or-few officially-supported distributions, but welcoming forks or community support of others.
A large surface area app (like Google Earth?) could be less than ideal for this. But I've seen a closed-source library, already developed internally on linux, with a small api, and potential for community, where more open availability quagmired on this seemingly false choice of "which distributions would we support?"
> Due to IP reasons, this can't ship as code, so we need to ship binaries.
Good, it should be as difficult as possible, if not illegal, to ship proprietary crap to Linux. The operating system was always intended to be Free Software. If I cannot audit the code, it’s spyware crap and doesn’t belong in the Linux world anyway.
Could you have used something like this:
https://justine.lol/cosmopolitan/index.html
I'd assume not without violating causality?
Loki managed to release binaries for Linux long before Google Earth was a thing. I'm not going to claim that things are/were perfect, but you never needed to support each distro individually: just ship your damned dependencies, except for base system stuff like libc and OpenGL, which provides pretty good backwards compatibility, so you only need to target the oldest version you want to support and it will work on newer ones as well.
And then you have many devs complaining about why MS doesn't want to invest time in MAUI for Linux. This is why.
One possible idea: https://appimage.org
Wait. Google Earth has always been available for Linux? https://www.google.com/earth/versions/
They probably mean the old desktop one that has been re-branded to "Google Earth Pro". The UI looks a decade old but it's still useful for doing more advanced things like taking measurements.
Flatpak solved this issue. You use a "runtime" as the base layer, similar to the initial `FROM` in Dockerfiles. Flatpak then runs the app in a containerized environment.
Agree. Had a few games on Steam crap out with the native version, forced it to use proton with the Windows version, everything worked flawlessly. Developers natively porting to linux seem to be wasting their time.
Funnily enough, with Wine we've kind of recreated the model of modern Windows, where Win32 is a personality on top of the NT API which then interfaces with the kernel. Wine sits between the application and the zoo of libraries, including libc, that change all the time.
> Developers natively porting to linux seem to be wasting their time.
Factorio runs so much better than any of this emulationware, it's one of the reasons I love the game so much and gifted licenses for friends using Windows.
Some software claims to support Linux but uses some tricks to avoid recompiling and it's always noticeable, either as lag, as UI quirks, or some features plainly don't work because all the testers were windows users.
Emulating as a quick workaround is all fair game but don't ship that as a Linux release. I appreciate native software (so long as it's not java), and I'm also interested in buying your game if you advertise it as compatible with WINE (then I'm confident that it'll work okay and you're interested in fixing bugs under emulation), just don't mislead and pretend and then use a compatibility layer.
In case you weren't aware Wine is not an emulator, it is a compatibility layer.
The whole point of wine is to take a native Windows app, only compiled for Windows and translate its Windows calls to Linux calls.
Have you actually tried to run the Windows version of Factorio through Proton and experienced slowdowns? In my experience, WINE doesn't result in a noticeable slowdown compared to running on Windows natively (without "emulationware" as you call it), unless there are issues related to graphics API translation which is a separate topic.
Wine Is Not an Emulator
I've been using wine and glibc for almost 20 years now and wine is waaaay more unstable than glibc.
Wine is nice until you try to play Sims 3 after updating Wine. Every new release of Wine breaks it.
Please use wine for more than a few months before commenting on how good it is.
It's normal that with every new release some game stops working, which is why Steam offers the option to choose which Proton version to use. If they all worked great, one could just stick to the latest.
As someone who's been gaming on Proton or Lutris + Raw Wine, I'm not sure I agree. I regularly update Proton or Wine without seeing major issues or regressions. It certainly happens sometimes, but I'm not sure it's any worse of a "version binding" problem than a lot of stuff in Linux is. Sure, sometimes you have to specifically use an older version, but getting "native" linux games to work on different GPU architectures or distros is a mess as well, and often involves pinning drivers or dependencies. I've had games not run on my Fedora laptop that run fine on my Ubuntu desktop, but for the most part, Wine or Proton installed things work the same across Linux installs, and often with better performance somehow.
I used to say that Wine makes Linux tolerable, but after using it for several years I've concluded that Wine makes Windows tolerable.
Absolute opposite experience for me. The native versions of Half-Life, Cities: Skylines and a bunch of other games refuse to start up at all for me for a few years now. Meanwhile I've been on the bleeding edge of Proton and I can count the number of breakages with my sizeable collection of working Windows games within the last couple of years on one hand. It's been a fantastic experience for me with Proton.
> Please use wine for more than a few months before commenting on how good it is.
I’ve used it for several years, and even to play Sims 2 from time to time, and while I’ve had issues the experience only gets better over time. It’s gotten to the point where I can confidently install any game on my Steam library and expect it to run. And be right most of the time.
> Every new release of wine breaks it.
Is there any way to easily choose which Wine version you use for compatibility? Multiple Wine versions without VMs, etc.?
> Developers natively porting to linux seem to be wasting their time.
The initial port of Valve's Source engine ran 20% faster without any special optimizations back in the day. So I don't see why the effort is wasted.
Isn’t part of the original point not just that Wine is a perfect (dubious, imo) compatibility layer, but that distributing a native port is cumbersome on the Linux ecosystem?
FWIW, targeting proton is likely the best platform target for future Windows compatibility too.
There are plenty of examples of things being the other way around. For example, heavily modding Kerbal Space Program basically necessitated running Linux because that's the only platform that had a native 64-bit build that was even remotely stable (this has since been fixed, but for the longest time the 64-bit Windows version was horrendously broken) and therefore the only platform wherein a long mod list wouldn't rapidly blow through the 32-bit application RAM ceiling.
This wasn't a problem with the game itself. It's their anti-cheat malware that stopped working. On Windows these things are implemented as kernel modules designed to own our computers and take away our control so we can't cheat at video games.
It's always great when stuff like that breaks on Linux. I'm of the opinion it should be impossible for them to even implement this stuff on Linux but broken and ineffective is good too.
Coincidentally, Win32 is also the only stable API on Windows.
WinForms and WPF are still half-broken on .NET 5+, WinRT is out, basically any other desktop development framework since the Win32 era that is older than a couple of years is deprecated. Microsoft is famous for introducing new frameworks and deprecating them a few years later. Win32 is the only exception I can think of.
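To make that concrete, a sketch like this compiles against SDKs spanning nearly three decades and runs on everything from Windows 95 to Windows 11; no other Microsoft UI framework can claim that:

    #include <windows.h>

    /* MessageBoxA has kept this exact signature since Windows 95 */
    int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev,
                       LPSTR lpCmdLine, int nCmdShow) {
        MessageBoxA(NULL, "Still running on everything since 1995.",
                    "Win32", MB_OK);
        return 0;
    }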
I was gonna say, I think Win32 is the only stable API full stop. Everything else is churn city.
Yeah. And MFC on top makes it a bit more chewable :3
.NET Framework is still there by default out of the box, and still runs WinForms and WPF like it always did.
Which version of it? 1.0?
>"Coincidentally, Win32 is also the only stable API on Windows"
And this is what I use for my Windows apps. In the end I have self-contained binaries that can run on anything from Vista up to the most up-to-date OS.
Honest question, do you get HiDPI support if you write a raw win32 app nowadays? I haven’t developed for windows in over a decade so I’ve been out of the loop, but I also used to think of win32 being the only “true” API for windows apps, but it’s been so long that I’m not sure if that opinion has gotten stale.
As a sometimes windows user, I occasionally see apps that render absolutely tiny when the resolution is scaled to 200% on my 4k monitor, and I often wonder to myself whether those are raw win32 apps that are getting left behind and showing their age, or if something else is going on.
WinForms is just mostly a managed Win32 wrapper so unsurprisingly it’s very stable on the OS frameworks (4.X).
Building for .NET Framework using any APIs is extremely stable, as development has mostly ceased. You pick a target framework depending on how old the Windows versions you must support are.
WinRT lives on as WinAppSDK.
Metro lives on as UWP lives on as WinRT lives on as Project Reunion lives on as WinAppSDK.
Exactly the point the OP was making. Win32 is stable.
The names aren't getting better either ...
Since when is .NET 5+ part of Windows?
Since MS decided to deprecate .NET Framework, making .NET 5+ the recommended basis for C# desktop development going forward. Yes you will still be able to run your old apps for many decades still, but you can never move to a newer version of the C# language and maintaining them is going to be an increasing pain as the years go by. I've already been down this road with VB6.
And .NET 4.8 is still installed by default on Windows 11 and will presumably happily run your WPF app if you target it.
I recently experienced this in a critical situation. Long story short, something went very wrong during a big live event and I needed some tool to fix it.
I downloaded the 2 year old Linux binary, but it didn't run. I tried running it from an old Ubuntu Docker container, but there were dependencies missing and repos were long gone. Luckily it was open source, but compiling was taking ages. So in a case of "no way this works, but it doesn't hurt to try" I downloaded the Windows executable and ran it under Wine. Worked like a charm and everything was fixed before GCC was done compiling (I have a slow laptop).
I have personally used containers for this reason, to set up my gaming environment. If something breaks, all I need to do is run an older image and everything works.
Notes:
"EAC" is Easy Anti Cheat, sold by Epic.[1] Not EarthCoin.
"EOS", in this context, is probably Epic Online Services, not one of the 103 other known uses of that acronym.[2]
Here's a list of the games using those features.[3]
So, many of these issues are for people building games with Epic's Unreal Engine on Linux. The last time I tried UE5, after the three hour build, it complained I had an NVidia driver it didn't like. I don't use UE5, but I've tried it out of curiosity. They do support Linux, but, as is typical, it's not the first platform they get working. Epic does have support forums, and if this is some Epic problem encountered by a developer, it can probably be fixed or worked round.
Wine is impressive. It's amazing that they can run full 3D games effectively. Mostly. Getting Wine bugs fixed is somewhat difficult. The Wine people want bugs reported against the current dev version. Wine isn't set up to support multiple installed versions of itself. There's a thing called PlayOnLinux which does Wine version switching, but the Wine team does not accept bug reports if that's in use.[4] So you may need a spare machine with the dev version of Wine for bug reproduction.
[1] https://www.easy.ac/en-us/
[2] https://acronyms.thefreedictionary.com/EOS
[3] https://steamcommunity.com/groups/EpicGamesSucks/discussions...
[4] https://wiki.winehq.org/Bugs
> Wine isn't set up to support multiple installed versions of itself.
huh? the official wine packages for ubuntu, debian, and i believe fedora provide separate wine-devel and wine-staging packages, which can be installed in parallel with each other and with distro packages. in fact, debian (and ubuntu) as well as arch provide separate wine and wine-staging packages as part of the distro itself, no separate repo required.
wine has no special support for relocated installations, but no more or less so than any large Unix program; you can install as many copies as you want, but they must be compiled with different --prefixes, and you cannot use different versions of wine simultaneously with the same WINEPREFIX.
Oh, that's good to know. Thanks.
Related:
Win32 is the stable Linux userland ABI (and the consequences): https://news.ycombinator.com/item?id=30490570
336 points, 242 comments, 5 months ago
Without getting into spoilers, I'll say that playing "Inscryption" really got me thinking about how Docker's continued development could help consumers in the gaming industry.
I would love to see game being virtualized and isolated from the default userspace with passthrough for graphics and input to mitigate latency concerns. Abandonware could become a relic of the past! Being able to play what you buy on any device you have access to would be amazing.
I won't hold my breath, though. The industry pretty loudly rejected Nvidia's attempt to let us play games on their cloud without having to buy them all again. Todd needs the ability to sell us 15 versions of Skyrim to buy another house.
For games on Steam there's the Steam Linux Runtime which can run games on Linux in a specialized container to isolate them from these sort of bugs.
There's also a variant of this container that contains a forked version of Wine for running Windows games as well.
Doesn't the Steam Linux Runtime have a problem in the other direction, though? Games are using libraries so old that they have bugs which have long since been fixed, or which don't work properly in modern contexts. Apparently a lot of the issues with Steam + Wayland come from the ancient libraries in the Steam Linux Runtime, from what I've been able to find out while googling issues I've experienced under Wayland.
> Abandonware could become a relic of the past!
That would eat into some business models though, like Nintendo's quadruple-dipping with its virtual consoles
Good. All those games should be in the public domain anyway. It's been 30-40 years, Nintendo has been more than adequately compensated.
Flatpak is basically Docker for Linux; there are layers and everything. What you're saying should be possible: if you make an AppImage/Flatpak out of the Steam Runtime + Proton (if needed) + the game, it should run anywhere with the right drivers.
Good luck running any game from before Wayland once Wayland actually starts to be used.
Glibc is not Linux, and they have different backwards compatibility policies, but everyone should still read Linus Torvalds' classic 2012 email about ABI compatibility: https://lkml.org/lkml/2012/12/23/75 Teaser: It begins with "Mauro, SHUT THE FUCK UP!"
man it's always a trip to see how much of a jerk torvalds could be, even if exasperation is warranted in this context (i have no idea), by god, this is not how you build consensus or a high functioning team
The context, from Mauro’s previous message:
> Only an application that handles video should be using those controls, and as far as I know, pulseaudio is not a such application. Or are it trying to do world domination? So, on a first glance, this doesn't sound like a regression, but, instead, it looks tha pulseaudio/tumbleweed has some serious bugs and/or regressions.
Style and culture are certainly open for debate (I wouldn’t be as harsh as Linus), but correcting a maintainer who was behaving this way towards a large number of affected users was warranted. The kernel broke the API contract, a user reported it, and Mauro blamed the user for it.
When this comes up in conversation it is worthwhile remembering that Linux was built on the team of volunteers centered around Torvalds who was famous for not acting like a jerk. Really. The perception of him among hackers of being a good guy, you could work with, who acknowledged when linux had bugs, accepted patches and was pretty self-effacing is probably the thing that most made that project at that time take off to the stratosphere. Linus was a massive contrast to traditional bearded unix-assholery.
The nature of the work changes. The pressures change. The requirements change. We age. Also the times change too.
But yeah, it is possible to act like a jerk sometime without actually being a jerk in all things. It is also possible to be a lovely person who makes the odd mistake. Assholes can have good points. Life is nuanced.
Of the bajillion emails Linus has sent to LKML, how many can you find that you believe show evidence of him being a jerk?
Compare to Theo de Raadt at OpenBSD, who has also built a pretty useful thing with his community. Compare also to Larry Wall and Guido van Rossum.
None of us is above reasoned, productive criticism. Linus has done ok.
It’s not my personal style, but there are plenty of high-functioning teams in different domains headed by leaders who communicate like Torvalds. From Amy Klobuchar throwing binders (https://www.businessinsider.com/amy-klobuchar-throwing-binde...) to tons of high-level folks in banking, law firms, etc.
Put differently, you can construct a high functioning team composed of certain personalities who can dish out and take this sort of communication style without burning out on it.
Speaking of consensus: there is another thread on HN where people complain about the Android 13 UI. I guess that was built with a healthy dose of consensus.
The point is, sometimes you need a jerk with a vision so that the thing you're building doesn't turn into an amorphous blob.
>by god, this is not how you build consensus or a high functioning team
Says you, while criticizing Linus Torvalds from 2012. Who has a better track record of building consensus and high-functioning teams?
I think if you take it out of context (which most people do), it looks a lot worse than it is.
A very senior guy who should've known better was trying, fairly persistently, to break a very simple rule everybody agreed to, for a very bad reason. Linus told him to shut the fuck up.
I wouldn't say that Linus's reaction was anything to look up to, but I wouldn't say that calling in the tone police is at all justified either.
> by god, this is not how you build a consensus or a high functioning team
I beg to differ. Linus Torvalds is an example for us all, and I'd argue he has one of the most, if not the most, highly functioning open source teams in the world. The beauty of open source is that you're not stuck with the people you do not want to work with. You can "pick" your "boss". Plus, different people communicate differently. Linus is abrasive. That is okay because it works for him. What is not okay is having other people police the tone of a conversation. Linus had this same conversation with Sarah Sharp; I'll post the relevant quote below:
Because if you want me to "act professional", I can tell you that I'm not interested. I'm sitting in my home office wearign a bathrobe. The same way I'm not going to start wearing ties, I'm also not going to buy into the fake politeness, the lying, the office politics and backstabbing, the passive aggressiveness, and the buzzwords. Because THAT is what "acting professionally" results in: people resort to all kinds of really nasty things because they are forced to act out their normal urges in unnatural ways.
> man it's always a trip to see how much of a jerk torvalds could be, even if exasperation is warranted in this context (i have no idea), by god, this is not how you build consensus or a high functioning team
True. I think Linux could've been pretty successful if someone with good management practices had been in charge from the start.
> by god, this is not how you build consensus or a high functioning team
Linus has been pretty successful so far. There's not just "one style" that works.
maybe it is how you build the world's most popular operating system?
because he did
People often think that because jerks work at successful companies, you need to be a jerk to be successful. It’s more the other way around: a successful firm can carry many people who don’t add value, like parasites.
Guarantee you Linus wasn’t this bad in the 90s.
naw man, let the old git be. he is a lovely old man. one day we wont have people like this. he gave more than he took.
He's a product of a different time. Personally, I love his attitude -- wouldn't want to work under him though.
Glibc is GNU/Linux, though, and cannot be avoided when distributing packages to end users. If you want to interact with userspace to do things like get users, groups, netgroups, or DNS queries, you have to use glibc functions, or your users will hit weird edge cases like being able to resolve hosts in cURL but not in your app.
Now, do I think it would make total sense for syscall wrappers and NSS to be split into their own libs (or dbus interfaces maybe) with stable ABIs to enable other libc's, absolutely! But we're not really there. This is something the BSD's got absolutely right.
There are other libc implementations that work on Linux with various tradeoffs. Alpine famously uses musl libc for a lightweight libc for containers. These alternate libc implementations implement users/groups/network manipulation via well-known files like /etc/shadow, /etc/passwd, etc. You could fully statically link one of these into your app and just rely on the extremely stable kernel ABI if you're so interested.
> Now, do I think it would make total sense for syscall wrappers and NSS to be split into their own libs (or dbus interfaces maybe) with stable ABIs to enable other libc's, absolutely!
I worked on this a few years ago: liblinux.
https://news.ycombinator.com/item?id=28283632
But there are other "Linux"es that are not GNU/Linux, which I think was the point. Like Android, which doesn't use glibc and doesn't have this mess. I think that was one of the things people used to complain about, that Android didn't use glibc, but since glibc seems to break ABI compatibility kinda on the regular, that was probably the right call.
Solaris had separate libc, libnss, libsocket, and libpthread, I think?
Unlike many languages, Go doesn't use any libc on Linux. It uses the raw kernel API/ABI: system calls. Which is why a Go 1.18 binary is specified to be compatible with kernel version 2.6.32 (from December 2009) or later.
There are trade-offs here. But the application developer does have choices, they're just not no-cost.
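To illustrate what "no libc" means in practice, here's a minimal sketch (x86-64 Linux only, using the standard syscall register convention) of what Go's runtime effectively does for every kernel interaction:

    /* write(2) via the raw syscall instruction, no libc involved */
    static long raw_write(int fd, const void *buf, long len) {
        long ret;
        __asm__ volatile ("syscall"
                          : "=a"(ret)            /* return value in rax */
                          : "0"(1L),             /* rax = SYS_write (1) */
                            "D"((long)fd),       /* rdi = fd */
                            "S"(buf),            /* rsi = buf */
                            "d"(len)             /* rdx = len */
                          : "rcx", "r11", "memory");
        return ret;
    }

    int main(void) {
        raw_write(1, "hello, kernel ABI\n", 18);
        return 0;
    }

The upside is zero library dependencies; the downside is that you take over the libc's job yourself, which is where Go's occasional kernel-specific breakage has come from.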
If in distribution discussions "Linux" is the name for the operating system and shell, downplaying the role of GNU, then it is also fair game to say here: Linux does not have a stable ABI, because glibc changed.
Really appreciate your stuff Bjorn, this link always brings a smile to my (too young to be cynical) face.
Thanks!
I am no longer able to see this comment. It says the message body was removed.
Anyone else? I'll have to assume this is the history of how we built great things being deleted in realtime.
I think lkml.org has issues with lots of traffic: https://lore.kernel.org/lkml/CA+55aFy98A+LJK4+GWMcbzaa1zsPBR...
The ABI of the Linux kernel seems reasonably stable. Somebody should write a new dynamic linker that lets you easily have multiple versions of libraries around, even libc. Then it's just like Windows, where you have to install some weird MSVC runtimes to play old games.
Or, GNU could just recognise their extremely central position in the GNU/Linux ecosystem and just not. break. everything. all. the. time.
It honestly really shouldn't be this hard, but GNU seems to have an intense aversion towards stability. Maybe moving to LLVM's replacements will be the long-term solution. GNU is certainly positioning itself to become more and more irrelevant with time, seemingly intentionally.
The issue is more subtle than that. The GNU and glibc people believe that they provide a very high level of backwards compatibility. They don't have an aversion towards stability and in fact, go far beyond most libraries by e.g. providing old versions of symbols.
The issue here is actually that app compatibility is something that's hard to do purely via theory. The GNU guys do compatibility on a per function level by looking at a change, and saying "this is a technical ABI break so we will version a symbol". This is not what it takes to keep apps working. What it actually takes is what the commercial OS vendors do (or used to do): have large libraries of important apps that they drive through a mix of automated and manual testing to discover quickly when they broke something. And then if they broke important apps they roll the change back or find a workaround regardless of whether it's an incompatible change in theory or not, because it is in practice.
Linux is really hurt here by the total lack of any unit testing or UI scripting standards. It'd be very hard to mass test software on the scale needed to find regressions. And, the Linux/GNU world never had a commercial "customer is always right" culture on this topic. As can be seen from the threads, the typical response to being told an app broke is to blame the app developers, rather than fix the problem. Actual users don't count for much. It's probably inevitable in any system that isn't driven by a profit motive.
GNU / glibc is _hardly_ the problem regarding ABI stability. TFA is about a library trying to parse executable files, so it's kind of a corner case; hardly representative.
The problem when you try to run a binary from the 90s on Linux is not glibc. Think e.g. one of Loki games like SimCity. The audio will not work (and this will be a kernel ABI problem...). The graphics will not work. There will be no desktop integration whatsoever.
Windows has installed those MSVC runtimes via Windows Update for the last decade.
With Linux, every revision of GCC has its own GLIBCXX version, but distros don't keep those up to date. So you'll find that code built with even an old compiler (like GCC 10) isn't supported out of the box.
I read "old compiler" and thought you meant something like GCC 4.8.5, not something released in 2020!
The Linux kernel ABIs are explicitly documented as stable. If they change and user space programs break, it's a bug in the kernel.
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...
Someone should invent a command to change root… we should call it chroot!
The article seems to document ways in which it isn't. I have no idea personally, are these just not really practical problems?
The article is talking about userland, not the kernel's ABI.
Sounds like you want Flatpak, Docker or Snap :)
Just use nix.
I can run a Windows 95 app on Windows 10 and it has a reasonable chance of success.
Should Linux (userland) strive for that? Or does the Year of the Linux Desktop only cover things compiled in the last 10 years?
It's what the kernel strives for. They're remarkably consistent in their refrain of "we never break userspace."
I think it would be reasonable for glibc and similar to have similar goals, but I also don't run those projects and don't know what the competing interests are.
> I think it would be reasonable for glibc and similar to have similar goals
I don’t think userspace ever had this goal. The current consensus appears to be containers, as storage is cheap and maintaining backwards compatibility is expensive
> Should Linux (userland) strive for that?
The linux "userland" includes thousands of independent projects. You'll need to be more specific.
> Or is Year of the Linux Desktop only covers things compiled in the last 10 years?
If you want ABI compatibility then you'll have to pay, it's that simple. Expecting anything more is flat out unreasonable.
> The linux "userland" includes thousands of independent projects. You'll need to be more specific.
I think it's pretty clear from the context.
The core GNU userland: glibc, coreutils, gcc, etc.
Just try changing your hosts or nameservers across different versions of Ubuntu Server.
The fragmentation is such a mess, even between 1.x major versions. Their own documentation is broken or non-existent.
Here is some game from '93. Compile it yourself (with some trivial changes).
https://github.com/DikuMUDOmnibus/ROM
Trivial!
But if you still have some objections, then let's wait ~27 years and then talk about games developed on Linux / *nix.
Does that not miss the point of the above poster? This does not show that Linux has good binary compatibility, but that C is a very stable language. Would it run fine if you compiled it on a 27-year-old compiler and then tried to run the binary on a modern Linux? That, if I am not mistaken, is the question that should be asked.
YSK: this code will likely fail in weird ways on platforms where plain char defaults to unsigned, like ARM, because it makes the classic mistake of assuming that the getc return value is compatible with the char type, despite getc returning int and not char. EOF is -1, and assigning it to a char on ARM turns it into 255, so you'll read past the end of some buffers and then crash.
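For anyone who hasn't hit it, the bug looks like this; whether plain char is signed is implementation-defined, and ARM's default is unsigned, so the EOF comparison in the broken version can never succeed:

    #include <stdio.h>

    void copy_broken(FILE *in) {
        char c;   /* BUG: where char is unsigned, (char)EOF becomes 255,
                     which never compares equal to EOF (-1) */
        while ((c = getc(in)) != EOF)
            putchar(c);
    }

    void copy_fixed(FILE *in) {
        int c;    /* getc() returns int precisely so that EOF stays
                     distinguishable from every valid byte value */
        while ((c = getc(in)) != EOF)
            putchar(c);
    }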
This is a long-standing question and has nothing to do with Linux or Windows. It's a design philosophy.
Yes, the Win32 ABI is very stable. It's also a very inflexible piece of code, and it drags its 20-year-old context around with it. If you want to add something to it, you are going to work, and work hard, to ensure that your feature plays nicely with 20-year-old code; and if what you want to do is ambitious... say, refactoring it to improve its performance... you are eternally fighting a large chunk of codebase implementation that can't be changed.
Linux isn't about that and never has been; it's about making the best monolithic kernel possible, with high-level Unix concepts that don't always have to have faithful implementations. The upside here is that you can build large and ambitious features that refactor large parts of how core components work if you like, but those features might only compile against a somewhat recent glibc.
This is a choice. You, the developer, can link whatever version you want. If you want broad support, then just use glibc features that already existed 10 years ago and you'll get similar compatibility to Win32. If not, then you are free to explore new features and performance you don't have to implement or track, provided you consider it a sensible use case that someone has to be running a somewhat recent version of glibc.
The pros and cons are up to you to decide, but it's not as simple as saying that Windows is better because its focus is backwards compatibility. There is an ocean of context hidden behind that seemingly magical backwards support...
A design philosophy of not being able to run old software?
A design philosophy of always having to update your system?
A design philosophy of being unable to distribute compiled software for all Linux distros?
Most Win32 applications from Windows 95 work just fine in Windows 11 in 2022. That's proper design.
According to Wikipedia, "Win32 is the 32-bit application programming interface (API) for versions of Windows from 95 onwards.".
Also from there "The initial design and planning of Windows 95 can be traced back to around March 1992" and it was released in '95. So arguably, the design decisions are closer to 30 years old than 20 :)
The main structure is from Win16, although adding support for paging and process isolation was a pretty big improvement in Win32. IMO it's held up extremely well considering it's 40 years old.
Yeah but as a consequence, games (closed source games, which means basically all of them) don’t even bother targeting Linux.
I assume Flatpak fixes this by locking your app to a compatible version of glibc.
Surprisingly, that seems correct—a Flatpak bundle includes a glibc; though that only leaves me with more questions:
- On one hand, only one version of ld.so can exist in a single address space (duh). Glibc requires carnal knowledge of ld.so, thus only one version of glibc can exist in a single address space. In a Flatpak you have (?) to assume the system glibc is incompatible with the bundled one either way, thus you can’t assume you can load host libraries.
- On the other hand, a number of system services on linux-gnu depend on loading host libraries. Even if we ignore NSS (or exile it into a separate server process as it should have been in the first place), that leaves accelerated graphics: whether you use Wayland or X, ultimately an accelerated graphics driver amounts to a change in libGL and libEGL / libGLX (directly or through some sort of dispatch mechanism). These libraries require carnal knowledge of the kernel-space driver, thus emphatically cannot be bundled; but the previous point means that you can’t load them from the host system either.
- Modern toolkits basically live on accelerated graphics. Flatpak was created to distribute graphical applications built on modern toolkits.
- ... Wait, what?
There is no 'system' glibc. Linux doesn't care. The Linux kernel loads up the ELF interpreter specified in the ELF file based on the existing file namespace. If that ELF interpreter is the system one, then linux will likely remap it from existing page cache. If it's something else, linux will load it and then it will parse the remaining ELF sections. Linux kernel is incredibly stable ABI-wise. You can have any number of dynamic linkers happily co-existing on the machine. With Linux-based operating systems like NixOS, this is a normal day-to-day thing. The kernel doesn't care.
> These libraries require carnal knowledge of the kernel-space driver, thus emphatically cannot be bundled; but the previous point means that you can’t load them from the system either.
No they don't. The Linux kernel ABI doesn't really ever break. Any open-source driver shouldn't require any knowledge of internals from user-space. User-space may use an older version of the API, but it will still work.
> whether you use Wayland or X, ultimately an accelerated graphics driver amounts to a change in libGL and libEGL / libGLX (directly or through some sort of dispatch mechanism)
OpenGL is even more straightforward because it is typically consumed as a dynamically loaded API; as long as the symbols match, it's fairly easy to replace the system libGL.
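A sketch of what that looks like from the application side; whichever vendor's libGL.so.1 the loader finds first is the one that gets used, and the app neither knows nor cares what's behind it:

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        /* the ABI contract is just "a libGL.so.1 exporting these
           symbols exists"; link with -ldl on older glibc */
        void *gl = dlopen("libGL.so.1", RTLD_LAZY);
        if (!gl) {
            fprintf(stderr, "no GL: %s\n", dlerror());
            return 1;
        }
        void (*clear)(unsigned int) =
            (void (*)(unsigned int))dlsym(gl, "glClear");
        printf("glClear -> %p\n", (void *)clear);
        dlclose(gl);
        return 0;
    }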
> On one hand, only one version of ld.so can exist in a single address space (duh). Glibc requires carnal knowledge of ld.so, thus only one version of glibc can exist in a single address space.
Yes
> In a Flatpak you have (?) to assume the system glibc is incompatible with the bundled one either way, thus you can’t assume you can load host libraries.
Not exactly. You must assume that the host glibc is incompatible with the bundled one, that's right.
But that does not mean you cannot load host libraries. You can load them (provided you got them somehow inside the container namespace, including their dependencies) using the linker inside the container.
> whether you use Wayland or X, ultimately an accelerated graphics driver amounts to a change in libGL and libEGL / libGLX (directly or through some sort of dispatch mechanism).
In Wayland, your app tells the server to render a bitmap. How you got that bitmap rendered is up to you.
The optimization is that you send dma-buf handle instead of a bitmap. This is a kernel construct, not userspace driver one. This allows also cross-API app/compositor (i.e. Vulkan compositor and OpenGL app, or vice-versa). This also means you can use different version of the userspace driver with compositor than inside the container and they share the kernel driver.
> These libraries require carnal knowledge of the kernel-space driver, thus emphatically cannot be bundled; but the previous point means that you can’t load them from the host system either.
Yes and no; Intel and AMD user space drivers have to work with a variety of kernel versions, so they cannot be coupled too tightly. The Nvidia driver has tightly coupled user space and kernel space, but with the recent open-sourcing of the kernel part, this will also change.
> but the previous point means that you can’t load them from the host system either.
You actually can -- bind mount that single binary into the container. You will use binary from the host, but load it using ld.so from inside container.
Does graphics on Linux work by loading the driver into your process? I assumed it works via writing a protocol to shared memory in case of Wayland, or over a socket (or some byzantine shared memory stuff that is only defined in the Xorg source) in case of X11.
From my experience, if you have the kernel headers and have all the required options compiled into your kernel, then you can go really far back and build a modern glibc and Gtk+ Stack, and use a modern application on an old system. If you do some tricks with Rpath, everything is self-contained. I think it should work the other way around, with old apps on a new kernel + display server, as well.
Static linking -- always the ready-to-go response for anything ABI-related. But does it really help? What use is a statically linked glibc+Xlib when your desktop no longer sets resolv.conf in the usual place and no longer speaks the X11 protocol (perhaps in the name of security)?
I guess that kind of proves the point that there is no "stable", well, anything on Linux. Something like /etc/resolv.conf is part of the user-visible API on Linux; if you change that, you're going to break applications.
/etc/sysctl.conf is a good example; on some systems it just works, on some systems you need to enable a systemd service thingy for it, but on some systems the systemd thingy doesn't read /etc/sysctl.conf at all, only /etc/sysctl.d.
So a simple "if you're running Linux, edit /etc/sysctl.conf to make these changes persist" has now become a much more complicated story. Writing a script to work on all Linux distros is much harder than it needs to be.
Even statically linked, the problems you just described are valid. The issue is X11 isn't holding up and no one wants to change. Wayland was that promise of change, but it has taken 15+ years to develop (and is still developing).
The Linux desktop is a distro concern now, not an ecosystem concern. It long left the realm of a Linux-wide concern when macOS went free (with paid hardware, of course) and Windows was giving away free Windows 10 licenses to anyone who asked.
Deepin desktop and elementary are at the top of my list for elegance and ease of use. Apps and games need a solid ABI, and this back and forth between GNOME and KDE doesn't help.
With so many different WMs and desktop environments, X11 is still the only method of getting a window with an OpenGL context in any kind of standard way. Wayland, X12, whatever it is: we need a universal ABI for window dressing for Linux to be taken seriously on the desktop.
…also locking in any security vulnerabilities.
I mean, we are talking about videogames here.
If you're never going to update your program and don't care about another Heartbleed affecting your product and users, then sure.
Would you rather run a statically linked go or rust binary with the native crypto implementations of ssl or a dynamically linked OpenSSL that is easier to upgrade? (Honest question)
The glibc libs should be ELF-clean. Namely, a pure and simple ELF64 dynamic exe should be able to "libdl"/dynamically load any glibc lib. This is maybe fixed and possible in the latest glibc.
The tricky part is the SysV ABI for TLS-ized system variables, via __tls_get_addr(): for instance errno. It seems the pure and simple ELF64 dynamic exe would have to parse the ELF headers of the dynamically loaded shared libs in order to get the "offsets" of the variables. I never actually got into this for good, though.
And in the game realm you have C++ games (I have an extremely negative opinion of this language), and the static libstdc++ from GCC does not "libdl"/dynamically load what it requires from glibc; it seems even worse, namely it depends on glibc-internal symbols.
Then, if I got it right for TLS-ized variables, dlsym should do the trick. Namely, dlsym will return the address of a variable for the calling thread, and this pointer can be cached however the thread wants. On x86_64, one can "optimize" the dlsym calls by reusing the same address for all threads, namely one call is enough.
Now the real pain is this static libstdc++ not libdl-ing anything, or worse, expecting internal glibc symbols (C++...).
glibc != Linux
a better analogy would be targeting the latest version of MSVCRT that happens to be installed on your system (instead of bundling it)
... which also mostly works but sometimes breaks
Windows has switched from app-redistributed MSVC runtimes to OS-distributed "universal CRT" since Windows 10 (2015). Unlike MSVCRT, uCRT is ABI-stable.
Nearly 99% of Linux software is linked to glibc.
What the f are you talking about?
This is not MSVCRT by a long shot.
I was going to make a joke about a.out support (and all the crazy stuff that enables, like old SCO binaries) but apparently a.out was removed in May as an option in the Linux kernel.
https://lwn.net/Articles/895969/
At least we still have WINE.
One way to achieve similar results on Linux might be for the Linux kernel team to take control of core libraries like X11 and Wayland, and to extend the same "don't break userspace" philosophy to them as well. That isn't going to happen, but I can dream!
There was a period where a Linux libc was maintained, but it was long-ago deprecated in favour of glibc. Perhaps that was a mistake.
My understanding is that the Linux devs like only having "Linux" only be the kernel; if they wanted to run things BSD-style (whole base system developed as a single unit) I assume they would have done that by now (it's been almost 30 years).
I'm not sure about taking over the entire GUI ecosystem but I certainly do want more functionality in the kernel instead of user space precisely because of how stable and universal the kernel is. I want system calls, not C libraries.
"DT_HASH is not part of the ABI" is like saying "DNS is not part of the Internet".
Maybe a counterpoint is "x86-64 Linux ABI Makes a Pretty Good Lingua Franca" [0] from αcτµαlly pδrταblε εxεcµταblε of Aug 2022.
0. https://justine.lol/ape.html
Stable APIs on linux: https://developer.mozilla.org/en-US/docs/Web/API
I used to disagree with this browser-as-OS mentality, but seeing as it's sandboxed and supports WebGL, wasm, WebRTC, etc., I find it pretty convenient (if I'm forced to run Zoom, for example, I can just keep it in the browser). Just as long as website/application vendors test their stuff across different browsers.
At this point I'm pretty convinced that no one at Microsoft ever did a better job of keeping people on Windows than what the maintainers of glibc are doing…
Well, the WSL team did a lot, I think (including the new Terminal). WSL, WSL2, WSLg, WSA: I almost never use full Linux VMs now; my pretty simple needs are covered by it.
Not news. In fact, Wine/Proton really is the preferred way of doing things.
Valve saw the light years ago, but they weren't the first. Even Carmack was saying it before the whole gaming-on-Linux thing became viable.
The change that caused the break would be equivalent to the PE file format changing in an incompatible way on Windows, to give an idea of how severe it is.
Dynamically-linked userspace C is a smouldering trash heap of bad decisions.
You'd be better off distributing anything--anything--else than dynamically-linked binaries. Jar files, statically-linked binaries, C source code, Python source code, Brainfuck code ffs...
The "./configure and recompile from source" model of Linux is just too deeply entrenched. Pity.
It's like how Excel is more stable than Windows itself: you can open a spreadsheet from the Win16 days and it'll just work...
Personal experience: Office 2021 and Office 97 do not paginate a DOC file created (by Microsoft employees) in Office 97 the same way, so the table of contents ends up different.
Nope, I've had multiple that didn't even work after the latest update.
Heh, I've just been debugging an issue that is triggered by the upgrade to glibc 2.29 (Debian bullseye era).
https://github.com/pst-format/libpst/issues/7
As a gamedev that tried shipping to Linux: we really need some standardized minimal image to target, with an ancient glibc and such, and some guarantee that if it runs on the image, it runs on future Linux versions.
Just target Flatpak. You get a standardised runtime, and glibc is included in the container. If it works on your machine, it'll work on my machine, since the only difference will be the kernel, and Linus is pretty adamant about retaining compatibility at the syscall level.
Sidenote: I remember when Warcraft 3 ran better in Wine + Debian than in Windows. An Athlon II X2 CPU and an Nvidia GeForce 6600 GT with a whopping 256MB of VRAM. That was one hot machine. Poor coolers.
I tried to run WoW and Starcraft 2 with Wine, and it did not install/run
Yeah Linus's "we don't break user space" is a joke.
Great, the kernel syscall API is stable. Who cares, if you can't run a 7-year-old binary because everything from the vDSO to libc to libstdc++ to ld-linux.so is incompatible.
Good luck. No, it's not just a matter of LD_LIBRARY_PATH and shipping a binary with a copy. That only helps with third-party libs, and only if the vDSO and ld-linux are compatible.
My 28 years of experience running Linux is that it's unbroken at the API (source code) level, but absolutely not at the ABI level.
Linus does provide a stable ABI with Linux, it's GNU who drops the ball and doesn't. You're criticizing Linus for something he has nothing to do with. What's the point in that?
Linus limited his scope to something that doesn't matter for users.
I think this is a valid criticism.
It's admirable to do the dishes, but the house is also on fire, so nobody will be able to enjoy the clean plates; so what's even the point of doing the dishes?
In fact, in this analogy he could have saved the kitten instead of done the dishes.
Err, back from analogy land: ABI stability makes it harder to make things better, improving and replacing APIs. This is expected. But here we are in the worst of both worlds. Thanks to the kernel we are slowed down on improvements, and thanks to kinda-userspace (i.e. the vDSO and ld-linux) and userspace infra (libc, libstdc++, libm) we don't have ABI compatibility either.
So it's lose-lose.
Linus chose to only care about the kernel. So there's possibly some fault there.
I wrote a game for my masters thesis in 2008. I wrote it in C++ and targeted Linux. Recently I tried to run it, and not only did the binaries not work (that's a given), but even making it compile was a struggle, because I had used some GUI libraries that were abandoned and there was no version working with a modern libc. It was easier to port the game to Windows than to make it compile on Linux again...
Proprietary devs should use static linking (with musl) or chroots/containers. What makes the author think they are the target audience of glibc?
Thanks, but I think I'll stick with Windows: their target audience is famously everyone and for an unlimited time.
Have fun with libGL!
I hadn't thought of that... Flatpaks let you use specific mesa versions, though.
Linux has a more stable Windows ABI than Windows itself. If a game stops working on Windows, it will likely still work with Wine on Linux.
In my opinion, Valve plus the distros should fork glibc and do a glibc distribution that focuses on absolute stability.
Didn't the glibc devs say that distros have the freedom to choose what to maintain so as not to break their applications? This would just be a collaboration between the distros to maintain that stability.
I suppose that Win32 can be helpful if you want to make programs that run on both Windows and Linux (and also ReactOS, I suppose), but it might not work as well for programs with Linux-specific capabilities.
(Also, I have had problems installing Wine, due to package manager conflicts.)
There are other possibilities, such as .NET (although some comments in here say it is less stable, some say it works), which also can be used on Windows and on Linux. There is also HTML, which has too many of its own problems, and I do not know how stable HTML really is, either. And then, also Java. Or, you can write a program for DOS, the NES/Famicom, or something else, and it can be emulated on many systems. (A program written for the NES/Famicom might well run better on many systems than native code does, especially if you do not do something too tricky in the code (in which case some implementations might not be compatible).) Of course, the different ways have different advantages and disadvantages, with compatibility, efficiency, capability, and other features.
I laughed so hard... with tears! But to be fair, Unreal 2004 still works almost perfectly on a not-too-obsolete Ubuntu. Or did I have to do some glibc trickery? Can't remember for sure.
If anything, not breaking things makes you more careful about what you put in. I feel like that's not a bad rule to go by.
What about the web browser? Isn't that also a stable ABI? Or is it not a "binary interface" because it only supports JavaScript?
What about WebAssembly?
If web browsers had any kind of stable interface we wouldn't have https://caniuse.com/, polyfills, vendor CSS prefixes, and the rest of the crutches. WASM isn't binary. But that's all irrelevant when we talk about ABI.
ABI is specifically binary interface between binary modules. For example: my hello_world binary and glibc or glibc and linux kernel or some binary and libsqlite3.
The kernel <> userspace API is stable, famously so. Dynamic linking to glibc is a terrible idea, statically link your binaries against musl and they'll still work in 100 years.
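For what it's worth, a minimal sketch of that workflow, assuming the musl-gcc wrapper from your distro's musl package (zig cc with a musl target works similarly):

    /* hello.c -- build a fully static binary with no runtime deps:
     *
     *     musl-gcc -static -O2 -o hello hello.c
     *
     * `ldd hello` should then report "not a dynamic executable". */
    #include <stdio.h>

    int main(void) {
        puts("same binary, any Linux with a compatible syscall ABI");
        return 0;
    }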
game binaries need to dynamically load system libs. A statically linked binary would have to include a full ELF loader.
Trying to statically link with glibc throws specific warnings that certain calls aren't portable.
With musl? No such problem.
Fuck, even uClibc is more portable than glibc, and it's a dead project AFAIK.
> With musl? No such problem.
Does musl even implement the functionality glibc was warning about?
OK, glibc ABI stability may not be perfect, but is there any evidence that Wine is any better? That sounds implausible to me.
The difference is if Wine breaks an application that works on Windows, it's considered a bug that should be fixed, regardless of why.
Oh no, not again: kids working for big tech constantly, but randomly, deprecating, removing, and breaking APIs/ABIs/features in the kernel/libraries/everywhere. I honestly believe that all relationships between big tech companies and open source are toxic and follow the Microsoft principle of embrace, extend, and extinguish.
It's not, and it is super sad to hear people advocating for such a horrible idea.
Linux being infested by Windows is the beginning of its death to me; what a tragedy.
A well-deserved death after the systemd drama, anyway.
Why is it a horrible idea?
This is by design, and everybody should be aware of that. I don't know about glibc, but as far as the kernel is concerned, Linus has never guaranteed ABI stability. API, on the other hand, is extremely stable, and there are good reasons for that.
In Windows, software is normally distributed in a binary form, so having ABI compatibility is a must.
Uh, the kernel ABI is extremely stable. You could take a binary that was statically compiled in the '90s and run it on the latest release of the kernel. "Don't break userspace" is Linus's whole schtick, and he's talking about the ABI when he says that.
This is about the ABIs of userspace "system-level" libraries, glibc in particular.
The kernel absolutely does guarantee a stable userspace ABI. This post is entirely about other userspace libraries.
The Linux kernel maintains userspace API/ABI compatibility forever but inside the kernel (e.g. modules) there is no stable API/ABI.