
Comment by oppositelock

3 years ago

Many moons ago, one of the things I did was to port the Windows version of Google Earth to both Mac and Linux. I did the Mac first, which was onerous because of all the work involved in abstracting away system-specific APIs, but once that was done, I thought Linux would be a lesser task, and we hired a great Linux guy to help with that.

Turns out, while getting it running on linux was totally doable, getting it distributed was a completely different story. Due to IP reasons, this can't ship as code, so we need to ship binaries. How do you do that? Do you maintain a few separate versions for a few popular distributions? Do you target the Linux Standard Base? The first approach is a lot of work, and suffers from breakages from time to time, and you alienate users not on your list of supported distros. The second version, using LSB, was worse, as they specify ancient libraries and things like OpenGL aren't handled properly.

End result: management canned the Linux version because too much ongoing support work was required, and no matter what you did, you got hate mail from Gentoo users.

> Due to IP reasons, this can't ship as code, so we need to ship binaries. How do you do that?

I build on a distro with an old enough glibc, following this table: https://gist.github.com/wagenet/35adca1a032cec2999d47b6c40aa... (right now rockylinux:8, which is equivalent to centos:8 and good enough for Debian stable and anything more recent than that; last year I was still on centos:7), use dlopen as much as possible instead of "normal" linking, and then it works on the more recent ones without issues.
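
For anyone unfamiliar with the dlopen approach mentioned above, here is a minimal sketch; the library name libfoo.so.1 and the symbol foo_init are hypothetical placeholders, not anything from the setup described:

    /* Load an optional dependency at runtime instead of linking it at build
     * time. "libfoo.so.1" and "foo_init" are hypothetical placeholders.
     * Build with: gcc main.c -ldl */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        /* Resolve the library at runtime; if it is missing or too old,
         * degrade gracefully instead of failing at program load time. */
        void *handle = dlopen("libfoo.so.1", RTLD_NOW | RTLD_LOCAL);
        if (!handle) {
            fprintf(stderr, "libfoo unavailable: %s\n", dlerror());
            return 1;
        }

        int (*foo_init)(void) = (int (*)(void))dlsym(handle, "foo_init");
        if (foo_init)
            foo_init();

        dlclose(handle);
        return 0;
    }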

  • I worked on a product that shipped as a closed source binary .so (across four OSes and two architectures) for almost seven years, and that's exactly what we did too — build on the oldest libc and kernel any of your supported distros (or OS versions) support, statically link as much as you can, and be defensive about _any_ runtime dependencies you have.

  • If what you're doing works for you, great, but in case it stops working at some point (or if for some reason you need to build on a current-gen distro version), you could also consider using this:

    https://github.com/wheybags/glibc_version_header

    It's a set of autogenerated headers that use symbol aliasing to allow you to build against your current version of glibc, but link to the proper older versioned symbols such that it will run on whatever oldest version of glibc you select.
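
    Under the hood this relies on glibc's symbol versioning; a minimal hand-rolled sketch of the same idea for a single symbol (the generated headers automate it for every symbol, and GLIBC_2.2.5 here is just the oldest version tag on x86-64) might look like:

      /* Force the linker to bind against the old versioned memcpy instead of
       * memcpy@GLIBC_2.14, which newer toolchains would otherwise pick up.
       * GLIBC_2.2.5 is the oldest symbol version on x86-64; adjust per target. */
      #include <string.h>

      __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

      void copy_header(char *dst, const char *src) {
          memcpy(dst, src, 16);
      }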

    • glibc 2.34 has a hard break: you cannot compile against 2.34 and have the result work with older glibc versions, even if you use those version headers. It will always link __libc_start_main@GLIBC_2.34 (it's some kind of new security hardening measure; see https://sourceware.org/bugzilla/show_bug.cgi?id=23323).

      Since you additionally need to build all your dependencies with this same trick, including, say, libstdc++, it's really easiest to take GP's advice and build in a container with the old library versions. And nothing beats being able to actually test it on the old system.

> We need to ship binaries. How do you do that? Do you maintain a few separate versions for a few popular distributions? Do you target the Linux Standard Base?

When I worked on mod_pagespeed we went with the first approach, building an RPM and a DEB. As long as we built on the oldest still-supported CentOS and Ubuntu LTS, 32-bit and 64-bit, we found that our packages worked reliably on all RPM- and DEB-based distros. Building four packages was annoying, but we automated it.

(We also distributed source, so it may be that it didn't work for some people and they instead built from source. But people would usually ask questions on https://groups.google.com/g/mod-pagespeed-discuss before resorting to that, and I don't think I saw this issue come up.)

FWIW, these days Valve tries to solve the same problems with their Steam Runtime[0][1]. It still doesn't seem easy, but it looks like an almost workable solution.

[0] https://github.com/ValveSoftware/steam-runtime

[1] https://archive.fosdem.org/2020/schedule/event/containers_st...

  • A multi-billion-dollar company with massive investments in Linux producing an almost workable solution means everyone else is screwed.

    • Nope. Valve has to deal with whatever binaries clueless developers uploaded over the years, which they can't update, whereas you only need to learn how to make your one binary portable. Entirely different issues.

    • .NET 5+ is my choice as an SME with this challenge. I run the same code across every device and only support what MS supports. These days you could likely redo this with a webview and wasm... Let the webview handle the graphics abstraction for you!

Was static linking not enough?

I feel like the problem most people run into today is glibc vs. musl differences. They develop on Ubuntu, then think they can just copy their binaries into a "FROM alpine:latest" container, which doesn't actually work.

It is possible, though, that whatever you statically link doesn't work with the running kernel, of course. And there are a lot of variants out there; every distribution has their own patch cadence. (A past example of this was the Go memory corruption issue from 1.13 on certain kernels. 1.14 added various checks for distribution + kernel version to warn people of the issue, and still got it wrong in several cases. Live on the bleeding edge, die on the bleeding edge.)
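
To make the glibc-vs-musl point above concrete, here is a minimal sketch, assuming the musl-gcc wrapper from the musl-tools package is available:

    /* hello.c — tiny test case for the glibc-vs-musl problem above.
     *
     * Built the usual way on Ubuntu:
     *     gcc hello.c -o hello
     * the binary is dynamically linked against glibc and requests
     * /lib64/ld-linux-x86-64.so.2 as its interpreter, which an alpine image
     * doesn't have, so running it there fails with a confusing "not found".
     *
     * Built fully static against musl instead:
     *     musl-gcc -static hello.c -o hello
     * the result has no interpreter and no shared-library dependencies, so
     * it runs unchanged in FROM alpine (or even FROM scratch) containers. */
    #include <stdio.h>

    int main(void) {
        puts("hello from a statically linked binary");
        return 0;
    }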

  • > I feel like the problem most people run into today is glibc vs. musl differences. They develop on Ubuntu, then think they can just copy their binaries into a "FROM alpine:latest" container, which doesn't actually work.

    Could it work with gcompat? Alpine has it in the community repo.

    https://git.adelielinux.org/adelie/gcompat

  • Static linking against musl only makes sense for relatively simple command-line tools. As soon as 'system DLLs' like X11 or GL are involved, it's back to 'DLLs all the way down'.

  • > Was static linking not enough?

    It is a GPL violation when non-GPL software does it.

    • glibc is LGPL, not GPL, so it wouldn't be a violation as long as you provided its source code and a way for the user to replace it, for example by providing compiled object files for relinking. And musl is MIT-licensed, so no problems there either.

How do Firefox and Blender do it? They just provide compressed archives, which you uncompress into a folder and run the binary, no problem. I myself once had to write a small CLI program in Rust, where I statically linked musl. I know, can't compare that with OpenGL stuff, but Firefox and Blender do use OpenGL (and perhaps even Vulkan these days?).

  • With difficulty, and not that well. For example, Firefox binaries require GTK built with support for X, despite only actually using Wayland at runtime if configured. The reason people generally don't complain about it is that if you have this sort of weird configuration, you can usually compile Firefox yourself, or have it compiled by your distro. With binary-only releases, all complaints (IMO justifiably) go to the proprietary software vendors.

    • I’m happy to blame the OS vendors for not creating a usable base env; I think that’s one of the core tenets of an OS, and not providing it is a problem. It may be easier and may push an ideological agenda, but I don’t think it’s the right thing to do.

  • Firefox has a binary they ship in a zip, which is broken, but they also officially ship a Flatpak, which is excellent.

> The second version, using LSB, was worse, as they specify ancient libraries and things like OpenGL aren't handled properly.

That was a shame. There was a lot of hope for LSB, but in the end the execution flopped. I don't know if it would have been possible to make it succeed.

  • So this sort of bleeds in to the Init Wars, but there's a lot of back and forth about whether LSB flopped or was deliberately strangled by a particular player in the Linux ecosystem.

I guess this is another instance of the fact that Windows and macOS are operating systems, while "Linux" is a kernel, powering multiple different operating systems.

It is important to note that this comment is from a time before snaps, flatpaks and AppImages.

  • Yesterday I tried to install an Inkscape plugin I have been using for a long time. I upgraded my system and the plugin went away. So I downloaded the zip, opened Inkscape, opened the plugin manager, went to add the new plugin via the file manager… and the opened file manager was unable to see my home directory (weird, because when opening Inkscape files it can see home, but when installing extensions it cannot). It took some time to figure out how to get the downloaded file into a folder the Inkscape snap could see. Somehow, though, I still could not get it installed. Eventually I uninstalled the snap and installed the .deb version. That worked!

    Recently I downloaded an AppImage for digiKam. It immediately crashed when trying to open it, because I believe glibc did not work with my system version (a recent stable Ubuntu).

    Last week I needed to install a gnome extension. The standard and seemingly only supported way of doing this is to open a web page, install a Firefox extension, and then click a button on the web page to install it. The page told me to install the Firefox extension and that worked properly. Then it said Firefox didn’t have access to the necessary parts of my file system. It turns out FF is a snap and file system access is limited, so the official way of installing the gnome extension doesn’t work. I ended up having to download and install chrome and install the gnome extension from there.

    These new “solutions” have their own problems.

    • Gnome's insistence on using web pages for local configuration settings is the dumbest shit ever. It's built on top of a cross platform GUI library but instead of leveraging that they came up with a janky system using a browser extension where you're never 100% sure you're safe from an exploit.

>The first approach is a lot of work, and suffers from breakages from time to time

Are there any distros that treat their public APIs as an unbreakable contract with developers like what MS does?

  • RedHat claims, or at least claimed, that for EL. I think it’s limited to within minor releases though, with majors being allowed to break API.

    That’s fine if you’re OK relying on their packages and 3rd party “enterprise” software that’s “certified” for the release. No one in their right mind would run RHEL on a desktop.

    The most annoying thing to me was that RHEL6 was still under support and had an ancient kernel that excluded running Go, GraalVM, etc. static binaries. No epoll() IIRC.

    Oftentimes you find yourself having to pull more and more libraries into a build. It all starts with wanting a current Python, and before you know it you’re bringing in your own OpenSSL.

    And they have no problem changing their system management software in patch releases. They’ve changed priority of config files too many times. But that’s another rant for another day.

    This is a place where I wish some BSD had won out. With all the chunks of the base userspace + kernel each moving in their own direction, it’s impossible to get out of this place. Then add in every permutation of those pieces from the distros.

    Multiple kernel versions * multiple libc implementations * multiple inits * …

    I’d never try to make binary-only software for Linux. Dealing with packaging OSS is bad enough.

    • > No one in their right mind would run RHEL on a desktop.

      glances nervously at my corporate-issued ThinkPad running RHEL 8

    • > No one in their right mind would run RHEL on a desktop.

      I worked somewhere where we ran CentOS on the desktop. That seemed to work pretty well. I don't see why RHEL would be any worse, apart from being more expensive.

    • > No one in their right mind would run RHEL on a desktop.

      Err.... yes we do? It's a development base I know isn't going to change for a long time, and I can always flatpak whatever applications I need. Hell, now that RHEL 9 includes pipewire I put it on my DAW/DJ laptop.

  • No, no one does. It's a lot more work to maintain all public APIs and their behavior for all time; it can often prevent even fixing bugs, if some apps come to depend on the buggy behavior. Microsoft would occasionally add API parameters/options to let clients opt in to bug fixes, or auto-detect known-popular apps and apply special bug-fix behaviors just for them.

    Even Apple doesn't make that level of "unbreakable contract" commitment. Apple will announce deprecations of APIs with two or three years of opportunity to fix them. If apps don't upgrade within the timeframe, they just stop working in newer versions of macOS.

    • Most Apple-deprecated APIs stick around rather than “just stop working in newer versions of macOS.” Binary compatibility is very well maintained over the long term.

Are these Linux app distribution problems solved by using Flatpak?

  • Most of them are, yes. AppImage also solves this, but doesn't have as robust an update/package-management system.

    • AppImage is basically a fancy zip file. It's still completely up to you to make sure the thing you put in the zip file will actually run on other people's systems.

In that context, cannot the issue be sidestepped entirely by statically linking[1] everything you need?

AFAIK the LGPL license even allows you to statically link glibc, as long as you provide a way for your users to load their own version of the libs themselves if that's what they want.

[1]: (or dlopening libs you bundle with your executable)

Ricers wanna rice! Can we spin the globe so fast that it breaks apart?

Would the hate mails also have included 'internal' users, say from Chromium-OS?

Another approach might be a hybrid, with a closed-source binary "core", and open-source code and linkage glue between that and OS/other libraries. And an open-source project with one-or-few officially-supported distributions, but welcoming forks or community support of others.

A large surface area app (like Google Earth?) could be less than ideal for this. But I've seen a closed-source library, already developed internally on linux, with a small api, and potential for community, where more open availability quagmired on this seemingly false choice of "which distributions would we support?"

> Due to IP reasons, this can't ship as code, so we need to ship binaries.

Good, it should be as difficult as possible, if not illegal, to ship proprietary crap to Linux. The operating system was always intended to be Free Software. If I cannot audit the code, it’s spyware crap and doesn’t belong in the Linux world anyway.

Loki managed to release binaries for Linux long before Google Earth was a thing. I'm not going to claim that things are/were perfect, but you never needed to support each distro individually: just ship your damned dependencies, except for base system stuff like libc and OpenGL, which provides pretty good backwards compatibility, so you only need to target the oldest version you want to support and it will work on newer ones as well.
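
One common way to "ship your dependencies" like that is to bundle the .so files next to the executable and point the runtime linker at them with an $ORIGIN-relative rpath. A minimal sketch of that layout (libbar and the lib/ directory are hypothetical placeholders, not anything Loki specifically did):

    /* Sketch of the "bundle your own .so files" approach: the executable is
     * linked with an rpath relative to its own location ($ORIGIN), so the
     * bundled copy in ./lib is found first, while libc, libGL, etc. still
     * come from the host system.
     *
     *   gcc main.c -o myapp -L./lib -lbar -Wl,-rpath,'$ORIGIN/lib'
     *
     * Ship as:
     *   myapp
     *   lib/libbar.so.1
     */
    #include <stdio.h>

    /* Provided by the bundled libbar; hypothetical. */
    extern int bar_version(void);

    int main(void) {
        printf("bundled libbar version: %d\n", bar_version());
        return 0;
    }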

And then you have many devs complaining about why MS doesn’t want to invest time in MAUI for Linux. This is why.

Flatpak solved this issue. You use a "runtime" as the base layer, similar to the initial `FROM` in Dockerfiles. Flatpak then runs the app in a containerized environment.