Surprisingly, that seems correct—a Flatpak bundle includes a glibc; though that only leaves me with more questions:
- On one hand, only one version of ld.so can exist in a single address space (duh). Glibc requires carnal knowledge of ld.so, thus only one version of glibc can exist in a single address space. In a Flatpak you have (?) to assume the system glibc is incompatible with the bundled one either way, thus you can’t assume you can load host libraries.
- On the other hand, a number of system services on linux-gnu depend on loading host libraries. Even if we ignore NSS (or exile it into a separate server process as it should have been in the first place), that leaves accelerated graphics: whether you use Wayland or X, ultimately an accelerated graphics driver amounts to a change in libGL and libEGL / libGLX (directly or through some sort of dispatch mechanism). These libraries require carnal knowledge of the kernel-space driver, thus emphatically cannot be bundled; but the previous point means that you can’t load them from the host system either.
- Modern toolkits basically live on accelerated graphics. Flatpak was created to distribute graphical applications built on modern toolkits.
- ... Wait, what?
There is no 'system' glibc. Linux doesn't care. The Linux kernel loads the ELF interpreter specified in the ELF file, resolved against the current file namespace. If that ELF interpreter is the system one, then Linux will likely remap it from the existing page cache; if it's something else, Linux loads it anyway, and that interpreter then parses the remaining ELF structures. The Linux kernel is incredibly stable ABI-wise. You can have any number of dynamic linkers happily coexisting on the same machine. With Linux-based operating systems like NixOS, this is a normal day-to-day thing. The kernel doesn't care.
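To make that concrete: the interpreter is nothing more than a path string stored in the binary's PT_INTERP program header. Here's a minimal sketch (64-bit ELF only, errors mostly unhandled) that prints it, much as `readelf -l` would:

```c
/* Print the PT_INTERP path recorded in a 64-bit ELF binary: the
 * dynamic linker the kernel will map for it. Minimal sketch. */
#include <elf.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <elf-file>\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    Elf64_Ehdr eh;
    if (fread(&eh, sizeof eh, 1, f) != 1 ||
        memcmp(eh.e_ident, ELFMAG, SELFMAG) != 0) {
        fprintf(stderr, "not an ELF file\n");
        return 1;
    }

    /* Walk the program headers until we hit PT_INTERP. */
    for (int i = 0; i < eh.e_phnum; i++) {
        Elf64_Phdr ph;
        fseek(f, eh.e_phoff + (long)i * eh.e_phentsize, SEEK_SET);
        if (fread(&ph, sizeof ph, 1, f) != 1) break;
        if (ph.p_type == PT_INTERP) {
            char *interp = malloc(ph.p_filesz);  /* NUL is included in p_filesz */
            fseek(f, ph.p_offset, SEEK_SET);
            if (interp && fread(interp, 1, ph.p_filesz, f) == ph.p_filesz)
                /* e.g. /lib64/ld-linux-x86-64.so.2, or a NixOS store path */
                printf("interpreter: %s\n", interp);
            free(interp);
            break;
        }
    }
    fclose(f);
    return 0;
}
```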
> These libraries require carnal knowledge of the kernel-space driver, thus emphatically cannot be bundled; but the previous point means that you can’t load them from the system either.
No, they don't. The Linux kernel ABI doesn't really ever break. An open-source driver shouldn't require user-space to know any kernel internals. User-space may be using an older version of the API, but it will still work.
> whether you use Wayland or X, ultimately an accelerated graphics driver amounts to a change in libGL and libEGL / libGLX (directly or through some sort of dispatch mechanism)
OpenGL is even more straightforward because it is typically consumed as a dynamically loaded API, so as long as the symbols match, it's fairly easy to replace the system libGL.
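A sketch of what "consumed as a dynamically loaded API" means in practice. The name libGL.so.1 is the conventional SONAME, and whichever copy is first on the loader's search path (LD_LIBRARY_PATH, a bind mount, etc.) is the one that wins:

```c
/* Sketch of why libGL is easy to swap: GL entry points are resolved at
 * runtime with dlopen/dlsym, so any library exporting the same symbols
 * will do. Link with -ldl on older glibc. */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *gl = dlopen("libGL.so.1", RTLD_NOW | RTLD_LOCAL);
    if (!gl) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    typedef const unsigned char *(*glGetString_t)(unsigned int);
    glGetString_t glGetString = (glGetString_t)dlsym(gl, "glGetString");
    printf("found glGetString at %p\n", (void *)glGetString);

    /* Actually calling it would need a current GL context, so stop here. */
    dlclose(gl);
    return 0;
}
```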
I know, I both run NixOS and have made syscalls from assembly :) Sorry, I slipped a bit in my phrasing. In the argument above, instead of “the system glibc” read “the glibc targeted by the compiler used for the libGL that corresponds to the graphics driver loaded into the running kernel”. (Unironically, the whole point of the list above was to avoid this sort of monster, but it seems I haven’t managed it.)
> No they don't. The Linux kernel ABI doesn't really ever break. Any open-source driver shouldn't require any knowledge of internals from user-space.
[laughs in Nvidia]
This is all correct, and I'd also add that ld.so doesn't need any special knowledge of glibc (or the kernel) in the first place. From the POV of ld.so, glibc is just another regular ELF shared object that uses the same features as everything else. There's nothing hard-coded in ld.so that loads libc.so.6 differently from anything else. And the only thing ld.so needs to know about the kernel is how to make a handful of system calls to open files and mmap things, and those system calls have existed in Linux/Unix for an eternity.
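That "handful of system calls" really is tiny and really is stable. A sketch of the open-and-mmap core, done through raw syscall(2) so no libc wrapper is involved; the loader path used here is the usual x86-64 glibc one, an assumption:

```c
#define _GNU_SOURCE
/* The core of what a dynamic linker asks of the kernel: open a file,
 * mmap it. Raw syscalls, no libc wrappers. x86-64 glibc path assumed. */
#include <sys/syscall.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    long fd = syscall(SYS_openat, AT_FDCWD,
                      "/lib64/ld-linux-x86-64.so.2", O_RDONLY);
    if (fd < 0) { perror("openat"); return 1; }

    /* Map the first page read-only, as a loader would to read headers. */
    void *p = (void *)syscall(SYS_mmap, NULL, 4096L,
                              PROT_READ, MAP_PRIVATE, fd, 0L);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    printf("mapped at %p, magic bytes 1-3: %.3s\n", p, (char *)p + 1); /* "ELF" */
    syscall(SYS_munmap, p, 4096L);
    syscall(SYS_close, fd);
    return 0;
}
```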
> On one hand, only one version of ld.so can exist in a single address space (duh). Glibc requires carnal knowledge of ld.so, thus only one version of glibc can exist in a single address space.
Yes
> In a Flatpak you have (?) to assume the system glibc is incompatible with the bundled one either way, thus you can’t assume you can load host libraries.
Not exactly. You must assume that the host glibc is incompatible with the bundled one, that's right.
But that does not mean you cannot load host libraries. You can load them (provided you got them somehow inside the container namespace, including their dependencies) using the linker inside the container.
> whether you use Wayland or X, ultimately an accelerated graphics driver amounts to a change in libGL and libEGL / libGLX (directly or through some sort of dispatch mechanism).
In Wayland, your app hands the server a bitmap to display. How you got that bitmap rendered is up to you.
The optimization is that you send a dma-buf handle instead of a bitmap. That is a kernel construct, not a userspace-driver one. It also allows cross-API app/compositor pairs (i.e. a Vulkan compositor with an OpenGL app, or vice versa). And it means the compositor can use a different version of the userspace driver than the one inside the container, with both sharing the kernel driver.
> These libraries require carnal knowledge of the kernel-space driver, thus emphatically cannot be bundled; but the previous point means that you can’t load them from the host system either.
Yes and no; Intel and AMD user-space drivers have to work with a variety of kernel versions, so they cannot be coupled too tightly. Nvidia's driver has tightly coupled user space and kernel space, but with the recent open-sourcing of the kernel part, this will also change.
> but the previous point means that you can’t load them from the host system either.
You actually can -- bind mount that single binary into the container. You will use the binary from the host, but load it using the ld.so from inside the container; a sketch follows.
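Here's that bind mount sketched in C using mount(2) directly; the source and target paths are made-up illustrations, and in real life Flatpak's bubblewrap does this for you:

```c
/* Sketch: make one host library visible inside a container's filesystem
 * view with a file-to-file bind mount. Paths are hypothetical; this
 * needs CAP_SYS_ADMIN in the target mount namespace, and the target
 * file must already exist to serve as the mount point. */
#include <sys/mount.h>
#include <stdio.h>

int main(void)
{
    if (mount("/usr/lib/x86_64-linux-gnu/libGL.so.1", /* host copy      */
              "/app/extra/libGL.so.1",                /* container view */
              NULL, MS_BIND, NULL) != 0) {
        perror("bind mount");
        return 1;
    }
    puts("host libGL now loadable by the container's own ld.so");
    return 0;
}
```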
>> In a Flatpak you have (?) to assume the system glibc is incompatible with the bundled one either way, thus you can’t assume you can load host libraries.
> Not exactly. You must assume that the host glibc is incompatible with the bundled one, that's right.
> But that does not mean you cannot load host libraries. You can load them (provided you got them somehow inside the container namespace, including their dependencies) using the linker inside the container.
I meant that the glibcs are potentially ABI-incompatible both ways, not just that they’ll fight if you try to load both of them at once. Specifically, if the bundled (thus loaded) glibc is old, 2.U, and you try to load a host library that wants a new frobnicate@GLIBC_2_V, V > U, you lose, right? I just don’t see any way around it.
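You can watch that failure mode directly: glibc symbols are versioned, and a requested version either exists in the loaded libc or it doesn't. A sketch probing one with the GNU extension dlvsym() (memcpy really did gain a GLIBC_2.14 version; the "missing" branch is exactly what a too-old bundled glibc gives you):

```c
#define _GNU_SOURCE
/* Probe a versioned glibc symbol at runtime. If the loaded libc predates
 * the version a host library was linked against, the lookup fails --
 * the "frobnicate@GLIBC_2_V, V > U" losing case. */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *libc = dlopen("libc.so.6", RTLD_NOW);
    if (!libc) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* memcpy gained a GLIBC_2.14 version in 2011; older glibcs only
     * carry the original memcpy@GLIBC_2.2.5 on x86-64. */
    void *sym = dlvsym(libc, "memcpy", "GLIBC_2.14");
    printf("memcpy@GLIBC_2.14: %s\n", sym ? "present" : "missing");

    dlclose(libc);
    return 0;
}
```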
>> These libraries require carnal knowledge of the kernel-space driver, thus emphatically cannot be bundled; but the previous point means that you can’t load them from the host system either.
> Yes and no; Intel and AMD user space drivers have to work with variety of kernel versions, so they cannot be too tight. Nvidia driver has tightly coupled user space and kernel space, but with the recent open-sourcing the kernel part, this will also change.
My impression of out-of-tree accelerated graphics drivers comes mainly from fighting fglrx for the Radeon 9600 circa 2008, so it is extremely out of date. Intel is in-tree, so I’m willing to believe it has some degree of ABI stability, at least if an i915 blog post[1] is to be believed. Apparently AMD is also in-tree these days. Nvidia is binary-only, so the smart thing for them would probably be to build against an ancient glibc so that it runs on everything.
But suppose the year is 2025, and a shiny new GPU architecture has come out, so groundbreaking no driver today can even lay down command buffers for it. The vendor is kind enough to provide an open-source driver that gets into every distro, and the userspace portion compiled against a distro-current Glibc ends up referencing an AVX-512 memcpy@GLIBC_3000 (or something).
I load a flatpak using Gtk3 GtkGLArea from 2015.
What happens?
[1] https://blog.ffwll.ch/2013/11/botching-up-ioctls.html
Does graphics on Linux work by loading the driver into your process? I assumed it works by writing a protocol to shared memory in the case of Wayland, or over a socket (or some byzantine shared-memory scheme that is only defined in the Xorg source) in the case of X11.
From my experience, if you have the kernel headers and all the required options compiled into your kernel, you can go really far back, build a modern glibc and GTK+ stack, and use a modern application on an old system. With some RPATH tricks, everything is self-contained. I think it should work the other way around as well, with old apps on a new kernel + display server.
So there are two parts to this: the app producing the image in the application window and then the windowing system combining multiple windows together to form the final image you see on screen.
The former gets done in process (using e.g. GL/vulkan) and then that final image gets passed onto the windowing system which is a separate process and could run outside the container.
As an aside, with accelerated graphics you mostly pass a file descriptor to the GPU memory containing the image, rather than mucking around with traditional shared memory.
> Does graphics on Linux work by loading the driver into your process?
Yes, it's called direct rendering (DRI, the Direct Rendering Infrastructure), and it allows apps to drive GPUs with as little overhead as possible. The GPU's output goes into shared memory so that the compositor can see it.
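A minimal sketch of what "driving the GPU from your own process" boils down to: the app (really Mesa on its behalf) opens a DRM render node and submits work via ioctls on that fd. The renderD128 path is the conventional first render node, an assumption here:

```c
/* Sketch: direct rendering means the app itself holds an fd to the
 * GPU's render node; no display server sits in the hot path. */
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    /* First render node on most systems; purely illustrative. */
    int fd = open("/dev/dri/renderD128", O_RDWR);
    if (fd < 0) { perror("open render node"); return 1; }

    printf("GPU render node open as fd %d; Mesa would ioctl() on this\n", fd);
    close(fd);
    return 0;
}
```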
Static linking -- always the ready-to-go response for anything ABI-related. But does it really help? What use is a statically linked glibc+Xlib when your desktop no longer sets resolv.conf in the usual place and no longer speaks the X11 protocol (perhaps in the name of security)?
I guess that kind of proves the point that there is no "stable", well, anything on Linux. Something like /etc/resolv.conf is part of the user-visible API on Linux; if you change that, you're going to break applications.
/etc/sysctl.conf is a good example; on some systems it just works, on some you need to enable a systemd service thingy for it, and on some the systemd thingy doesn't read /etc/sysctl.conf at all, only /etc/sysctl.d.
So a simple "if you're running Linux, edit /etc/sysctl.conf to make these changes persist" has now become a much more complicated story. Writing a script to work on all Linux distros is much harder than it needs to be.
> Something like /etc/resolv.conf is part of the user-visible API on Linux; if you change that, you're going to break applications.
Apps were never supposed to open /etc/resolv.conf by themselves; if they do, they are broken. Just because the file is transparently available doesn't mean it is not part of the internal implementation.
Even the Go runtime checks nsswitch.conf for a known-good configuration before using resolv.conf directly instead of thunking to glibc.
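For contrast, the supported interface looks like this: getaddrinfo() lets glibc and NSS decide whether resolv.conf, systemd-resolved, or something else answers. The hostname here is just an example:

```c
/* Resolve a name through the supported API and let the resolver stack
 * worry about where /etc/resolv.conf lives (or whether it exists). */
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_UNSPEC;   /* v4 or v6, resolver's choice */
    hints.ai_socktype = SOCK_STREAM;

    int rc = getaddrinfo("example.com", "443", &hints, &res);
    if (rc != 0) { fprintf(stderr, "%s\n", gai_strerror(rc)); return 1; }

    int n = 0;
    for (struct addrinfo *p = res; p; p = p->ai_next) n++;
    printf("got %d address(es) without reading resolv.conf ourselves\n", n);

    freeaddrinfo(res);
    return 0;
}
```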
Even statically linked, the problems you just described remain. The issue is that X11 isn't holding up and no one wants to change. Wayland was that promise of change, but it has taken 15+ years to develop (and is still developing).
The Linux desktop is a distro concern now, not an ecosystem concern. It long ago left the realm of a Linux concern, when macOS went free (with paid hardware, of course) and Windows was giving away free Windows 10 licenses to anyone who asked.
Deepin and elementary are at the top of my list for elegance and ease of use. Apps and games need a solid ABI, and this back-and-forth between GNOME and KDE doesn't help.
With so many different WMs and desktop environments, X11 is still the only way to get a window with an OpenGL context in any kind of standard way. Wayland, X12, whatever it is, we need a universal ABI for window dressing if Linux is to be taken seriously on the desktop.
With the rise of WSL, I have a real hard time justifying wanting a Linux desktop.
I've got a VM with a full Linux distro at my fingertips. Virtualization has gotten more than fast enough, and now, with Windows 11, I get an X server integrated with my WSL instance, so even if I WANTED a Linux app, I can launch it just as I would if I were using Linux as my host.
It does suck that the WSL1 notion of "not a VM" didn't take off, but at the same time, when the VM looks and behaves like a regular bash terminal, what more could you realistically want?
> Linux desktop is a distro concern now. Not an ecosystem concern. It’s long left the realm of an linux concern when MacOS went free (with paid hardware of course) and Windows was giving away free windows 10 licenses to anyone who asked for it.
You seem fixated on the Free Beer misinterpretation of Free Software.
…also locking in any security vulnerabilities.
I mean, we are talking about videogames here.
Multiplayer is a thing, where both crashing servers and attacking other clients (even in non-p2p titles) are not that uncommon. Many titles don't permit community servers any more, of course.
Wasn't it Elden Ring or another FromSoftware game that had an RCE? This article talks about it: https://wccftech.com/dark-souls-rce-exploit-fixed-elden-ring...
A lot of games have multiplayer functionality these days, which makes them a potential target for RCE and related vulnerabilities. Granted, if you don't play video games as root, the impact should be limited, but it is still something to be aware of.
If you're never going to update your program and don't care about another Heartbleed affecting your product and users, then sure.
Would you rather run a statically linked Go or Rust binary with the language's native crypto/SSL implementation, or a dynamically linked OpenSSL that is easier to upgrade? (Honest question)