Dirtyfrag: Universal Linux LPE

13 hours ago (openwall.com)

This is very similar in root cause and exploitation to Copy Fail.

Which illustrates pretty well something that's lost when relying heavily on LLMs to do work for you: exploration.

I find that doing vulnerability research using AI really hinders my creativity. When your workflow consists of asking questions and getting answers immediately, you don't get to see what's nearby. It's like a genie - you get exactly what you asked for and nothing more.

The researcher who discovered Copy Fail relied heavily on AI after noticing something fishy. If he had manually waded through lots of code by himself, he would have had many more chances to spot these twin bugs.

At the same time, I'm pretty sure that with slightly less directed prompting, a frontier LLM would have found these bugs for him too.

It's a very unusual case of negative synergy, where working together hurt performance.

  • > When your workflow consists of asking questions and getting answers immediately, you don't get to see what's nearby.

    Very much aligns with my experience. For me this is the most unsatisfying thing about AI-based workflows in general, they miss stuff humans would never miss.

    All the time I wonder: what am I missing that's right nearby? It's remarkable how many times I have to ask Claude Code to fully ingest something before it actually puts it into context. It always tries to laser through to the target it's looking for, which is often not what you want it to look at, or at least not all of it. Getting these models to open up their field of vision is tough.

    • Do you think this is inherent, or an artifact of prompting? Curiosity and side quests lead to higher token usage and longer time to finish, so I could understand why current harnesses and system prompts would not encourage that sort of thing.

      But what if a coding agent was prompted to be more curious during development? Like a human developer, make mental notes of alternatives to try out and chase suspicious looking code which may seem unrelated to the task at hand. It could even spawn rabbit hole agents in parallel.

      Taking a step back, this probably highlights a major hazard of the increased usage of LLMs for coding: everyone's style of work is going to converge, because most code will be written by the 2-3 most popular models using the same system prompts.

      1 reply →

  • No, unless I'm misreading it it's the *same* root cause: high 32 bits of Extended ESN in IPsec == authencesn module/cipher mode.

    The wrong thing got fixed for copy.fail, because people jumped to blame AF_ALG.

    [ed.: yes it's the same authencesn issue. https://github.com/V4bel/dirtyfrag/blob/892d9a31d391b7f0fccb... it doesn't say authencesn in the code, only in a comment, but nonetheless, same issue.]

    [ed.2: the RxRPC issue is separate, this is about the ESP one]

    • There are two vulnerabilities here.

      The RxRPC one is definitely a different root cause (although caused by a very similar mistake).

      For the ESP one it's a bit harder to tell. I don't think the wrong thing was fixed, just that there was a very similar bug in almost the same spot. Could be wrong about that though.

      1 reply →

  • Just on a side note: negative synergy does not seem so uncommon with machine learning. We did some research maybe 10 years ago on human/ML-based duplicate detection (for a municipal support ticket system). It showed that pure AI and pure human outperformed co-working; human oversight often overcorrected machine work, for example. I think it's actually a nice HCI problem to solve: how to amplify creativity and unique skills in such processes, particularly when they can be to some degree repetitive and tiresome.

  • Or a follow-up prompt: "find similar classes of bugs". Once the actual case has been laid out, finding like bugs isn't too hard. I hear you on the creativity bit. Like any tool, AI can put blinders on. Using it to augment without it fully taking over your workflow is tough.

    • Not just like any tool though. Interacting with agents can be incredibly boring and frustrating in a way that I personally do not experience with other technology

  • I don't follow. LLMs spotted these bugs in the first place. You seem to be saying that these discoveries are indications that they're bad for vulnerability discovery.

    • From what I understand, the copy fail bug was found by a researcher who noticed something weird and then used AI to scan the codebase for instances where it becomes a problem.

      I bet that with a slightly looser prompt/harness, the LLM could have found these twin bugs too.

      Yet at the same time, I also think that if the human researcher had manually scanned the code, he'd have noticed these bugs too.

      FWIW I do think LLMs are great tools for finding vulnerabilities in general. Just that they were visibly not optimally applied in this case.

      1 reply →

    • I don't think the copy.fail people understood the issue they found, as is evident from the heavy focus on AF_ALG/aead_algif, which is essentially "innocent" as we're seeing here.

      I think LLMs are great for vulnerability discovery, but you need to not skimp on the legwork and understanding what even you just found there.

      8 replies →

    • I don’t think that’s what the OP is saying at all, just that using LLMs needs to be a cooperative research process.

      Also I see you jumping around a lot to the defense of LLMs when I don’t think anyone is really attacking them. Maybe cool it a bit.

      2 replies →

    • Right. Finding the bug is in itself a win. It seems we’re jumping from that spend-electricity-to-find-bugs win to arguing about how some things around it are not quite good or comfy.

    • It’s incredible humans spot stuff like this. I guess even more incredible that LLMs can do it!

  • It’s very hard to see a root vuln similar to, but not the same as, another discovered by AI, as a lesson about AI not exploring.

    Is there a counterfactual where you would say it explored well enough, besides both vulnerabilities published as one?

  • > When your workflow consists of asking questions and getting answers immediately, you don't get to see what's nearby.

    That's why it's very, very important to just step out and use the saved time to go for a walk, to a park, sit on a bench, listen to birds, close your eyes, and zoom out.

    The state we are in is actually brilliant.

  • These are all page cache poisoning attacks (dirtyfrag, copyfail, dirtypipe). Maybe the page cache should have defense-in-depth measures for SUID binaries?

    • SUID mitigations have nothing to do with the vulnerability itself - just the exploit.

      If there's a root cronjob that runs a world readable binary, you could modify it in the page cache and exploit it that way.

      Modifying the page cache is a really strong primitive with countless ways to exploit it.

      8 replies →

"Because the embargo has now been broken, no patches or CVEs exist for these vulnerabilities."

link: https://github.com/V4bel/dirtyfrag

detailed writeup: https://github.com/V4bel/dirtyfrag/blob/master/assets/write-...

importantly:

"Copy Fail was the motivation for starting this research. In particular, xfrm-ESP Page-Cache Write in the Dirty Frag vulnerability chain shares the same sink as Copy Fail. However, it is triggered regardless of whether the algif_aead module is available. In other words, even on systems where the publicly known Copy Fail mitigation (algif_aead blacklist) is applied, your Linux is still vulnerable to Dirty Frag."

mitigation (i have not tested or verified!):

"Because the responsible disclosure schedule and the embargo have been broken, no patch exists for any distribution. Use the following command to remove the modules in which the vulnerabilities occur."

    sh -c "printf 'install esp4 /bin/false\ninstall esp6 /bin/false\ninstall rxrpc /bin/false\n' > /etc/modprobe.d/dirtyfrag.conf; rmmod esp4 esp6 rxrpc 2>/dev/null; true"

conversation around the mitigation suggests you need a reboot, or to run the following after the above on already-exploited machines:

    sudo echo 3 > /prox/sys/vm/drop_caches

  • "sudo" in "sudo echo 3 > /prox/sys/vm/drop_caches" does not do anything, because sudo only runs the echo; the redirection into the file happens in the unprivileged shell.

    And if a machine is already exploited, it's too late to do just that. You need to rebuild the whole disk image because anything on it could be compromised.

    • >And if a machine is already exploited, it's too late to do just that. You need to rebuild the whole disk image because anything on it could be compromised.

      This is more targeted at the people who run the PoC to see if their machine is vulnerable.

      Just transcribing some relevant stuff from https://github.com/V4bel/dirtyfrag/issues/1 so that people visiting this thread don't need to poke around a bunch of different places.

  • You can't sudo echo and redirect from the non-sudo shell like that.

        echo 3 | sudo tee /proc/sys/vm/drop_caches
    

    or

        sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
    

    Also fixed your typo in /proc...

  • Is there any additional info on where it was "published publicly by an unrelated third party"? From the timeline in the writeup:

    > 2026-05-07: Submitted detailed information about the vulnerability and the exploit to the linux-distros mailing list. The embargo was set to 5 days, with an agreement that if a third party publishes the exploit on the internet during the embargo period, the Dirty Frag exploit would be published publicly.

    > 2026-05-07: Detailed information and the exploit for this vulnerability were published publicly by an unrelated third party, breaking the embargo.

    Edit: nevermind, details are further down in the thread:

    https://news.ycombinator.com/item?id=48055863

  • Just FYI, you can also mitigate it with `echo 1 > ...`; you don't need to drop everything, since `1` clears just the page cache and that's enough.

    Tested locally on Ubuntu 26.04:

    1. Ran the exploit and got root

    2. Configured the mitigations

    3. Ran `su` again with no parameters and immediately got root again unprompted

    4. Cleared the page cache

    5. `su` asked for a password

And I ask again: why the f*ck is algif_aead getting all the flak for copy.fail? It's authencesn being stupid.

authencesn didn't get fixed. Now we got the results of that, turns out you can access the same (I believe) out of bounds write through plain network sockets.

I wish I thought of that, but I didn't.

[ed.: I'm referring to the through-ESP issue. The RxRPC one is AIUI completely unrelated.]

If this indeed works on all major distributions, I just continue to be amazed by how irresponsible the maintainers are. We're talking about optional kernel functionality that's presumably useful to something like <0.1% of their userbase, but is enabled by default?... why?

This feels like the practice of Linux distros back in 1999 when they'd ship default installs with dozens of network services exposed to the internet. Except it's not 1999 anymore.

  • Distro maintainers blacklisting specific functionality because they believe YAGNI is a pretty big ask. They just don't know who is using what. It's always possible for users to go back and tailor their builds for the stuff they actually want.

    And... I remember the early days of Linux where I ran `make menuconfig` and selected exactly the functionality I wanted in my kernel. I'd... rather not end up back there.

    That said, a target for an easy win here is RHEL, which compiles a lot of modules into the kernel rather than leaving them as loadable modules, so the mitigation for e.g. copy fail was impossible. Maybe they could do with a few fewer of those?

    • You can make precisely the same argument for network services. Who knows, maybe you need telnet and UUCP and NFS and ftpd running on your system?... why should the distro maintainer decide?

      Well, because you probably don't, and it's a security risk, so no need to put millions at risk for the benefit of that one person who wants to tinker with packet radio or whatever. Similarly, it would be prudent for distros to not allow autoloading of modules that are extremely niche while giving a simple way to adjust the settings if you want to. God knows they have plenty of GUI configurators and config files already.

      1 reply →

    • Now that I think about it, it's kinda weird that non-root users can cause kernel modules to get loaded without any hardware changes having happened.

      If the kernel modules for esp4, esp6 and rxrpc aren't loaded - how is it that a non-root attacker can cause them to get loaded?

      1 reply →

    • >Distro maintainers blacklisting specific functionality because they believe YAGNI is a pretty big ask

      We have forgotten what a distro is, and its modern corruption of the concept is now taken as the definition.

      Distributions weren't meant to be competing generic universal bundles of userspace tools in addition to the kernel.

  • There is no way to disable components you think users won't use and not make it incredibly difficult to use the system. I personally would have no way to know what to enable or not enable based on what I want to do, and I've been using this stupid OS for 25 years.

    Linux distro maintainers are the most responsible software maintainers on the planet. Their security practices are miles beyond the stupid programming language package managers, they maintain a select list of packages, vet changes, patch bugs, resolve complex packaging issues, backport fixes, use tiered releases, distribute files to global mirrors, and cryptographically validate all files. And might I remind you, they do all this for free.

  • > irresponsible the maintainers are

    Today it's 0.1%, tomorrow it might become 100%. User demand is hard to anticipate, so it's reasonable to include small features that don't cost a lot to run by default.

    It's not ideal, but you really don't want to prevent a user from finishing their task, because maybe then they'll just give you a bad name and switch to another distro.

    That is to say, it's not "irresponsible", it's reasonable maximalism (or at least trying to be).

  • It’s not enabled by default. It’s an optional module that is loaded on demand. The entire setup of the kernel promotes compiling in the core set of things your users will need and offering basically everything else as a module to load on demand.

  • Maybe it would be reasonable for sysadmins to proactively whitelist used / block all exotic unused modules that are not needed in their system configuration.

    This would reduce the amount of ring 0 code. But I've never seen such advice.

  • Because in order to exploit this, you have to have direct access to the computer: either through a malicious USB device, or by exploiting a supply chain or a known piece of software that will be willingly or automatically installed. And furthermore you need to be able to essentially run arbitrary terminal commands, which is a huge breach of isolation in that software.

    If an attacker manages to do all that, it's already bad news for you. Escalation to root with this is the least of your worries at that point.

    Like someone else below posted, https://xkcd.com/1200/

    People need to understand what the vulnerability actually is before freaking out about it.

    • You are assuming that LPE only applies to the user that holds all the sensitive stuff. But it also applies to users created specifically for isolation. Without LPE they would not have access to anything important even if they were compromised.

    • So a threat actor buys access to a managed kubernetes service, or other linux-based shared hosting platform, and now they have access to the computer.

      Hell, GitHub Actions would do.

      2 replies →

  • > ... but is enabled by default?... why?

    We could also wonder why XZ was linked to SSH... But only on systemd-enabled distros (which is a lot of them).

    Just... Why?

    And then people make sure to appeal to incompetence instead of malice, and say nonsense like "Sure, it only factually affects systemd distros, but this is totally not related to systemd". All I saw, though, was a systemd backdoor (sorry, exploit).

    Now regarding copy.fail, which just happened: not all maintainers are irresponsible. And some have, rightfully, bragged that the security measures they preemptively took in their distros made them non-vulnerable.

    But yup, I agree it's madness. Just why. And Ubuntu is a really bad offender: it's as if they did a "yes | .." pipe to configure every single module as an include directly in the kernel.

    "We take security seriously, look we've got the IPsec backdoor (sorry, exploit) modules directly in the kernel". "There's 'sec' in 'IPsec', so we're backdoored (sorry, secure)".

I'm not a security expert, but I'm responsible for some (relatively low-stakes) production systems.

It sounds like these two most recent exploits depend on unprivileged user namespaces, and that in fact a high percentage of LPE exploits need this feature. I use rootless containers on a couple of systems (like my dev machine server), but on most of my systems I don't, so it sounds like disabling that would be a good step to hardening my systems against future exploits.

To the security experts: are there any other straightforward configuration changes with such broad-reaching improvement in security posture? Any well-written guides on this subject, something like "top kernel modules to consider disabling if you don't need them"? I'm not talking about the obvious stuff like "disable password SSH", I'm specifically looking for steps that are statistically likely to prevent as-yet-unknown privilege escalation attacks.

After all these years, we finally have enough eyeballs that all bugs are shallow, and it kinda sucks. How many times a week am I going to be updating my kernel from now on?

  • I haven't updated mine. I have a firewall and it's not exposed to the Internet. Need a key to SSH in. Same with my public facing server. Almost none of these exploits are "drop everything now and patch" unless you are somehow exposing yourself stupidly.

    • If you’re running any sort of CI you’re probably going to have a bad couple of days if everything goes well

  • I sort of always expect there to be an LPE to root on Linux tbh, if anything this is great news and Linux might be a useful multiuser system after all.

  • With how things are going the question should be ‘is twice a day often enough?’

    • At the moment it doesn't seem to be.

      Within an hour of being advised of, and running, the mitigation for DirtyFrag, my upstream provider has blocked all WHM/cPanel/SSH/FTP/SFTP access with a heads-up on:

      CVE-2026-29201 CVE-2026-29202 CVE-2026-29203

      which look like a repeat of CVE-2026-41940 a week ago.

  • So you think someone is going to break into your house, find your default credentials somehow and get root access?

    • I think when there’s a step change in our ability to find one type of vulnerability, other types of vulnerability are probably going to become more common as well. Let’s see where we stand at the end of the year.

    • With physical access, root access is as simple as setting init=/bin/bash in the kernel parameters from a bootloader. No need for credentials or anything.

      2 replies →

Perhaps we should consider designing distributions to be more tailored to specific purposes. Since no one needs the affected module on a desktop computer, distributions designed for that purpose should no longer include it by default. If this approach were consistently followed, significantly fewer systems would be vulnerable to such exploits. For most users a system with a kernel as minimalistic as the Android GKI kernel combined with sensible SELinux policies, would likely be sufficient.

  • Both of the modules are (also) for desktop/workstation use. Though AFS could probably be retired generally.

I'm curious what broke the embargo. Did it leak or did a third party find it independently?

  • No embargo exists (or could possibly exist) in the first place.

    Linux is open source, so every patch fixing a security bug is immediately visible to everyone. There is no workaround for that, by the very design of how the kernel is developed. The "embargo" people talk about is the rather stupid notion that if everyone keeps their mouth shut and doesn't write "THIS IS AN LPE" straight in the patch description, everyone can pretend the vulnerability is not leaked until the "official" message is sent to the mailing list.

    This approach might have been defensible before, but in the LLM era, when people have automated pipelines feeding diffs straight from the mailing lists to SotA models asking them to identify probable security issues fixed by those diffs, it is both stupid and dangerous.

    • Linux does actually have a proper embargo process. But, you're correct that in this case it wouldn't usually have been followed anyway. Bugs like this are fixed multiple times a week, anyone with basic kernel knowledge can see that they are potentially LPEs.

      Usually, nobody even bothers to check. LPEs like this are too common to even categorise effectively.

    • My (novice) understanding is that embargoes are intended to provide time to 1) develop a patch and 2) distribute the patch.

      For Linux/public open source, what you said is right about 2). Once the patch is visible to anyone, it's trivial to identify exploits for unpatched systems. But 1) is still a valid use-case for embargoes for Linux vulns, right? Like, if this patch had taken a few weeks to develop before being confirmed working and published, that's potentially valid grounds for not sharing details during that time (within reason), no?

  • it was published publicly by an unrelated third party

    • They're asking the nature of the third party's discovery/publishing. Someone on the inside who decided to leak it anonymously? Someone else who was able to access some private communication they shouldn't have been able to see? Or a third party who happened to discover the same vulnerability (which seems less unlikely than normal since this is so similar to Copy Fail), but didn't follow disclosure procedures?

      7 replies →

Both of these (copy fail and dirtyfrag) exploit obscure socket address families. Are these filtered by commonly used seccomp profiles in eg docker (assuming seccomp can express it)?

  • At least in the k8s setup I looked at, the dirtyfrag vectors were filtered (by default).

    "XFRM SA registration requires CAP_NET_ADMIN".

Does anyone know whether Debian is vulnerable? I tried the exploit on Debian 12 and Debian 13 machines but wasn't able to reproduce it myself.

  • I was able to reproduce this issue on kernel 6.12.57+deb13-amd64 running Debian 13 (Trixie), but unable to reproduce it on kernel 6.1.0-42-amd64 running Debian 12 (Bookworm).

    For anyone not on the security stream of Debian packages for Bookworm, kernel version 6.1.0-42-amd64 is actually immune to copy.fail. Surprising that it looks to be immune to dirtyfrag. If you haven't already patched on the security stream, you can choose any kernel version that kept commit 2b8bbc64b5c2. I am thinking that the same commit might accidentally be keeping certain Debian 12 kernel versions safe from dirtyfrag as well.

  • I tested on a fully up-to-date Debian 13 and the exploit works. The mitigation also works / confirmed.

Ran as a fresh new default user in a ubuntu:latest container

  git clone https://github.com/V4bel/dirtyfrag.git && cd dirtyfrag && gcc -O0 -Wall -o exp exp.c -lutil && ./exp

Result:

  dirtyfrag: failed (rc=3)

Good news!

  • I got the same result running it inside a container, but got a shell when running it directly on the host. This only shows that the exploit doesn't work inside a container: either containers aren't vulnerable, or the script needs some adjustments to make it work in containers.

    Since copy fail can be used to escape containers (https://github.com/Percivalll/Copy-Fail-CVE-2026-31431-Kuber...), I'm guessing the exploit needs some changes only.

    • The repo you linked works by replacing files that are being used by other privileged containers on the same system. That works for the Kubernetes case (I'm a little surprised they don't use static binaries for their own privileged containers, seems a little dangerous to share any kind of data with untrusted tenants even if it's read-only) but not standalone containers.

      However, there is a much easier way of doing a breakout -- you can corrupt the host runc binary in a way analogous to CVE-2019-5736. The next time a container is spawned, the host runc binary will get run as root and that's that.

      Ironically, the first version of the protection against this attack I wrote also protected against page cache poisoning (by making a temporary copy of the runc binary during container setup in a sealed memfd and re-execing that) but the runtime cost of copying a 10MB binary at container startup was seen as too expensive by some users[1] so we ended up with a setup that shares the same page cache. I also distinctly remember arguing at the time that something like Dirty Cow could always happen in the future, and the memfd approach was better for that reason -- maybe I should've stuck to my guns more... :/

      In practice the solution for containers is to update your seccomp policy to block the vulnerable syscall.

      [1]: https://github.com/opencontainers/runc/issues/1980

  • Wouldn't count on a container being a reliable testing platform for this. Loads of stuff - legitimate or otherwise - fails in containers.

What’s interesting here is that the exploit chain itself isn’t especially novel anymore — page cache corruption has become a recurring pattern (Dirty Pipe, Copy Fail, Dirty Frag). The worrying part is how quickly public patches are now being reverse-engineered into weaponized exploits.

The old “quiet patch before disclosure” model may simply not work anymore in the LLM era.

  • > The old “quiet patch before disclosure” model may simply not work anymore in the LLM era.

    It never did. Trawling the Linux commit history is a tried and true method for finding n-days.

This again does not work under Android, at least in Termux, compiled with clang/gcc.

  • I assume because the rxrpc module is not loaded / provided and because unprivileged user namespaces are not allowed, which should be sufficient to mitigate. Curious if someone else has more details though.

  • The exploit as posted contains x86 shellcode, so you'd need to drop in the appropriate shellcode to test if it really works.

    Android wasn't vulnerable the last time, so far it's been a shining beacon of hope for proper SELinux configuration that I wish was more widely available in other places.

  • Android has a lot of hardening and sandboxing that desktop Linux doesn't (and won't for UX reasons).

  • Because Android is not Linux, as much as some pretend it is.

    In fact, given the official public APIs, Google could replace the Linux kernel with a BSD, and userspace wouldn't notice, other than rooted devices, and the OEMs themselves baking their Android distro.

    • It absolutely is Linux, and yes the JVM could absolutely run on something else. But it is Linux and you can run Linux binaries directly on it - that just isn’t how it is used by end users.

      12 replies →

If you don't need it (rootless containers), you can disable unprivileged userns to block these two:

  echo 1 | sudo tee /proc/sys/kernel/apparmor_restrict_unprivileged_userns

May also break sandboxes (e.g. browser) though.

We need an easy way to ensure that only kernel modules on a whitelist can load. I’m tired of blacklisting modules I never need.

So if I understand correctly 3 modules are involved:

- esp4 (kernel config "CONFIG_INET_ESP")

- esp6 (kernel config "CONFIG_INET6_ESP")

- rxrpc (kernel config "CONFIG_AF_RXRPC")

Is this correct?

Considering AWS just released patches for Copy Fail for Amazon Linux and Bottlerocket only yesterday... I imagine it will be over a week before we see patches for this. This is especially important to fix on Kubernetes nodes. Does anyone have any recommendations for mitigating this issue before a patch is released?

Just got an email from one HPC center I have access to in Germany. I guess all HPCs and services like GH Actions are going to be offline for a bit. I think last time was on a Friday too, so it might be another Friday to organize emails and files and rotate backups/passwords...

Disclosure Timeline

2026-04-29: Submitted detailed information about the rxrpc vulnerability and a weaponized exploit that achieves root privileges on Ubuntu to security@kernel.org.

2026-04-29: Submitted the patch for the rxrpc vulnerability to the netdev mailing list. Information about this issue was published publicly.

2026-05-07: Submitted detailed information about the vulnerability and the exploit to the linux-distros mailing list. The embargo was set to 5 days, with an agreement that if a third party publishes the exploit on the internet during the embargo period, the Dirty Frag exploit would be published publicly.

2026-05-07: Detailed information and the exploit for the esp vulnerability were published publicly by an unrelated third party, breaking the embargo.

2026-05-07: After obtaining agreement from distribution maintainers to fully disclose Dirty Frag, the entire Dirty Frag document was published.

  • 7 days from disclosure to publishing a how-to guide to get root to the entire planet doesn't scream "responsible" disclosure to me.

    • My immediate reaction was the same.

      But this is very similar to Copy Fail, and I'm assuming there was an assumption that others might also discover this soon as well. Hence the urgency.

      At least that's my charitable interpretation.

    • Who cares? Publishing them without disclosure is the true way; otherwise no one would care about security and your data.

The enforcement of read-only protection for pagecache pages (and the scatterlists and/or other structures they point to) seems to be diffuse and incredibly fragile.

Tested Amazon Linux 2023 and it doesn't appear to be vulnerable in the default configuration. Would be interested if anyone finds anything different.

Do you think with modern LLMs in a few years projects like Linux will have all those low-hanging security bugs fixed? Are we witnessing a transition period, or will nothing change?

  • Out of this dataset of 2-3 vulnerabilities, I'm noticing a pattern: All of those are in older and/or niche kernel modules. That raises two thoughts:

    Maybe the more regularly used kernel code has a lot of low-hanging security topics shaken out of it already.

    And second, I'm indeed wondering what a good path to minimizing the loadable kernel code on a system looks like. My container hosts for example have a fairly well-defined set of requirements, and IPsec certainly is not in there. So why not block everything solely made to support IPsec? I'm sure there is more than that.

    After all, the most reliable way to higher security is to do less things.

  • LLMs don't matter; Linux's codebase has been growing much faster than it can be secured, so this is all inevitable.

    Transitioning components to rust eliminates certain categories of bugs leaving the rest of the bugs to be dealt with.

    We'd likely end up needing another language with stronger type and effect systems to eliminate more categories of bugs. Probably something which enforces linear types, capabilities, units of measure types, and effects.

    And you'd have to update linux itself to switch to capabilities.

  • New vulns are introduced to Linux every day. Fuzzers trigger every single day on Linux. No, nothing will improve here from AI.

    • There's an argument to be made that new code will be inspected before being merged, and therefore the classes of bugs an LLM is likely to find will not be merged until they're fixed.

This is why you don't contact the distro mailing list. Responsible disclosure is dead.

  • At present it looks to me like the embargo was broken by someone identifying the patch as fixing a vulnerability, not someone leaking the mailing list.

    More information may come out, or I might be missing something, but assuming that the above is accurate, this isn't a problem with responsible disclosure or mailing list opsec; it's a problem with the nature of open source. Right? Or are folks seriously proposing that the patch/mitigations should have been circulated to distro maintainers privately before going to mainline?

    • > Or are folks seriously proposing that the patch/mitigations should have been circulated to distro maintainers privately before going to mainline?

      I always assumed that distro maintainers got early access to patches before going mainline but maybe that’s not true?

Anyone here with experience providing multi-tenant Linux systems (CI and the like), do providers usually disable kernel modules they don’t need to eliminate attack surface? Every time one of these comes out I wonder if I should be rotating every key in my GitHub CI or PaaS host. So far I haven’t seen any reports from the providers I use that they were pwned by any of these exploits.

  • A lot of these multi-tenant CI systems actually run everything in microVMs even if they present it to you as a container.

    At this point, a microVM can be booted in ~200ms, so you don't even have to keep a warm pool; you can just launch 'em on demand.

    GitHub CI (Actions) uses virtual machines.

Well this is getting tiresome. I wish there was a less stressful way to get fixes for such bugs. But the cat is out of the bag now.

Not criticizing whoever found the bug, of course.

Can this also be used to obtain a container escape?

  • If your container has setuid binaries and these modules are loaded, yes.

    • With the exploits published as-is, you'll only get root inside the container: there's no explicit namespace break, and calling setuid() in a container just gives you root in the container.

      However, it can be used to modify files that are passed into the container (e.g. docker run -v), or files that are shared with other containers (e.g. other Docker containers sharing the same layers). kube-proxy in Kubernetes happens to share a trusted binary with containers by default, which is how it can be exploited: https://github.com/Percivalll/Copy-Fail-CVE-2026-31431-Kuber...

    • It's poisoning the filesystem cache; if you don't have a setuid binary handy, you just poison anything else that gets executed by the host.

    • You don't need any setuid binaries. You could just as easily use the vulnerability to add a job to crontab(5) that causes the cron daemon to run whatever you want as root.
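
      To make that concrete: the system-wide crontab format documented in crontab(5) includes a user field, so a write primitive into its cached pages only has to land one line. A sketch, where the payload path is a placeholder:

      ```
      # /etc/crontab system-wide format (see crontab(5)): the sixth
      # field names the user to run the command as, so an injected
      # entry like this gets arbitrary code run as root within a minute.
      # m   h  dom mon dow user  command
      */1   *  *   *   *   root  /tmp/payload.sh   # placeholder payload path
      ```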

Testing the rxrpc vuln on aarch64, I get a kernel data abort, which is interesting. Not looked into the root cause yet!

Here's a general question: are these vulnerabilities hitting Linux more than the BSDs because it's a larger target, or because its architecture is less secure by design?

  • It’s two things. 1. Fewer eyes are on the BSDs.

    2. The BSDs don’t have the same optimizations that Linux has. The BSDs generally try to pursue correctness.

    That being said, there were just a bunch of vulnerabilities in FreeBSD.

    macOS has had its own Dirty COW attack, and I know there are for sure more memory ones just based on the way the XNU kernel works.

    So no, Linux isn’t really worse per se.

  • Larger target.

    • In many ways:

      - more people are using it (assuming macOS is in its own bucket, perhaps)

      - bigger surface area (esp. NetBSD has, in my limited understanding, just less stuff that can go boom)

      - more churn, i.e. more new stuff that can be buggy, released more often

      Of course, because of that, more eyes are on Linux, so I'm not sure where that security tradeoff is.

  • AFAIU, Linux and the BSDs have basically the same architecture - the BSDs just value secure, simple, understandable code more highly than Linux does, relative to features and performance.

So umm... should I rush home and turn off all my computers?

Two distro-independent LPEs in such a short time; if only all Linux software could be this portable.

Was the embargo ACTUALLY broken or is somebody just looking for attention?

  • >2026-05-07: After obtaining agreement from distribution maintainers to fully disclose Dirty Frag, the entire Dirty Frag document was published.

    you think the reporters and the distribution maintainers colluded to... get 5 minutes of attention?

    that would be exceptionally stupid of the distribution maintainers and destroy all trust.

Linux is a single user system and should be treated as such. Run your services as root. Don't rely on unix user primitives for security.

  • Running as root opens you up to a class of vulnerabilities (denial of service, mainly) that you can avoid by not running as root.

    That said, running every process in its own micro VM is looking more attractive by the minute.

    • Half the point is that you should always assume that there exists a complete LPE bug.

      But yes, micro VMs are a great idea!

  • I agree with the general sentiment. I treat anything running arbitrary machine code as if it has full access to a machine. I don't know where you get "run your services as root" from that, though. The principle of least privilege doesn't just apply to running malicious code, but running buggy code whose attack surface is exposed to evil-doers.

Every time someone finds a universal Linux privilege escalation, somewhere a sysadmin whispers 'this is why we don't run as root' while nervously checking if their containers are actually isolated.

  • This attack class lets you escalate from any user to UID 0. Not running as root won't save you; in fact, this attack is for those processes not running as root.

    However, if you are in a user namespace where UID 0 doesn't map to system-wide capabilities, and you don't share page cache for the setuid binaries on the system, this attack doesn't lead to LPE.
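
    Whether that first condition holds can be read out of /proc/self/uid_map, whose format (per user_namespaces(7)) is three fields per line: inside-UID, outside-UID, count. A sketch, with a hypothetical helper name, that checks whether in-namespace UID 0 maps to host root:

```python
# Sketch: decide whether UID 0 inside the current user namespace
# maps to UID 0 on the host, by parsing /proc/self/uid_map text
# (format per user_namespaces(7): inside-uid  outside-uid  count).

def uid0_maps_to_host_root(uid_map_text: str) -> bool:
    for line in uid_map_text.splitlines():
        fields = line.split()
        if len(fields) != 3:
            continue
        inside, outside, count = map(int, fields)
        # Does the range [inside, inside + count) cover in-namespace UID 0?
        if inside <= 0 < inside + count:
            return outside + (0 - inside) == 0
    return False  # UID 0 is unmapped in this namespace

# The initial namespace shows the identity mapping:
print(uid0_maps_to_host_root("0 0 4294967295"))   # True: root is host root
# A rootless-container style mapping:
print(uid0_maps_to_host_root("0 100000 65536"))   # False: "root" is host UID 100000
```

    In practice you would feed it `open("/proc/self/uid_map").read()`; the sample strings above are illustrative.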

Where is the famous "Linux is so much more secure than Windows"?

I would like to see the same hate comments about Linux as the ones we would see if this were a Windows vulnerability...