This was always a nightmare waiting to happen. The sheer mass of packages and the consequent vast attack surface for supply chain attacks was always a problem that was eventually going to blow up in everyone's face.
But it was too convenient. Anyone warning about it or trying to limit the damage was shouted down by people who had no experience of any other way of doing things. "import antigravity" is just too easy to do without.
Well, now we're reaching the "find out" part of the process I guess.
I worked for one company where we were super conservative. Every external component was versioned. Nothing was updated without review, and usually only after it had plenty of soak time. Pretty much everything was built from source (compilers, kernel, etc.). Build servers and infrastructure can't reach the Internet at all, and there's process around getting any change in. We reviewed all relevant CVEs as they came out to make a call on whether they applied to us and how to mitigate or address them.
Then I moved to another company where we had builds that access the Internet. We upgrade things as soon as they come out. And people think this is good practice because we're getting the latest bug fixes. CVEs are reviewed by a security team.
Then a startup with a mix of other practices. Some very good. But we also had a big CVE debt. E.g. we had secure boot on our servers and encrypted drives. We had a pretty good grasp on securing components talking to each other, etc.
Everyone seems to think they are doing the right thing. It's impossible to convince the "frequent upgrader" that maybe that's a risk in terms of introducing new issues. We as an industry could really use a better set of practices. For me, company #1's dependency management is the better example: in general, company #1 had well-established security practices and we had really secure products.
I would rather work with a company that updates continuously, while also building security into multiple layers so that weaknesses in one layer can be mitigated by others.
For example, at one company I worked for, they created an ACL model for applications that essentially enforced rules like: “Application X in namespace A can communicate with me.”
This ACL coordinated multiple technologies working together, including Kubernetes NetworkPolicies, Linkerd manifests with mTLS, and Entra ID application permissions. As a user, it was dead simple to use and abstracted away a lot of things I do not know that well.
The important part is not the specific implementation, but the mindset behind it.
An upgrade can both fix existing issues and introduce new ones. However, avoiding upgrades can create just as many problems — if not more — over time.
At the same time, I would argue that using software backed by a large community is even more important today, since bugs and vulnerabilities are more likely to receive attention, scrutiny, and timely fixes.
You forgot case #4: worked at a startup where the frontend team thought it was a good idea to use lock files during development, but to do a "fresh" install of all dependencies during the deployment step.
And yes, they still thought they were doing the right thing.
> Everyone seems to think they are doing the right thing
I like to think people would agree more on the appropriate method if they saw the risk as large enough.
If you could convince everyone that a nuclear bomb would get dropped on their heads (or a comparably devastating event) if a vulnerability gets in, I highly doubt a company like #2 would still believe they're doing things optimally, for example.
So, to play Pandora, what if the net effect of uncovering all these unknown attack vectors is it actually empties the holsters of every national intelligence service around the world? Just an idea I have been playing with. Say it basically cleans up everything and everyone looking for exploits has to start from scratch except “scratch” is now a place where any useful piece of software has been fuzz tested, property tested and formally verified.
Assuming we survive the gap period where every country chucks what they still have at their worst enemies, I mean. I suppose we can always hit each other with animal bones.
TBH this is a pretty good way of looking at it. Yeah we're seeing an explosion of vulnerabilities being found right now, but that (hopefully) means those vulnerabilities are all being cleaned up and we're entering a more hardened era of software. Minus the software packages that are being intentionally put out as exploits, of course. Maybe some might say it's too optimistic and naive, but I think you have a good point.
Having casually read into a few recent incidents, the vector has often been outside the software itself: a lot of misconfigurations, or simply attacking the human in the chain. And nation states have basically unbounded resources for everything from bribes to insiders to standing up entire companies.
I think it will be an arms race in the future as well. Easier to fix known vulnerabilities automatically, but also easier to find new ones, and the occasional AI fuckup instead of the occasional human fuckup.
This assumes that there are no new exploits being generated.
We're seeing maintainers retreat from maintaining because the amount of AI slop being pushed at them is too much. How many are just going to hand over the maintenance burden to someone else, and how many of those new maintainers are going to be evil?
The essential problem is that our entire system of developing civilisation-critical software depends on the goodwill of a limited set of people to work for free and publish their work for everyone else to use. This was never sustainable, or even sensible, but because it was easy we based everything on it.
We need to solve the underlying problem: how to sustainably develop and maintain the software we need.
A large part of this is going to have to be: companies that use software to generate profits paying part of those profits towards the development and maintenance of that software. It just can't work any other way. How we do this is an open question that I have no answers for.
New software is being generated faster than it can be adequately tested. We are in the same place we’ve always been; except everything is moving much too fast.
What we are seeing so far out of the AI agent era is reduced, not increased, code quality. The few advances are more than negated by all the slop that's thrown around, and that's unlikely to change.
> any useful piece of software has been fuzz tested, property tested and formally verified.
That would require effort. Human effort and extra token cost. Not going to happen; people would rather move fast and break things.
I've been wanting a capability based security model for years. Argued about it here in fact. Capabilities are kind of an object pointer with associated permissions - like a unix file descriptor.
We should have:
- OS level capabilities. Launched programs get passed a capability token from the shell (or wherever you launched the program from). All syscalls take a capability as the first argument. So, "open path /foo" becomes open(cap, "/foo"). The capability could correspond to a fake filesystem, real branch of your filesystem, network filesystem or really anything. The program doesn't get to know what kind of sandbox it lives inside.
- Library / language capabilities. When I pull in some 3rd party library - like an npm module - that library should also be passed a capability too, either at import time or per callsite. It shouldn't have read/write access to all other bytes in my program's address space. It shouldn't have access to do anything on my computer as if it were me! The question is: "What is the blast radius of this code?" If the library you're using is malicious or vulnerable, we need to have sane defaults for how much damage can be caused. Calling lib::add(1, 2) shouldn't be able to result in a persistent compromise of my entire computer.
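The library-level idea can be approximated today with the openat() family of syscalls: hand the dependency a directory descriptor instead of letting it use a global open(). A minimal Python sketch of the pattern (function and file names are made up, and this is only an approximation of real capabilities):

```python
import os
import tempfile

def lib_read(dir_cap: int, name: str) -> bytes:
    """Hypothetical third-party library entry point: it receives a
    directory file descriptor (its 'capability') instead of ambient
    authority, and can only name files relative to that descriptor
    (openat() under the hood)."""
    fd = os.open(name, os.O_RDONLY, dir_fd=dir_cap)
    try:
        return os.read(fd, 4096)
    finally:
        os.close(fd)

with tempfile.TemporaryDirectory() as sandbox:
    with open(os.path.join(sandbox, "data.txt"), "w") as f:
        f.write("hello")
    cap = os.open(sandbox, os.O_RDONLY)   # the capability token for this dir
    try:
        data = lib_read(cap, "data.txt")  # the library never sees a full path
    finally:
        os.close(cap)

print(data)
```

Caveat: plain openat() still follows ".." and symlinks out of the directory, so this is containment by convention only; real confinement needs something like Linux's openat2() with RESOLVE_BENEATH, or Capsicum's capability mode.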
seL4 has fast, efficient OS-level capabilities. It's had them for years. They work great. They're fast, faster than Linux in many cases. And tremendously useful. They allow for transparent sandboxing, userland drivers, IPC, security improvements, and more. You can even run Linux as a process in seL4. I want an OS that has all the features of my Linux desktop, but works like seL4.
Unfortunately, I don't think any programming language has the kind of language-level capabilities I want. Rust is really close. We need a way to restrict a 3rd party crate from calling any unsafe code (including from untrusted dependencies). We need to fix the long-standing soundness bugs in Rust. And we need a capability-based standard library. No more global open() / listen() / etc. Only openat(), and equivalents for all other parts of the OS.
If LLMs keep getting better, I'm going to get an LLM to build all this stuff in a few years if nobody else does it first. Security on modern desktop operating systems is a joke.
Capabilities have a lot of serious design problems which is why no mainstream language has them. Because this comes up so often on HN I wrote an essay explaining the issues here:
But as pointed out by others, this particular exploit wouldn't be stopped by capabilities. Nor would it be stopped by micro-kernels. The filesystem is a trusted entity on any OS design I'm familiar with as it's what holds the core metadata about what components have what permissions. If you can exploit the filesystem code, you can trivially obtain any permission. That the code runs outside of the CPU's supervisor mode means nothing.
The only techniques we have to stop bugs like this are garbage collection or use of something like Rust's affine type system. You could in principle write a kernel in a language like C#, Java or Kotlin and it would be immune to these sorts of bugs.
Note that capabilities would not help for those bugs we are discussing today.
Those exploits are in the kernel, and userspace is only making normal, allowed calls. Replacing global open()/listen()/etc. with capability-based versions would still allow one to trigger the same kernel bugs.
(Now, using a microkernel like seL4 where the kernel drivers are isolated _would_ help, but (1) that's independent from what userspace does, you can have a POSIX layer on seL4, and (2) that would be way more context switches, so a performance drop.)
Most people will avoid sticking things in their mouth by default. They don't wait for the microbial cultures to come back positive to say no.
We need a cultural shift toward code hygiene, which isn't really any different from the norms most cultures develop around food. It's a mix of crude heuristics but the sense of "eeew" is keeping billions of people alive.
The billions of burgers served by fast food franchises with long histories of poisoning people would argue that delicious convenience overrides the hygiene instinct.
Which is to say: Hiding the sausage-making is a core aspect of what makes supply chains profitable.
Indeed - one year ago we floated the idea that it is better to write your own code if you can than to pull in third parties. But it was heresy at the time to consider LLMs filling the gaps.
Today I'm limiting exposure to dependencies more than ever, particularly for things that take a few hundred lines to implement. It's a paradigm shift, no less.
This replaces supply chain trust with the trust in the LLM and the provider you're using. Even if you exclude model devs from your threat model and are running the LLM yourself, it's still an uninterpretable black box that is trained on the web data which can be and is manipulated precisely to attack LLMs during training. So this approach still needs proper supply chain security.
There are a lot of libs you really can't justify implementing from scratch. Mathjs and node-mysql jump to mind. Poisoned chains build up from small dependencies, and clearly staying on top of your dependency chain should be a full time job - if anyone was willing to pay someone to do that full time.
I am feeling really uncomfortable sitting on a large React project.
Whether to do constant npm upgrades to keep the high-priority security issues count at zero (for what seems like about 15 minutes), or whether to hang back a bit to avoid catching the big one that everyone knows is coming real soon now.
Realistically, most folks don't get paid to mitigate long term risks by deviation from the common (and more efficient) practice.
Big companies have security roles on multiple levels, enforcing policies and not allowing devs to just install any package. That's not new but started maybe 15 years ago.
I am feasting on Schadenfreude as the SWE industry grapples with the messes it made and uncertain employability in the near future; AI is not 30 years away like when I started.
All the arrogant asocial coder bros cast aside.
All the poorly reasoned shortcuts due to hustle culture and "git pull the world" engineering, startups aura farming on Twitter/social media about their cool sweatshop, labor-exploiting tech jobs...
Watching AI come around and the 2010s messes blow up in faces... chef's kiss
Considering the amount of money at stake, Software is a deeply, deeply unserious and careless industry, and a great many practitioners are also deeply unserious and careless people. Yet, somehow the world goes on, these companies siphon up money, and all harms they cause are externalized.
IT is (was?) one of the very few ways for us in third-world countries to pull ourselves out of poverty by our own bootstraps, since social mobility is quite limited if you lack the right connections. I'm pleased with you being so happy about it being taken away to make more money for billionaires.
My pet theory is that package managers will one day be seen like we see object-oriented programming today. As something that was once popular but that we've since grown out of. It's also a design flaw that I see in cargo/Rust. Having to import 3rd party packages with who-knows-what dependencies to do pretty much anything, from using async to parsing JSON, it's supply chain vulnerability baked into the language philosophy. npm is no better, but I'm mentioning Rust specifically because it's an otherwise security-conscious language.
The industry hasn't grown out of OOP. Go look at any major production codebase businesses rely on and it's full of objects and classes, including new codebases made very recently.
Package managers aren't going anywhere. Even languages that historically bet on large standard libraries have been giving up on that over time (e.g. Java's stdlib comes with XML support but not JSON).
Unfortunately, LLMs are also not cheap enough to just create whole new PL ecosystems from scratch. So we have to focus on the lowest hanging fruits here. That means making sandboxing and containers far more available and easy for developers. Nobody should run "npm install" outside a sandbox.
I think what we have to start accepting, even security experts, is that our world is incredibly fragile. I think people really underestimate this. And I do not mean just the IT world; the entire world is built on many incredibly fragile balances. Security exploits will always exist, not just in software but in real life. Heck, someone managed to sneak into a security conference, and that guy was a random youtuber. Granted, that was not a high-security thing, but it's just an example I had off the top of my head. Basically, it is really easy to circumvent security in most cases.
What I want to say with that is that fundamentally our world works because at least most people do not abuse shit. That is how human society has always worked, and will likely continue to do so.
I remember there was a trend with some UK influencers using some "ladder and a high-vis" tricks to enter places for a while, to show how rough physical security is [0]. I believe it's the youtuber Max Fosh who managed to do it back to back at the International Security Expo, first in the UK [1] and then in the US [2], with the fake names 'Rob Banks' and 'Nick Everything'.
I've studied security culture before and in most cases everything comes down to a sliding scale with security on one side and convenience/accessibility on the other, the more secure something is, the less accessible it is and vice versa.
"Wait a week to install software" does not work. Just a few months ago a massive exploit hit the web, which was a timed attack which sat for more than a month before executing. If everyone starts waiting a week, their exploits will wait 2 weeks. Cyber criminals do not need to exploit you immediately, they just need to exploit you. (It also doesn't change a large range of vuln classes like typosquatting)
I think the author was suggesting "wait a week" as a one-time wait for fixes to be written and patches distributed for these specific prematurely-disclosed vulnerabilities, not an on-going suggestion for delaying all updates. But otherwise I agree with you.
I think you misunderstood the article. The proposal isn't "wait a week after software has been published before installing it." It's "in the next seven days, starting now, just don't," because you probably don't have patches for these vulnerabilities, and even if you do, there are probably more scary vulnerabilities about to be discovered.
> Right now would be one of the best times for a supply chain attack via NPM to hit hard.
Given the local kernel root exploits, people pulling npm dependencies have an extra high chance of getting rooted. This includes test systems, build systems, the web server running node.js backend, etc. etc. etc.
This means that there is a significantly greater chance that whatever software you download (not necessarily npm-based) on the internet in these couple days has been unknowingly infected with backdoors, simply due to the fact that the vast majority of servers out there that use npm code have easily exploitable vulnerabilities.
well then let's wait a month or even two months. The point of the wait period is primarily to avoid the new installation of exploits, not the execution of already installed exploits.
A popular package has more exposure. When the artefact is published, the entire world can see it. Hopefully some people check the diff between versions. But without any delays then you could be hit by exploits nobody has seen yet.
Every dependency compromise that I can remember "in the past few months" were discovered in hours, if not minutes (litllm, axios, bitwarden CLI, Checkmarx docker images, Pytorch lightning, intercom/intercom-php). What's more, the discovery of these compromises did not at all rely on whether the compromises were actively used.
That's why I don't understand:
> If everyone starts waiting a week, their exploits will wait 2 weeks
Alternatively, switch to an operating system like FreeBSD which doesn't take a YOLO approach to security. Security fixes don't just get tossed into the FreeBSD kernel without coordination; they go through the FreeBSD security team and we have binary updates (via FreeBSD Update, and via pkgbase for 15.0-RELEASE) published within a couple minutes of the patches hitting the src tree. (Roughly speaking, a few seconds for the "I've pushed the patches" message to go out on slack, 10-30 seconds for patches to be uploaded, and up to a minute for mirrors to sync).
I'm somewhat skeptical here, because I notified the FreeBSD security team of a vulnerability a few years ago, and I never got a response, even after a follow-up email a few weeks later. To be fair, my report was about a non-core component, and the vulnerability wouldn't be very easy to exploit, but Debian, OpenBSD, SUSE, and Gentoo all patched it within a week [0].
That being said, I'm not suggesting that anyone should judge an entire OS based off of how they handle a single minor report, since everything else that I've seen suggests that FreeBSD takes security reports quite seriously. But then you could also use this same argument for the Linux kernel bug, since it's pretty rare for a patch to be mismanaged like this there too :)
Linux Kernel doesn’t differentiate between security bugs and other bugs, which is the main complaint here I think. They have the same process.
So the issue is bigger than the mishandling of a single issue, it’s a fundamental process issue around security for one of the most impactful projects in the entire space.
If you are switching to a BSD for security reasons, why FreeBSD? Isn't OpenBSD the super secure one? Sorry, it's been a while since I've looked at those projects
The person suggesting FreeBSD is a FreeBSD developer (Colin Percival - actually according to Wikipedia FreeBSD engineering lead), would be weird for him to suggest openbsd.
FreeBSD didn’t have user land ASLR until 2019 and, amongst other mitigations, still doesn’t have kASLR. It’s not a serious operating system for people who care about security. If you want FreeBSD and security take Shawn Webb’s HardenedBSD.
There's always a guy. It's great that your favorite distro is definitely safer. An order of magnitude fewer exploits will mean only a few thousand or so, I suppose. Ozymandias used Gentoo.
FreeBSD is not a distro. It's not even Linux; it's a completely different kernel and operating system that traces back to even before Linux. It's honestly closer to Darwin than it is to Linux; macOS is technically a BSD. (Not FreeBSD though.)
Been constructing a lot of infrastructure servers recently, almost all of them FreeBSD VMs running under bhyve on FreeBSD physical hosts. It's a very simple, clean, pleasant environment to work in. And they all run tarsnap. ;-)
Debian is probably the best of all the Linuxes, but still suffers from split-brain: If patches are sent upstream first, Debian can't start digesting them until they're already public.
With FreeBSD there's never any question of "who should this get reported to".
There's already an okay solution to supply-chain attacks against dependency managers like npm, PyPI, and Cargo: set them to only install package versions that are more than a few days old. The recent high-profile attacks were all caught and rolled back within a day, so doing this would have let you safely avoid the attacks. It really should be the default behavior. Let self-selected beta testers and security scanner companies try out the newest versions of packages for a day before you try them. Instructions: https://cooldowns.dev/
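The cooldown logic itself is simple enough to sketch. Assuming registry metadata shaped like the per-version publish timestamps the npm registry returns (the package versions and dates below are invented for illustration):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical registry metadata: version -> publish time (ISO 8601),
# roughly the shape of the npm registry's "time" field.
publish_times = {
    "4.17.20": "2026-01-02T09:00:00Z",
    "4.17.21": "2026-02-27T18:30:00Z",  # published only hours ago
}

def allowed_versions(times, now, cooldown_days=7):
    """Keep only versions that have been public longer than the cooldown."""
    cutoff = now - timedelta(days=cooldown_days)
    return sorted(
        v for v, ts in times.items()
        if datetime.fromisoformat(ts.replace("Z", "+00:00")) <= cutoff
    )

now = datetime(2026, 2, 28, tzinfo=timezone.utc)
print(allowed_versions(publish_times, now))  # the fresh release is filtered out
```

A resolver applying this rule would pick 4.17.20 today and only graduate 4.17.21 once it has survived the cooldown window in public.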
An artifact manager. Only get what you approve. So you can get fast updates when needed and consistently known stable when you need it. Does need a little config override - easy work.
I had my own janky tooling for something like it. This is a good project.
Does that really scale well? Thanks to cascading dependencies, even a medium-sized project can import hundreds of packages. Can a developer really review them all to figure out whether they are safe, and whether a newer version of a package contains a security fix they're missing?
So you get security updates late too? Many vulnerabilities are in the wild for years before being noticed, and patched.
Once noticed, that's where the exploit explosion erupts, excited exploiters everywhere, emboldened... enticed... excessively encouraged, by your delayed updates.
Presumably npm exempts security updates from its minimum release age, but even if it doesn't, I think the times where you need an important security update are relatively rare enough that handling the real cases on a case-by-case basis with whitelisting is fine. Outside of Next.js's React2Shell vulnerability last year, I'm not sure I've ever had a security update of a dependency written in a memory-safe language (ie. not C/C++) which I've installed through npm/PyPI/Cargo that patched a security vulnerability that had been making my application exploitable to others in practice. Almost all security vulnerabilities I've personally seen flagged through npm are about things I only use at build-time and are only relevant if a user can create and pass an arbitrary object to the function, which is rarely the case. Most security vulnerabilities I've encountered and fixed in working on web apps were things like XSS, SQL injections, and improperly enforced permissions, and they nearly always happened in the application's own code rather than inside a dependency.
IMO, the most sustainable version is the Linux distro / BSD ports / Homebrew model. You don't push new libraries to the public registry; instead you write a packaging script that gets reviewed for every new change.
Another model is Perl's CPAN where you publish source files only.
Trust me, as someone who has contributed to such a package set, almost nobody is inspecting diffs between upstream versions when updating a package. Only the package definitions themselves are reviewed, but they are typically only version + hash bumps.
Reviewing upstream diffs for every package requires a lot of man hours and most packagers are volunteers. I guess LLMs might help catching some obvious cases.
For the newer players who have gotten into continuous integration and containerized builds, consider checking on your systems to be sure you're not pulling 'latest' across a bunch of packages with every build.
We set up our base containers with all the external dependencies already in them and then only update those explicitly when we decide it's time.
This means we might be a bit behind the bleeding edge, but we're also taking on a lot less risk with random supply chain vulns getting instant global distribution.
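As a sketch of that setup (image names, tags, and the digest reference are placeholders, not recommendations):

```dockerfile
# Hypothetical base image: third-party dependencies are baked in once,
# and the image is only rebuilt when we explicitly decide to update.
FROM node:20-bookworm

WORKDIR /app

# Install from a committed lockfile, never 'latest'; --ignore-scripts
# also blocks install-time hooks, a common supply chain vector.
COPY package.json package-lock.json ./
RUN npm ci --ignore-scripts

# Application builds then start FROM this image by an immutable digest,
# e.g. FROM registry.example.com/base:2026-02@sha256:<digest>
```

Pinning downstream builds to a digest rather than a mutable tag is what actually prevents a fresh compromise from flowing into every CI run automatically.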
Actively destructive opinion article. I could not begin to understand the rationale.
It takes 45 seconds to go check how old the copyfail and dirtyfrag vulnerabilities actually are. Which is longer than it takes to read TFA. Dirtyfrag may be relevant to systems from as far as 2017.
It's not "new" software being affected. And actual old software is in a much worse state because we had a lot more time to find their problems.
The patches for the latest vulnerabilities aren’t even out yet. So it would be a real bad time for a new supply chain attack since it would get root on pretty much every system.
At some point, some people will rebuild an entire stack (all layers, from OS to applications) with proof carrying code upgrades. Proof-code co-design and co-construction is the only way to execute code that you can trust.
I'm holding off on upgrading to Ubuntu 26.04 LTS until we have a few months of experience with the new release. Canonical just had a huge DDOS attack, and there might have been other attacks hidden in all that traffic.
Literally implemented PR guards today to prevent the team merging any dependencies that didn’t have explicit versions pinned (and that matched the resolution in the lock file).
People lamented semver not being trustable but that ship sailed a long time ago, and supply chain attacks are going to get worse before they get better.
Our team is pretty minimal when it comes to enforced hooks (everyone has their own workflow) but no one could come up with an objection to this one.
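For anyone wanting to replicate that guard, the core check is small. A hedged sketch in Python against a made-up package.json (a real guard would also cover devDependencies and cross-check the lock file's resolved versions, as described above):

```python
import json
import re

# Example manifest; package names and versions are invented.
manifest = json.loads("""
{
  "dependencies": {
    "left-pad": "1.3.0",
    "express": "^4.18.2"
  }
}
""")

# An exact pin: three numeric components, no ^/~ range operators or tags.
EXACT = re.compile(r"^\d+\.\d+\.\d+$")

violations = [
    f"{name}@{spec}"
    for name, spec in manifest.get("dependencies", {}).items()
    if not EXACT.match(spec)
]
print(violations)  # anything listed here fails the PR check
```

Wired into CI as a required status check, this rejects any merge that introduces a floating range, which is exactly the semver-trust problem the guard is meant to close.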
What’s interesting here is that the exploit chain itself isn’t especially novel anymore — page cache corruption has become a recurring pattern (Dirty Pipe, Copy Fail, Dirty Frag). The worrying part is how quickly public patches are now being reverse-engineered into weaponized exploits.
The old “quiet patch before disclosure” model may simply not work anymore in the LLM era.
This gets me to ask whether I have been hacked. For a few weeks now, both my main MBP and iPhone have been showing unexpected hangs of 1-30 seconds. I can't find out what's causing it: not memory pressure, not CPU load.
I am worried that the sluggishness appeared about the same time on both devices
For iOS, rebooting your phone is extremely effective at removing exploits. The boot chain attestation stuff can verify the system is in a known state. If you are ultra paranoid you could enable Lockdown Mode, which preemptively disables the entry points for exploits. So far I don't believe there has been any exploit which works with Lockdown Mode enabled.
In this case, no insiders broke the embargo. It was reverse engineered from the patch by an unrelated third party and a proof of concept immediately came out of it. At that point, it's kinda fair game.
I assume that while Mythos may be really good at finding vulnerabilities, lighter models may still do a pretty good job of explaining/exploiting the vulnerability if given the patch which fixes it.
Less a gentleman's agreement and more of a question of economic incentives going away. Companies aren't paying out bounties at the rates they used to (possibly because they've realized there's little financial incentive to do so for most findings) and simultaneously they're being inundated with AI slop findings that somehow have to still be triaged and evaluated.
> Because the responsible disclosure schedule and the embargo have been broken, no patch exists for any distribution.
I had to do a double take reading that. It's written as if something happened that prevented them from following the schedule, but seemingly they chose to release the information. I hope I'm missing something where it was forcibly disclosed elsewhere.
Edit:
Moments later I refreshed the homepage and saw the announcement. They do claim to have consulted with maintainers
With copy.fail, the security patch wasn't listed as such, so there wasn't a lot of attention on the issue and it remained dormant in most kernels for a while.
I don't doubt that the patch reversal + exploit PoC made by a third party is the result of people figuring out how patches work in open source projects like these.
Anyone with access to a good enough LLM can scour supposedly minor bug fixes that might hide a critical vulnerability, rather than doing it all manually. The LLM will probably throw up tons of false positives and miss half the issues, but you only need one or two successes.
I wonder whether there is any tool that can prevent npm from downloading any package that has been published in the last month. While I miss out on possible fixes, this would prevent downloading some 3rd level dep that takes over my machine.
npm seems to have introduced the flag `minimumReleaseAge` for this exact purpose. However, even though there are many recent references to it [0][1][2], I don't see it anywhere in the npm documentation.
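For what it's worth, pnpm does document a cooldown setting under this name. A sketch of what the config looks like as I understand it (field names, units, and the exclusion pattern should be verified against your package manager's current docs):

```yaml
# pnpm-workspace.yaml: refuse to resolve versions younger than the cooldown.
# Value is in minutes; 10080 is roughly one week.
minimumReleaseAge: 10080

# Optional escape hatch for packages you trust to need same-day updates
# (the scope below is a made-up example).
minimumReleaseAgeExclude:
  - "@myorg/*"
```

This trades being a few days behind on releases for letting the rest of the ecosystem trip over a compromised version first.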
The whole (mistaken) belief that Linux and macOS didn't require AV was based on the execute bit being present, something Microsoft addressed back in XP by marking downloaded files as such and preventing them from being opened trivially.
If you have code execution, you can attack the OS.
Indeed, when one installs dependencies all over the Internet, or even better, key projects use "curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh" as default suggestion on how to install them, attackers have the work done for them.
The problem here is that Homebrew does things in an anti-Unix way by default, the auto-updating of packages being the prominent example.
I personally switched away from macOS with this being one of the reasons, after having realized brew would eventually compromise my system with these antics.
To mitigate supply chain attacks like this, I've taken to specifying exact versions in my Rust Cargo.toml, and when importing new crates, selecting the previous-to-latest version. Is this a reasonable mitigation? It bugs me that Swift deprecates the concept of specifying exact versions; it actively pushes you towards semver ranges, which leave the door open to this.
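For reference, Cargo does support exact pins via the `=` comparison operator (the crate names and version numbers below are purely illustrative):

```toml
# Cargo.toml: '=' requires this exact version. A bare "1.0.200" is
# treated as a caret range ("^1.0.200") and can silently float to a
# newer compatible release on `cargo update`.
[dependencies]
serde = "=1.0.200"
rand = "=0.8.5"
```

Note that for application builds a committed Cargo.lock already freezes the full dependency graph; `=` pins additionally constrain what `cargo update` is allowed to select.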
Yes, and, for non-personal machines or anything connected to the internet: now is a great time to get good at rolling out patches and new releases quickly.
Except that a lot of software likely is already broken in fun ways we currently don't know about. That is what makes it such a "fun" challenge. Supply chain attacks are one thing, but CVEs in already released software allowing other attackers are another.
As always, I know most of us work in IT, but things rarely are actually binary.
You don't need a kernel LPE to root a Linux developer machine.
Just alias sudo to sudo-but-also-keep-password-and-execute-a-payload in ~/.bashrc and wait up to 24 hours. Maybe also simulate some breakage by intercepting other commands and force the user to run 'sudo systemctl' or something sooner rather than later.
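The trick described is depressingly small. A harmless sketch of the idea (this version only logs that it ran; a real attacker would capture the password and execute a payload before delegating):

```shell
# Harmless demo of the ~/.bashrc shadowing trick: a shell function named
# "sudo" runs an attacker-chosen step first, then hands off to the real
# binary so the user notices nothing.
sudo() {
    echo "shadowed sudo invoked: $*" >> /tmp/sudo-shadow.log  # the "payload"
    command sudo "$@"                                         # real sudo, real prompt
}
```

Dropped into ~/.bashrc, the fake runs first because shell functions shadow PATH lookups; `command sudo` then invokes the genuine binary so nothing looks off.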
This, this is something I don't understand. There are a billion ways to gain root once you control the user that regularly uses sudo.
This is only scary for rootless containers, as it skips an isolation layer. But we've started shipping distroless containers, which are not vulnerable to this because they lack privilege-escalation commands such as su or sudo.
Never trust software to begin with: sandbox everything you can, and don't run it on your machine at all if possible.
I agree that de facto the biggest security flaw in Linux is "okay I'm tired of getting interrupted all day assisting you, I know you're competent, I'll put you on the sudoers list."
But there are a lot of academic and research institutions that actually do have good Linux user management. I worked at a pediatric hospital, and the RHEL HPC admins did not mess around in terms of who was allowed to access which patients' data. As someone who was not an admin, it was a huge pain and it should have been. So this bug has pretty serious implications, seems like anyone at that hospital can abscond with a lot of deidentified data. [research HPC not as sensitive as the clinical stuff, which I think was all Windows Server]
> this, this is something I don't understand there are a billion ways to gain root once you control the user that regulary uses sudo.
I won't enter into all the details but... It's totally possible to not have the sudo command (or similar) on a system at all and to have su with the setuid bit off.
On my main desktop there's no sudo command, and there are zero binaries with the setuid bit set.
The only way to get root involves an "out-of-band" access, from another computer, that is not on the regular network [1].
This setup has worked for me for years. And years. And I very rarely need to be root on my desktop. When I do, I just use my out-of-band connection (from a tiny laptop whose only purpose is to perform root operations on my desktop).
For example today: I logged in as root and blocked the three modules with the "dirty page" mitigation suggested by the person who reported the exploit.
You're not faking sudo with a mockingbird on my machine. You're not using "su" from a regular user account. No userns either (no "insmod", no nothing).
Note that it's still possible to have several non-root users logged in at once: from one user account you cannot log in as another, but you can switch to TTY2, TTY3, etc. and log in as another user there. And the whole XKCD about "get local account, get everything of importance" isn't valid in my case either.
I'm not saying it's perfect but it's not as simple as "get a local shell, wait until user enters 'sudo', get root". No sudo, no su.
It's brutally simple.
And, best of all, it's a fully usable desktop: I've been using such a setup for years (I've also got servers, including at home, with Proxmox and VMs etc., but that's another topic).
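A quick way to check the "zero setuid binaries" claim on your own machine (the trailing `|| true` is because find exits non-zero on directories it can't read):

```shell
# List every setuid/setgid executable in the usual system locations.
# On the no-sudo setup described above, this should print nothing at all.
find /usr /bin /sbin -xdev -type f \
    \( -perm -4000 -o -perm -2000 \) -print 2>/dev/null || true
```

`-perm -4000` matches the setuid bit, `-2000` the setgid bit; a stock distro typically shows sudo, su, passwd, mount, and a handful of others here.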
Right. A bigger issue is multitenant systems, which are common in academia (I manage several such systems for various experiments). We generally trust the users not to be malicious, but most don't get sudo, because physicists tend to think they know what they're doing when they don't really (except for me, of course).
Something that concerns me more: I run things like gemini-cli or claude-cli under their own non-sudo accounts, with no SSH keys or anything, on my laptop. But an LPE means they can find a way around such restrictions if they feel like it (and they might).
What are people thinking with these meme style vulnerability names? It's going to be hard to pitch "we need to push back the timeline on this new infrastructure deploy while we mitigate Copy Fail 2: Electric Boogaloo".
Personally I'm choosing to keep my home server behind a VPN and to enable Lockdown Mode on my phone and laptop for a while until the dust settles. As well as just limiting the software installed to trusted projects only.
VM isolation would still be safe even with these kernel exploits.
Of course, why didn't anyone think of that? I bet if someone started to ship software that has no errors they'd make a huge amount of money, especially from all the people that are security-minded!
You raise a really good point. If everyone is doing this at exactly the same lag, then it will eventually start hitting groups in sync at the exact same time.
Fun fact: you still can't build the vllm container with updated dependencies since litellm got pwned, either due to regression bugs or due to unresolvable transitive dependencies in the dependency tree. There is just too much slopcode down the line, and too many dependencies relying on pinned, outdated (and unpublished) dependencies.
I switched to llama.cpp because of that.
To me it feels more and more that the slopcode world is the opposite philosophy of reproducible builds. It's like the anti methodology of how to work in that regard.
Before, everyone was publishing breaking changes in patch versions because nobody adhered to any API versioning standard. Now it's every commit that can break things. That is not an improvement.
Write-only code is such a bad, bad idea. No one is reviewing a 20k-LOC PR with 15 new dependencies in an afternoon. Sorry, it's just not happening, I don't care how many years you have been a software engineer. Yet that's the new thing and how we're all supposed to work, or else we're Luddites.
I'm personally waiting to be downgraded to simply being called "lazy".
When I see pages of obviously generated prose being submitted as any kind of documentation, my eyes just glaze over. I feel so guilty sharing similar stuff too, though to my credit, at least I always lead with a self-written TLDR, the slop is just for reference. But it's so bad, like genuinely distressing tier. I don't want to read all that junk, and more and more gets produced.
Prose type docs have always been my Achilles heel, and this is like the worst possible evolution of that.
For a brief period in the past few weeks, they somehow managed to make a change to ChatGPT Thinking that made it succinct. The tone was super fact-oriented too. It was honestly like waking up from a fever dream.
Fedora upgrades have usually been great, but I jumped the gun on Fedora 44. Sound completely dead with no PipeWire service available. ALSA not responding. Firefox dies immediately if I open a new tab or right-click anywhere on the browser itself (including nightly builds). QEMU refuses to load. Maybe something got completely f'd in the upgrade process; I never had an issue before, having upgraded from Fedora 38 all the way to 43. I am too tired to investigate it all.
I know this is unrelated to the article, but related to the title.
If this is still the same install that you've been using since 38, you might find a clean install resolves some issues (whether or not your upgrade got botched). Also helps me get rid of software I installed that I don't use anymore, which I feel is relevant to this article. But part of why I love Silverblue so much is I don't have to worry about upgrades getting botched and fwiw as well, I haven't noticed any of those bugs on 44 across several very different machines.
I had a day-1 crashloop with KWin on the second desktop, but on day 2 some package update fixed it. Honestly it isn't the first time Fedora upgrades have messed something up for me either, but I do think it's more stable than the average Ubuntu release, not that I've upgraded Ubuntu in like 5 years.
Don't install anything, use an LLM to write everything from scratch. It may have bugs, but no one will know how to exploit them, especially when closed source.
Code is cheap and is becoming cheaper by the day. We need new paradigms.
LLMs have been used to scan binary blobs for exploits already. What would be more effective is a system designed with multiple layers of security so any one exploit is largely useless.
This was always a nightmare waiting to happen. The sheer mass of packages and the consequent vast attack surface for supply chain attacks was always a problem that was eventually going to blow up in everyone's face.
But it was too convenient. Anyone warning about it or trying to limit the damage was shouted down by people who had no experience of any other way of doing things. "import antigravity" is just too easy to do without.
Well, now we're reaching the "find out" part of the process I guess.
I worked for one company where we were super conservative. Every external component was versioned. Nothing was updated without review and usually after it had plenty of soak time. Pretty much everything built from source code (compilers, kernel etc.). Builds [build servers/infra] can't reach the Internet at all and there's process around getting any change in. We reviewed all relevant CVEs as they came out to make a call on if they apply to us or not and how we mitigate or address them.
Then I moved to another company where we had builds that access the Internet. We upgrade things as soon as they come out. And people think this is good practice because we're getting the latest bug fixes. CVEs are reviewed by a security team.
Then a startup with a mix of other practices. Some very good, but we also had a big CVE debt. E.g. we had secure boot on our servers and encrypted drives. We had a pretty good grasp on securing components talking to each other etc.
Everyone seems to think they are doing the right thing. It's impossible to convince the "frequent upgrader" that maybe that's a risk in terms of introducing new issues. We as an industry could really use a better set of practices. Company #1 is my example of better dependency management. In general, company #1 had well-established security practices and we had really secure products.
I would rather work with a company that updates continuously, while also building security into multiple layers so that weaknesses in one layer can be mitigated by others.
For example, at one company I worked for, they created an ACL model for applications that essentially enforced rules like: “Application X in namespace A can communicate with me.” This ACL coordinated multiple technologies working together, including Kubernetes NetworkPolicies, Linkerd manifests with mTLS, and Entra ID application permissions. As a user, it was dead simple to use and abstracted away a lot of things I don't know that well.
The important part is not the specific implementation, but the mindset behind it.
An upgrade can both fix existing issues and introduce new ones. However, avoiding upgrades can create just as many problems — if not more — over time.
At the same time, I would argue that using software backed by a large community is even more important today, since bugs and vulnerabilities are more likely to receive attention, scrutiny, and timely fixes.
You forgot case #4: worked at a startup where the frontend team thought it was a good idea to use lock files during development, but to do a "fresh" install of all dependencies during the deployment step.
And yes, they still thought they were doing the right thing.
> Everyone seems to think they are doing the right thing
I like to think people would agree more on the appropriate method if they saw the risk as large enough.
If you could convince everyone that a nuclear bomb would get dropped on their heads (or a comparably devastating event) if a vulnerability gets in, I highly doubt a company like #2 would still believe they're doing things optimally, for example.
So, to play Pandora, what if the net effect of uncovering all these unknown attack vectors is it actually empties the holsters of every national intelligence service around the world? Just an idea I have been playing with. Say it basically cleans up everything and everyone looking for exploits has to start from scratch except “scratch” is now a place where any useful piece of software has been fuzz tested, property tested and formally verified.
Assuming we survive the gap period where every country chucks what they still have at their worst enemies, I mean. I suppose we can always hit each other with animal bones.
TBH this is a pretty good way of looking at it. Yeah we're seeing an explosion of vulnerabilities being found right now, but that (hopefully) means those vulnerabilities are all being cleaned up and we're entering a more hardened era of software. Minus the software packages that are being intentionally put out as exploits, of course. Maybe some might say it's too optimistic and naive, but I think you have a good point.
Having casually read into a few recent incidents the vector has often been outside of software. A lot of mis-configurations or simply attacking the human in the chain. And nation states have basically unbounded resources for everything from bribes, insiders, and even standing up entire companies.
I think it will be an arms race in the future as well. Easier to fix known vulnerabilities automatically, but also easier to find new ones, and the occasional AI fuckup instead of the occasional human fuckup.
This assumes that there are no new exploits being generated.
We're seeing maintainers retreat from maintaining because the amount of AI slop being pushed at them is too much. How many are just going to hand over the maintenance burden to someone else, and how many of those new maintainers are going to be evil?
The essential problem is that our entire system of developing civilisation-critical software depends on the goodwill of a limited set of people to work for free and publish their work for everyone else to use. This was never sustainable, or even sensible, but because it was easy we based everything on it.
We need to solve the underlying problem: how to sustainably develop and maintain the software we need.
A large part of this is going to have to be: companies that use software to generate profits paying part of those profits towards the development and maintenance of that software. It just can't work any other way. How we do this is an open question that I have no answers for.
New software is being generated faster than it can be adequately tested. We are in the same place we’ve always been; except everything is moving much too fast.
Faults are injected into the code at a constant rate per developer. Then there's the intentional injections.
Auto-installing random software is the problem. It was a problem when our parents did it, why would it be a good idea for developers to do it?
What we are seeing so far come out of the AI agent era is reduced not increased code quality. The few advances are by far negated by all the slop that's thrown around and that's unlikely to change.
> any useful piece of software has been fuzz tested, property tested and formally verified.
That would require effort: human effort and extra token cost. Not going to happen; people would rather move fast and break things.
Will need those animal bones if all the industrial control systems get turned against us
Nuclear might be airgapped but what about water, power…?
I've been wanting a capability based security model for years. Argued about it here in fact. Capabilities are kind of an object pointer with associated permissions - like a unix file descriptor.
We should have:
- OS level capabilities. Launched programs get passed a capability token from the shell (or wherever you launched the program from). All syscalls take a capability as the first argument. So, "open path /foo" becomes open(cap, "/foo"). The capability could correspond to a fake filesystem, real branch of your filesystem, network filesystem or really anything. The program doesn't get to know what kind of sandbox it lives inside.
- Library / language capabilities. When I pull in some 3rd party library - like an npm module - that library should also be passed a capability too, either at import time or per callsite. It shouldn't have read/write access to all other bytes in my program's address space. It shouldn't have access to do anything on my computer as if it were me! The question is: "What is the blast radius of this code?" If the library you're using is malicious or vulnerable, we need to have sane defaults for how much damage can be caused. Calling lib::add(1, 2) shouldn't be able to result in a persistent compromise of my entire computer.
seL4 has fast, efficient OS-level capabilities. It's had them for years. They work great. They're fast, faster than Linux in many cases, and tremendously useful. They allow for transparent sandboxing, userland drivers, IPC, security improvements, and more. You can even run Linux as a process in seL4. I want an OS that has all the features of my Linux desktop but works like seL4.
Unfortunately, I don't think any programming language has the kind of language level capabilities I want. Rust is really close. We need a way to restrict a 3rd party crate from calling any unsafe code (including from untrusted dependencies). We need to fix the long standing soundness bugs in rust. And we need a capability based standard library. No more global open() / listen() / etc. Only openat(), and equivalents for all other parts of the OS.
If LLMs keep getting better, I'm going to get an LLM to build all this stuff in a few years if nobody else does it first. Security on modern desktop operating systems is a joke.
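To make the "blast radius" idea concrete, here is a hypothetical sketch (names invented, not any real crate's API) of what a language-level directory capability could look like in Rust:

```rust
use std::fs::File;
use std::io::Read;
use std::path::{Component, Path, PathBuf};

/// Hypothetical directory capability: the only authority it grants is
/// "read files beneath `root`".
pub struct DirCap {
    root: PathBuf,
}

impl DirCap {
    pub fn new(root: impl Into<PathBuf>) -> Self {
        DirCap { root: root.into() }
    }

    /// Open a path relative to the capability; absolute paths and ".."
    /// escapes are rejected rather than resolved.
    pub fn open(&self, rel: &str) -> std::io::Result<File> {
        let p = Path::new(rel);
        if p.is_absolute() || p.components().any(|c| c == Component::ParentDir) {
            return Err(std::io::Error::new(
                std::io::ErrorKind::PermissionDenied,
                "path escapes capability",
            ));
        }
        File::open(self.root.join(p))
    }
}

/// A third-party library function receives only the capability, never
/// ambient authority over the rest of the filesystem or the network.
pub fn read_config(cap: &DirCap) -> std::io::Result<String> {
    let mut s = String::new();
    cap.open("config.txt")?.read_to_string(&mut s)?;
    Ok(s)
}
```

Nothing here stops unsafe code or direct syscalls, which is why the point about restricting `unsafe` in third-party crates matters: the type system only confines code that can't step outside it.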
Capabilities have a lot of serious design problems which is why no mainstream language has them. Because this comes up so often on HN I wrote an essay explaining the issues here:
https://blog.plan99.net/why-not-capability-languages-a8e6cbd...
But as pointed out by others, this particular exploit wouldn't be stopped by capabilities. Nor would it be stopped by micro-kernels. The filesystem is a trusted entity on any OS design I'm familiar with as it's what holds the core metadata about what components have what permissions. If you can exploit the filesystem code, you can trivially obtain any permission. That the code runs outside of the CPU's supervisor mode means nothing.
The only techniques we have to stop bugs like this are garbage collection or use of something like Rust's affine type system. You could in principle write a kernel in a language like C#, Java or Kotlin and it would be immune to these sorts of bugs.
Note that capabilities would not help for those bugs we are discussing today.
Those exploits are in the kernel, and userspace is only making normal, allowed calls. Replacing global open()/listen()/etc. with capability-based versions would still allow one to trigger the same kernel bugs.
(Now, using a microkernel like seL4 where the kernel drivers are isolated _would_ help, but (1) that's independent from what userspace does, you can have a POSIX layer with seL4, and (2) that would mean way more context switches, so a performance drop.)
Have you heard of pledge in OpenBSD?
I prefer its model of declaring "this is what I want to use"; any calls to code outside that error out.
Most people will avoid sticking things in their mouth by default. They don't wait for the microbial cultures to come back positive to say no.
We need a cultural shift toward code hygiene, which isn't really any different from the norms most cultures develop around food. It's a mix of crude heuristics but the sense of "eeew" is keeping billions of people alive.
The billions of burgers served by fast food franchises with long histories of poisoning people would argue that delicious convenience overrides the hygiene instinct.
Which is to say: Hiding the sausage-making is a core aspect of what makes supply chains profitable.
> They don't wait for the microbial cultures to come back positive to say no.
They dont wait for the cultures to come back negative to say yes either. They just eat what they are served.
Most people start out as kids that do exactly that.
That means going back to disabling Javascript or only allowing widely used, well-maintained Javascript libraries.
Indeed. A year ago we floated the idea that it's better to write the code yourself, if you can, than to pull in third parties. But it was heresy at the time to consider LLMs filling the gaps.
Today I’m limiting the exposure to dependencies more than ever, and particularly for things that take few hundred lines to implement. It’s a paradigm shift, no less.
This replaces supply chain trust with the trust in the LLM and the provider you're using. Even if you exclude model devs from your threat model and are running the LLM yourself, it's still an uninterpretable black box that is trained on the web data which can be and is manipulated precisely to attack LLMs during training. So this approach still needs proper supply chain security.
There are a lot of libs you really can't justify implementing from scratch. Mathjs and node-mysql jump to mind. Poisoned chains build up from small dependencies, and clearly staying on top of your dependency chain should be a full time job - if anyone was willing to pay someone to do that full time.
I am feeling really uncomfortable sitting on a large React project.
Whether to do constant npm upgrades to keep the high-priority security issues count at zero (for what seems like about 15 minutes), or whether to hang back a bit to avoid catching the big one that everyone knows is coming real soon now.
Not enjoying npm at all.
Right, yeah, instead you can run ancient versions of everything and encounter a whole different class of risks
That's not at all what OP is talking about.
Realistically, most folks don't get paid to mitigate long-term risks by deviating from the common (and more efficient) practice.
Big companies have security roles on multiple levels, enforcing policies and not allowing devs to just install any package. That's not new but started maybe 15 years ago.
I am so happy to go through another round of kernel RPMs after the freak out today!
I have one server that has shell users, and I did the "yum update" and "reboot -f" dance last week.
Was that good enough? Oh no.
Here we go again!
Fortunately the issue isn’t fixed yet, so you don’t have to :)
Folks might have to start considering server-side technologies a bit more, or at least being mindful of build processes.
It's not just client-side npm though. Rust has the same problem.
Edit: and, ofc, what we're discussing here is Linux packages.
I am feasting on schadenfreude as the SWE industry grapples with the messes it made and uncertain employability in the near future; AI is not 30 years away like when I started.
All the arrogant asocial coder bros cast aside.
All the poorly reasoned shortcuts due to hustle culture and "git pull the world" engineering, the startups aura-farming on Twitter/social media about their cool sweatshop, labor-exploiting tech jobs...
Watching AI come around and the 2010s messes blow up in faces... chef's kiss.
Hey it's all web-scale though! Good job!
Considering the amount of money at stake, Software is a deeply, deeply unserious and careless industry, and a great many practitioners are also deeply unserious and careless people. Yet, somehow the world goes on, these companies siphon up money, and all harms they cause are externalized.
IT is (was?) one of the very few ways for us in third-world countries to pull ourselves out of poverty by our own bootstraps, since social mobility is quite limited if you lack the right connections. I'm pleased with you being so happy about it being taken away to make more money for billionaires.
My pet theory is that package managers will one day be seen like we see object-oriented programming today. As something that was once popular but that we've since grown out of. It's also a design flaw that I see in cargo/Rust. Having to import 3rd party packages with who-knows-what dependencies to do pretty much anything, from using async to parsing JSON, it's supply chain vulnerability baked into the language philosophy. npm is no better, but I'm mentioning Rust specifically because it's an otherwise security-conscious language.
The industry hasn't grown out of OOP. Go look at any major production codebase businesses rely on and it's full of objects and classes, including new codebases made very recently.
Package managers aren't going anywhere. Even languages that historically bet on large standard libraries have been giving up on that over time (e.g. Java's stdlib comes with XML support but not JSON).
Unfortunately, LLMs are also not cheap enough to just create whole new PL ecosystems from scratch. So we have to focus on the lowest hanging fruits here. That means making sandboxing and containers far more available and easy for developers. Nobody should run "npm install" outside a sandbox.
Rust is quite bad on this; having to rely on external crates for error handling or macros is even worse than having to pick an async runtime.
Yes, I mean crates like anyerror and syn.
But you can't expect the language std to supply you with every package under the sun.
Or disable algif_aead module as in https://news.ycombinator.com/item?id=47957409
I think what we have to start accepting, even security experts, is that our world is incredibly fragile. I think people really underestimate this. And I don't mean just the IT world; the entire world is built on many incredibly fragile balances. Security exploits will always exist, not just in software but in real life. Heck, someone managed to sneak into a security conference, and that guy was a random YouTuber. Granted, that was not a high-security thing, but it's just an example I had off the top of my head. Basically, it is really easy to circumvent security in most cases.
What I want to say with that is that fundamentally our world works because at least most people do not abuse shit. That is fundamentally how human society has always worked, and will likely continue to do so.
I remember there was a trend of some UK influencers using "ladder and a high-vis" tricks to enter places for a while, to show how rough physical security is [0]. I believe it's the YouTuber Max Fosh who managed to do it back to back at the International Security Expo, first in the UK [1] and then in the US [2], with the fake names 'Rob Banks' and 'Nick Everything'.
I've studied security culture before, and in most cases everything comes down to a sliding scale with security on one side and convenience/accessibility on the other: the more secure something is, the less accessible it is, and vice versa.
[0] https://www.youtube.com/watch?v=LTI0SeyhAPA
[1] https://www.youtube.com/watch?v=qM3imMiERdU
[2] https://www.youtube.com/watch?v=NmgLwxK8TvA
"Wait a week to install software" does not work. Just a few months ago a massive exploit hit the web, which was a timed attack which sat for more than a month before executing. If everyone starts waiting a week, their exploits will wait 2 weeks. Cyber criminals do not need to exploit you immediately, they just need to exploit you. (It also doesn't change a large range of vuln classes like typosquatting)
I think the author was suggesting "wait a week" as a one-time wait for fixes to be written and patches distributed for these specific prematurely-disclosed vulnerabilities, not an on-going suggestion for delaying all updates. But otherwise I agree with you.
Yep, that was my intent.
Yeah, Stuxnet was dormant for a year until execution.
I think you misunderstood the article. The proposal isn't "wait a week after software has been published before installing it". It's: for the next seven days starting now, just don't, because you probably don't have patches for these vulnerabilities, and even if you do, there are probably more scary vulnerabilities about to be discovered.
I think it's even more specific.
From TFA:
> Right now would be one of the best times for a supply chain attack via NPM to hit hard.
Given the local kernel root exploits, people pulling npm dependencies have an extra high chance of getting rooted. This includes test systems, build systems, the web server running node.js backend, etc. etc. etc.
This means that there is a significantly greater chance that whatever software you download (not necessarily npm-based) on the internet in these couple days has been unknowingly infected with backdoors, simply due to the fact that the vast majority of servers out there that use npm code have easily exploitable vulnerabilities.
well then let's wait a month or even two months. The point of the wait period is primarily to avoid the new installation of exploits, not the execution of already installed exploits.
A popular package has more exposure. When the artefact is published, the entire world can see it. Hopefully some people check the diff between versions. But without any delays then you could be hit by exploits nobody has seen yet.
Every dependency compromise that I can remember "in the past few months" was discovered in hours, if not minutes (litellm, axios, the Bitwarden CLI, Checkmarx Docker images, PyTorch Lightning, intercom/intercom-php). What's more, the discovery of these compromises did not at all depend on whether the compromises were actively used.
That's why I don't understand:
> If everyone starts waiting a week, their exploits will wait 2 weeks
This is why cooldowns have space for patches.
Alternatively, switch to an operating system like FreeBSD which doesn't take a YOLO approach to security. Security fixes don't just get tossed into the FreeBSD kernel without coordination; they go through the FreeBSD security team and we have binary updates (via FreeBSD Update, and via pkgbase for 15.0-RELEASE) published within a couple minutes of the patches hitting the src tree. (Roughly speaking, a few seconds for the "I've pushed the patches" message to go out on slack, 10-30 seconds for patches to be uploaded, and up to a minute for mirrors to sync).
I'm somewhat skeptical here, because I notified the FreeBSD security team of a vulnerability a few years ago, and I never got a response, even after a follow-up email a few weeks later. To be fair, my report was about a non-core component, and the vulnerability wouldn't be very easy to exploit, but Debian, OpenBSD, SUSE, and Gentoo all patched it within a week [0].
That being said, I'm not suggesting that anyone should judge an entire OS based off of how they handle a single minor report, since everything else that I've seen suggests that FreeBSD takes security reports quite seriously. But then you could also use this same argument for the Linux kernel bug, since it's pretty rare for a patch to be mismanaged like this there too :)
[0]: https://www.maxchernoff.ca/p/luatex-vulnerabilities#timeline
The Linux kernel doesn't differentiate between security bugs and other bugs, which is the main complaint here, I think. They have the same process.
So the issue is bigger than the mishandling of a single issue, it’s a fundamental process issue around security for one of the most impactful projects in the entire space.
If you are switching to a BSD for security reasons, why FreeBSD? Isn't OpenBSD the super secure one? Sorry, it's been a while since I've looked at those projects
The person suggesting FreeBSD is a FreeBSD developer (Colin Percival - actually according to Wikipedia FreeBSD engineering lead), would be weird for him to suggest openbsd.
I haven't switched to BSD but I've been thinking about it for a while. I just saw Vultr has both FreeBSD and OpenBSD!
FreeBSD didn't have userland ASLR until 2019 and, amongst other missing mitigations, still doesn't have kASLR. It's not a serious operating system for people who care about security. If you want FreeBSD and security, take Shawn Webb's HardenedBSD.
Last I read, ASLR is a good thing to have, but overall is usually not difficult to defeat. It's a speed bump, not a brick wall.
I don't think it's reasonable to say that an OS that lacks it isn't "serious" about security.
1 reply →
Is there anywhere that provides a good overview of the various OS protection technologies/approaches that exist and which OSes have implemented them?
So you have one example in hand and trash talked FreeBSD’s entire security team. Bold claims are fine but this is lazy.
FreeBSD isn’t secure, I suspect you’re sitting on a pile of 0 days for it?
3 replies →
FreeBSD is quite lax when it comes to security- especially defaults and configs.
The preference is for usability over security.
Famously: https://vez.mrsk.me/freebsd-defaults
I appreciate your work on the project, but I can't in good conscience suggest people switch while there are such bad defaults.
There's always a guy. It's great that your favorite distro is definitely safer. An order of magnitude fewer exploits will mean only a few thousand or so, I suppose. Ozymandias used Gentoo.
Calling FreeBSD "just a distro" is verging on insulting. It's an operating system.
Well, as they're a FreeBSD dev, I would be surprised if they pointed anyone in a different direction.
FreeBSD is not a distro. It's not even Linux; it's a completely different kernel and operating system that traces back to even before Linux. It's honestly closer to Darwin than it is to Linux; macOS is technically a BSD. (Not FreeBSD though.)
1 reply →
FreeBSD is not a distro
3 replies →
Been constructing a lot of infrastructure servers recently, almost all of them FreeBSD VMs running under bhyve on FreeBSD physical hosts. It's a very simple, clean, pleasant environment to work in. And they all run tarsnap. ;-)
Also funny they never show Debian in those tests/videos.
Debian is probably the best of all the Linuxes, but still suffers from split-brain: If patches are sent upstream first, Debian can't start digesting them until they're already public.
With FreeBSD there's never any question of "who should this get reported to".
4 replies →
How so?
3 replies →
Only to be thrown out of the window with a plain "curl | sh".
While I am sure FreeBSD is more secure than your average Linux distro, I sure hope they are using these new AI models to harden everything.
Has everyone here already forgotten about the WireGuard tire fire?
https://news.ycombinator.com/item?id=26507507
tl;dr: deeply insecure WireGuard implementation committed directly into the FreeBSD kernel with zero review.
Was this process problem fixed?
FreeBSD just slaps at the problem. OpenBSD solves it.
I kid, I kid...
There's already an okay solution to supply-chain attacks against dependency managers like npm, PyPI, and Cargo: set them to only install package versions that are more than a few days old. The recent high-profile attacks were all caught and rolled back within a day, so doing this would have let you safely avoid the attacks. It really should be the default behavior. Let self-selected beta testers and security scanner companies try out the newest versions of packages for a day before you try them. Instructions: https://cooldowns.dev/
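npm itself can approximate a cooldown with its `--before` config, which resolves each package to the newest version published before a given timestamp. A minimal sketch of wiring that to "nothing newer than 7 days" (GNU `date` syntax assumed; on macOS/BSD use `date -v-7d` instead, and the npm command is printed rather than executed here):

```shell
#!/bin/sh
# Compute a cutoff timestamp 7 days in the past...
CUTOFF="$(date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%SZ)"
# ...and pass it to npm, which will resolve every package to the
# newest version that existed at that time. Printed for illustration;
# setting `before=...` in .npmrc makes it the default for the project.
echo npm install --before="$CUTOFF"
```

The same idea works per-project or globally, at the cost of also delaying legitimate bug fixes by the same window.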
More a case for something like this from Show HN three months ago
https://github.com/artifact-keeper
An artifact manager. Only get what you approve. So you can get fast updates when needed and consistently known stable when you need it. Does need a little config override - easy work.
I had my own janky tooling for something like it. This is a good project.
Does that really scale well? Thanks to cascading dependencies, even a medium-sized project can import hundreds of dependencies. Can a developer really review them all to figure out whether they're safe, and whether a security issue was fixed in a newer version of the package?
3 replies →
Even better, only use company-vetted repos; everyone is forbidden to install directly from Internet repos.
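For npm specifically, that policy often comes down to a registry override in `.npmrc`, so installs can only come from the vetted internal mirror (hostname below is hypothetical):

```ini
# .npmrc — route every install through the company-vetted mirror
registry=https://npm.internal.example.com/
```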
This naturally doesn't work outside corporations.
So you get security updates late too? Many vulnerabilities are in the wild for years before being noticed, and patched.
Once noticed, that's where the exploit explosion erupts, excited exploiters everywhere, emboldened... enticed... excessively encouraged, by your delayed updates.
Presumably npm exempts security updates from its minimum release age, but even if it doesn't, I think the times where you need an important security update are relatively rare enough that handling the real cases on a case-by-case basis with whitelisting is fine. Outside of Next.js's React2Shell vulnerability last year, I'm not sure I've ever had a security update of a dependency written in a memory-safe language (ie. not C/C++) which I've installed through npm/PyPI/Cargo that patched a security vulnerability that had been making my application exploitable to others in practice. Almost all security vulnerabilities I've personally seen flagged through npm are about things I only use at build-time and are only relevant if a user can create and pass an arbitrary object to the function, which is rarely the case. Most security vulnerabilities I've encountered and fixed in working on web apps were things like XSS, SQL injections, and improperly enforced permissions, and they nearly always happened in the application's own code rather than inside a dependency.
1 reply →
At least with our Renovate config, all dependencies have a 7 day cooldown, but marked security updates are immediate.
Attackers can’t push a security update without going through the reporting process (e.g. Github CVE), so they can’t necessarily abuse that easily.
You could still have security bumps happening (like dependabot).
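For reference, a Renovate policy like the 7-day cooldown with exempted security updates mentioned above could look roughly like this (option names from recent Renovate versions; treat the exact placement as a sketch, not a verified config):

```json
{
  "extends": ["config:recommended"],
  "minimumReleaseAge": "7 days",
  "vulnerabilityAlerts": {
    "minimumReleaseAge": null
  }
}
```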
IMO, the most sustainable version is either the linux distros/bsd ports/homebrew models. You don't push new libraries to the public registry, instead you write a packaging script that gets reviewed for every new changes.
Another model is Perl's CPAN where you publish source files only.
Trust me, as someone who has contributed to such a package set, almost nobody is inspecting diffs between upstream versions when updating a package. Only the package definitions themselves are reviewed, but they are typically only version + hash bumps.
Reviewing upstream diffs for every package requires a lot of man-hours, and most packagers are volunteers. I guess LLMs might help catch some obvious cases.
For the newer players who have gotten into continuous integration and containerized builds, consider checking on your systems to be sure you're not pulling 'latest' across a bunch of packages with every build.
We set up our base containers with all the external dependencies already in them and then only update those explicitly when we decide it's time.
This means we might be a bit behind the bleeding edge, but we're also taking on a lot less risk with random supply chain vulns getting instant global distribution.
You'll also find your CI build times and flakey failures can be cut down massively by doing this.
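A hedged sketch of what the baked-in base container looks like in a Dockerfile (image tag and digest are placeholders, not real values):

```dockerfile
# Pin the base image by digest so "the same tag" can't silently change
# underneath you the way a floating `latest` does.
FROM node:20-bookworm@sha256:0000000000000000000000000000000000000000000000000000000000000000
WORKDIR /app
# Install dependencies from a committed lockfile only; `npm ci` fails
# outright if package-lock.json and package.json disagree.
COPY package.json package-lock.json ./
RUN npm ci
```

Updating the digest then becomes a deliberate, reviewable change rather than a side effect of every build.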
Additionally, use only internal repos.
Actively destructive opinion article. I could not begin to understand the rationale.
It takes 45 seconds to go check how old the copyfail and dirtyfrag vulnerabilities actually are. Which is longer than it takes to read TFA. Dirtyfrag may be relevant to systems from as far back as 2017.
It's not "new" software being affected. And actual old software is in a much worse state because we had a lot more time to find their problems.
OP is suggesting that a supply chain attack would be bad now, and to reduce that risk by not installing/updating NPM packages.
Can someone help me understand the copyfail thing and how it relates to NPM packages?
Edit: I think I understand. copyfail is a kernel bug that lets a malicious npm package get root access on your Linux server, right?
So now, while there are unpatched servers, is when it would be the perfect time for attackers to target NPM packages.
And the advice isn't just "update your kernel" because we are still finding new related issues?
The patches for the latest vulnerabilities aren’t even out yet. So it would be a real bad time for a new supply chain attack since it would get root on pretty much every system.
NPM supply-chain attacks spread really quickly.
If a popular NPM package was compromised and included a copy.fail exploit, it would make lots of systems vulnerable to root privilege escalation.
> And the advice isn't just "update your kernel" because we are still finding new related issues?
The advice isn't just "update your kernel" because there is no update. The latest vulnerability (the one discovered after copy.fail) still has no fix.
npm can run on linux.
This applies to much more than just software, in fact it applies to almost everything.
I don't remember where I read it, but it basically boils down to need vs want.
I've used that rule for deciding between a new car or used. A fancy vacuum or basic.
A shiny new gadget.
Bringing new things into the tech stack.
Picking a new tech stack.
At some point, some people will rebuild an entire stack (all layers, from OS to applications) with proof carrying code upgrades. Proof-code co-design and co-construction is the only way to execute code that you can trust.
I'm holding off on upgrading to Ubuntu 26.04 LTS until we have a few months of experience with the new release. Canonical just had a huge DDOS attack, and there might have been other attacks hidden in all that traffic.
Literally implemented PR guards today to prevent the team merging any dependencies that didn’t have explicit versions pinned (and that matched the resolution in the lock file).
People lamented semver not being trustable but that ship sailed a long time ago, and supply chain attacks are going to get worse before they get better.
Our team is pretty minimal when it comes to enforced hooks (everyone has their own workflow) but no one could come up with an objection to this one.
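A guard like that is mechanically simple. As a sketch (not the commenter's actual hook), the core check just rejects any version spec that isn't an exact x.y.z pin:

```python
import re

# Exact pins look like "1.2.3", optionally with a pre-release suffix;
# anything with ^, ~, ranges, or wildcards fails the match.
EXACT = re.compile(r"\d+\.\d+\.\d+(?:-[\w.]+)?")

def unpinned(deps: dict) -> dict:
    """Return the dependency entries whose version spec is not an exact pin."""
    return {name: spec for name, spec in deps.items()
            if not EXACT.fullmatch(spec)}

deps = {"left-pad": "1.3.0", "lodash": "^4.17.21", "react": ">=18.2.0"}
print(unpinned(deps))  # → {'lodash': '^4.17.21', 'react': '>=18.2.0'}
```

A real guard would also cross-check each pin against the resolution recorded in the lockfile, as described above.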
What’s interesting here is that the exploit chain itself isn’t especially novel anymore — page cache corruption has become a recurring pattern (Dirty Pipe, Copy Fail, Dirty Frag). The worrying part is how quickly public patches are now being reverse-engineered into weaponized exploits.
The old “quiet patch before disclosure” model may simply not work anymore in the LLM era.
This makes me wonder whether I have been hacked. For a few weeks now, both my main MBP and iPhone have been showing unexpected hangs of 1-30 seconds. I can't find out what's causing it: not memory pressure, not CPU load.
I'm worried that the sluggishness appeared at about the same time on both devices.
For ios, rebooting your phone is extremely effective at removing exploits. The boot chain attestation stuff can verify the system is in a known state. If you are ultra paranoid you could enable lockdown mode which preemptively disables the entrypoints for exploits. So far I don't believe there has been any exploit which works with lockdown mode enabled.
If you are already exploited though, I doubt it helps
2 replies →
the lottery of either getting a new supply-chain attack or the fixes from Mythos with every single update
It really pisses me off that responsible disclosure timelines are being ignored.
In this case, no insiders broke the embargo. It was reverse engineered from the patch by an unrelated third party and a proof of concept immediately came out of it. At that point, it's kinda fair game.
I assume that while Mythos may be really good at finding vulnerabilities, lighter models may still do a pretty good job of explaining/exploiting the vulnerability if given the patch which fixes it.
if you don't already consider responsible disclosure a quaint idea, you may want to start warming to that view
the idea that it exists at all is more or less a gentleman's agreement in the engineering world anyway
Less a gentleman's agreement and more of a question of economic incentives going away. Companies aren't paying out bounties at the rates they used to (possibly because they've realized there's little financial incentive to do so for most findings) and simultaneously they're being inundated with AI slop findings that somehow have to still be triaged and evaluated.
[flagged]
3 replies →
The dirty frag repo says:
> Because the responsible disclosure schedule and the embargo have been broken, no patch exists for any distribution.
I had to do a double take reading that. It's written as if something happened that prevented them from following a schedule, but seemingly they chose to release the information. I hope I'm missing something where it was forcibly disclosed elsewhere.
Edit: Moments later I refreshed the homepage and saw the announcement. They do claim to have consulted with maintainers
> Due to external factors, the embargo has been broken, so no patch exists for any distribution.
Very odd wording. I assume there’s an interesting/upsetting story here that will come out soon.
1 reply →
If the fix commit is public, so is the issue being fixed.
With copy.fail, the security patch wasn't flagged as such, so there wasn't a lot of attention on the issue and it remained dormant in most kernels for a while.
I don't doubt that the patch reversal + exploit PoC made by a third party is the result of people figuring out how patches work in open source projects like these.
Anyone with access to a good enough LLM can scour supposedly minor bug fixes that might hide a critical vulnerability, rather than doing it all manually. The LLM will probably throw up tons of false positives and miss half the issues, but you only need one or two successes.
Maybe you should install new kernels at least though.
I wonder whether there is any tool that can prevent npm from downloading any package that has been published in the last month. While I miss out on possible fixes, this would prevent downloading some 3rd level dep that takes over my machine.
NPM seems to have introduced the flag `minimumReleaseAge` for this exact purpose. However, even though there are many recent references to it[0][1][2], I don't see it anywhere in the npm documentation.
[0] https://socket.dev/blog/npm-introduces-minimumreleaseage-and...
pnpm has this, I think others may also have something similar.
https://pnpm.io/settings#minimumreleaseage
pnpm has added a new setting, minimumReleaseAge, enabled by default, just to try to mitigate these issues.
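Going by the pnpm settings docs linked above, the knob looks roughly like this (the value is in minutes; exact option names and defaults may vary by pnpm version, so treat this as a sketch):

```yaml
# pnpm-workspace.yaml
# Refuse to resolve versions published less than 7 days ago.
minimumReleaseAge: 10080
# Optionally exempt packages you trust to need same-day updates:
minimumReleaseAgeExclude:
  - "@your-org/*"
```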
Alternatively, consider using Qubes OS, which isolates untrusted software using strong hardware virtualization. My daily driver, can't recommend it enough. Examples of usage patterns: https://doc.qubes-os.org/en/r4.3/user/how-to-guides/how-to-o...
Remember the whole discussion when UNIX was supposed to not need anti-virus and talking down PCs?
Behaviours matter more than OS security primitives.
The whole (mistaken) belief that Linux and macOS didn't require AV was based on the execute bit being present, something Microsoft addressed back in XP by marking downloaded files as such and preventing them from being opened trivially.
If you have code execution, you can attack the OS.
Indeed, when one installs dependencies all over the Internet, or even better, key projects use "curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh" as default suggestion on how to install them, attackers have the work done for them.
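When you do have to run an installer script, a less trusting pattern is download, verify against a checksum obtained out-of-band, then execute. A runnable local sketch of the mechanism (it generates its own file and hash, since in real use the expected hash must come from the project through a separate channel):

```shell
#!/bin/sh
# Stand-in for a downloaded installer script.
printf 'echo install ok\n' > install.sh
# In real use this value is published by the project, not computed locally.
EXPECTED="$(sha256sum install.sh | cut -d' ' -f1)"
# Verify first; only run the script if the hash matches.
echo "$EXPECTED  install.sh" | sha256sum -c - >/dev/null && sh install.sh
```

The point is the ordering: the script hits disk and gets checked before any of it executes, unlike `curl | sh` where you run whatever the server happened to send.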
I got rid of half of my VSCode extensions a couple days ago, it's too risky.
Those things scare the crap out of me…
Even worse are the “extension packs” that combine some normal things and one wonky thing nobody’s ever heard of…
The post is about Linux vulnerabilities, but given the recent supply chain attacks, I'd be especially careful with Homebrew: https://x.com/i/status/2052106143271354859
Often convenience and security are at odds, but `export HOMEBREW_NO_AUTO_UPDATE=1` is more convenient and more secure.
Problem here is Brew does things in an anti-Unix way by default, the auto-updating of packages being the prominent example.
I personally switched away from macOS with this being one of the reasons, after having realized brew will eventually compromise my system with their antics.
To mitigate supply chain attacks like this, I've taken to specifying exact versions in my Rust Cargo.toml, and when importing new crates, I select the previous-to-latest version. Is this a reasonable mitigation? It bugs me that Swift deprecates the concept of specifying exact versions; it actively pushes you towards semver ranges, which leave the door open to this.
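For what it's worth, exact pins in Cargo look like this (crate versions below are illustrative). Note that for applications a committed Cargo.lock already freezes transitive versions, so `=` mainly stops `cargo update` from moving direct dependencies:

```toml
[dependencies]
# "=" means exactly this version; a bare "1.0" would accept any
# semver-compatible 1.x release.
serde = "=1.0.210"
rand = "=0.8.5"
```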
Yes, and, for non-personal machines or anything connected to the internet: now is a great time to get good at rolling out patches and new releases quickly.
The proof of concept code is out before patches are available for any distro.
The scary part is how many teams still have builds implicitly depending on “whatever was latest 5 minutes ago”.
Containerization improved reproducibility in some ways, but in practice a lot of CI pipelines still behave like live dependency roulette.
"If it ain't broke, don't fix it" is its own area of risk that people often ignore
Except that a lot of software likely is already broken in fun ways we currently don't know about. That is what makes it such a "fun" challenge. Supply chain attacks are one thing, but CVEs in already released software allowing other attackers are another.
As always, I know most of us work in IT, but things rarely are actually binary.
You don't need a kernel LPE to root a Linux developer machine.
Just alias sudo to sudo-but-also-keep-password-and-execute-a-payload in ~/.bashrc and wait up to 24 hours. Maybe also simulate some breakage by intercepting other commands and force the user to run 'sudo systemctl' or something sooner rather than later.
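The mechanism is trivial to sketch. The demo below wraps `ls` instead of `sudo` and drops a harmless marker so it's safe to run; a real attack would define `sudo()` in `~/.bashrc`, capture the typed password, then hand off to the real `/usr/bin/sudo` so nothing looks amiss:

```shell
#!/bin/sh
# Harmless demonstration of shell-function command interception.
rm -f /tmp/intercept-demo.log
ls() {
    # attacker-controlled code runs first...
    echo "payload ran" >> /tmp/intercept-demo.log
    # ...then the real command, so the user sees normal behavior
    command ls "$@"
}
ls / > /dev/null
cat /tmp/intercept-demo.log   # → payload ran
```

This is why a user-level compromise of an account on the sudoers list is, in practice, already a root compromise.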
This, this is something I don't understand. There are a billion ways to gain root once you control a user that regularly uses sudo.
This is only scary for rootless containers, as it skips an isolation layer. But we've started shipping distroless containers, which are not vulnerable to this since they lack privilege escalation commands such as su or sudo.
never trust software to begin with, sandbox everything you can and don't run it on your machine to begin with if possible.
I doubt your “distroless” container is any safer against this vulnerability.
Infecting sudo just makes for a quick demo.
If your container has different processes at different user ids, the exploit would still be effective.
It would likely also be able to “modify” read only files mapped from the host.
I agree that de facto the biggest security flaw in Linux is "okay I'm tired of getting interrupted all day assisting you, I know you're competent, I'll put you on the sudoers list."
But there are a lot of academic and research institutions that actually do have good Linux user management. I worked at a pediatric hospital, and the RHEL HPC admins did not mess around in terms of who was allowed to access which patients' data. As someone who was not an admin, it was a huge pain and it should have been. So this bug has pretty serious implications, seems like anyone at that hospital can abscond with a lot of deidentified data. [research HPC not as sensitive as the clinical stuff, which I think was all Windows Server]
1 reply →
> this, this is something I don't understand there are a billion ways to gain root once you control the user that regulary uses sudo.
I won't enter into all the details but... It's totally possible to not have the sudo command (or similar) on a system at all and to have su with the setuid bit off.
On my main desktop there's no sudo command there are zero binaries with the setuid bit set.
The only way to get root involves an "out-of-band" access, from another computer, that is not on the regular network [1].
This setup has worked for me for years. And years. And I very rarely need to be root on my desktop. When I do, I just use my out-of-band connection (from a tiny laptop whose only purpose is to perform root operations on my desktop).
For example today: I logged in as root and blocked the three modules with the "dirty page" mitigation suggested by the person who reported the exploit.
You're not faking sudo with a mocking-bird on my machine. You're not using "su" from a regular user account. No userns either (no "insmod", no nothing).
Note that it's still possible to have several non-root users logged in at once: but from one user account, you cannot log in as another. You can however switch to TTY2, TTY3, etc. and log in as another user. And the whole XKCD about "get local account, get everything of importance" ain't valid in my case either.
I'm not saying it's perfect but it's not as simple as "get a local shell, wait until user enters 'sudo', get root". No sudo, no su.
It's brutally simple.
And, best of all, it's a fully usable desktop: I've been using such a setup for years (I've also got servers, including at home, with Proxmox and VMs etc., but that's another topic).
4 replies →
right, a bigger issue is multitenant systems, which are common in academia (I manage several such systems for various experiments). Now, we generally trust the users to not be malicious, but most don't get sudo, because physicists tend to think they know what they're doing when they don't really (except for me, of course).
Something that concerns me more: I use things like gemini-cli or claude-cli via their own non-sudo accounts with no SSH keys or anything on my laptop, but an LPE means they can find a way around such restrictions if they feel like it (and they might).
Perhaps, but it makes a huge difference if you're running the vulnerable code in a container or as a different user.
> Copy Fail 2: Electric Boogaloo
What are people thinking with these meme style vulnerability names? It's going to be hard to pitch "we need to push back the timeline on this new infrastructure deploy while we mitigate Copy Fail 2: Electric Boogaloo".
"we need to push back the timeline on this new infrastructure deploy while we mitigate Copy Fail 2". Problem solved
It seems like this round of vulns is going to be significant. What is the right response?
Personally I'm choosing to keep my home server behind a VPN and to enable Lockdown Mode on my phone and laptop for a while until the dust settles. As well as just limiting the software installed to trusted projects only.
VM isolation would still be safe even with these kernel exploits.
Maybe the new software should not have any errors. I know, I have higher expectations than the average commercial software customer.
Of course, why didn't anyone think of that ? I bet if someone started to ship software that has no errors they'll make a huge amount of money, especially from all the people that are security-minded !
Please grow a brain.
I still can’t believe people are ok with software updates every day. Looking at you Claude code
It's a two-edged sword. You're damned if you do and damned if you don't update.
I've been doing a lot of that lately
I dislike FUD like this :/
I do a bit wonder what happens as standard practice becomes to lag more and more and more. Who is there left that's looking, that'd finding out?
I think there’s already a big market of supply chain security companies that are proactively scanning dependencies for this sort of thing.
They’re always racing to be the first one to write an article about a case.
you raise a really good point. if everyone is doing this at exactly the same lag then it will eventually start hitting groups in sync at the exact same time
100% doing this, sadly
Fun fact: You still can't build the vllm container with updated dependencies since llmlite got pwned. Either due to regression bugs, or due to unresolvable transitive dependencies in the dependency tree. There is just too much slopcode down the line, and too many dependencies relying on pinned, outdated (and unpublished) dependencies.
I switched to llama.cpp because of that.
To me it feels more and more that the slopcode world is the opposite philosophy of reproducible builds. It's like the anti methodology of how to work in that regard.
Before, everyone was publishing breaking changes in subminor packages because nobody adhered to any API versioning system standards. Now it's every commit that can break things. That is not an improvement.
Write-only code is such a bad, bad idea. No one is reviewing 20k-LOC PRs with 15 new dependencies in an afternoon. Sorry, it's just not happening, I don't care how many years you've been a software engineer. Yet that's the new thing and how we're all supposed to work, or else we're Luddites.
I'm personally waiting to be downgraded to simply being called "lazy".
When I see pages of obviously generated prose being submitted as any kind of documentation, my eyes just glaze over. I feel so guilty sharing similar stuff too, though to my credit, at least I always lead with a self-written TLDR, the slop is just for reference. But it's so bad, like genuinely distressing tier. I don't want to read all that junk, and more and more gets produced.
Prose type docs have always been my Achilles heel, and this is like the worst possible evolution of that.
For a brief period in the past few weeks, they somehow managed to make a change to ChatGPT Thinking that made it succinct. The tone was super fact-oriented too. It was honestly like waking up from a fever dream.
slopcode is a pejorative that means nothing to me. if you have an actual criticism to make, then do it
[dead]
[flagged]
[dead]
[flagged]
? This is related to a vulnerability that was introduced to the Linux kernel in 2017.
What?
Fedora upgrades have usually been great, but I jumped the gun on Fedora 44. Sound completely dead with no Pipewire service available. ALSA not responding. Firefox dies immediately if I open a new tab or right-click anywhere on the browser itself (including nightly builds). QEMU refuses to load. Maybe something got completely f'd in the upgrade process.. I never had an issue before, having upgraded from Fedora 38 all the way to 43. I am too tired to investigate it all.
I know this is unrelated to the article, but related to the title.
I have had none of those issues on Fedora 44, FWIW.
ditto. my upgrade from 43 to 44 went very smoothly
If this is still the same install that you've been using since 38, you might find a clean install resolves some issues (whether or not your upgrade got botched). Also helps me get rid of software I installed that I don't use anymore, which I feel is relevant to this article. But part of why I love Silverblue so much is I don't have to worry about upgrades getting botched and fwiw as well, I haven't noticed any of those bugs on 44 across several very different machines.
I had a day 1 crashloop with KWin on the 2nd desktop, but on day 2 some package update fixed it. Honestly it isn't the first time Fedora upgrades have messed something up for me either but I do think it's more stable than the average Ubuntu release, not that I've upgraded ubuntu in like 5 yrs.
Fedora 44 here, no issues.
Don't install anything, use an LLM to write everything from scratch. It may have bugs, but no one will know how to exploit them, especially when closed source.
Code is cheap and is becoming cheaper by the day. We need new paradigms.
So no external libraries for anything? Billions of lines of code that duplicate the same thing n-times across an organization?
And the benefit is the obscurity of "no one will know how to exploit them"?
No, thanks.
LLMs have been used to scan binary blobs for exploits already. What would be more effective is a system designed with multiple layers of security so any one exploit is largely useless.
Next: the back doors are written by the LLM!