Comment by tclancy

11 hours ago

So, to play Pandora, what if the net effect of uncovering all these unknown attack vectors is it actually empties the holsters of every national intelligence service around the world? Just an idea I have been playing with. Say it basically cleans up everything and everyone looking for exploits has to start from scratch except “scratch” is now a place where any useful piece of software has been fuzz tested, property tested and formally verified.

Assuming we survive the gap period where every country chucks what they still have at their worst enemies, I mean. I suppose we can always hit each other with animal bones.
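For the curious, the "property tested" part mentioned above can be sketched in a few lines of stdlib Python (a hand-rolled toy, not a real framework like Hypothesis or a fuzzer; the sort-under-test is just a stand-in example):

```python
import random
from collections import Counter

def is_sorted(xs):
    """Property: every element is <= its successor."""
    return all(a <= b for a, b in zip(xs, xs[1:]))

def check_sort_properties(sort_fn, trials=500):
    """Feed random inputs to sort_fn and check two properties:
    the output is ordered, and it is a permutation of the input."""
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 50))]
        ys = sort_fn(xs)
        assert is_sorted(ys), f"output not ordered: {ys}"
        assert Counter(ys) == Counter(xs), "output is not a permutation of input"
    return True

check_sort_properties(sorted)  # the built-in passes
```

The point is that you state invariants once and throw thousands of random inputs at them, instead of hand-picking a few examples.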

TBH this is a pretty good way of looking at it. Yeah, we're seeing an explosion of vulnerabilities being found right now, but that (hopefully) means those vulnerabilities are all being cleaned up and we're entering a more hardened era of software. Minus the software packages that are being intentionally put out as exploits, of course. Some might say it's too optimistic and naive, but I think you have a good point.

  • I agree with the prediction but not the timing. We won't enter a more hardened era of software until after a long period of security vulnerabilities.

    Rivers caught on fire for a hundred years before the EPA was formed.

  • New code will also use these tools from the get-go, hopefully vastly reducing the vulnerabilities that make it to prod to begin with.

    • The future may be distributed quite unevenly here, as they say, with a divergence between a small amount of "responsible" code in systems which leverage AI defensively, and a larger amount of vibe-coded / prompt-engineered code in systems which don't go through the extra trouble, and in fact create additional risk by cutting corners on human review. I personally know a lot of people using AI to create software faster, but none of them have created special security harnesses a la Mozilla (https://arstechnica.com/information-technology/2026/05/mozil...).

  • > we're entering a more hardened era of software

    This is one force that operates. Another is that, in an effort to avoid depending on such a big attack surface, people are increasingly rolling their own code (with or without AI help) where they might previously have turned to an open source library.

    I think the effect will generally be an increase in vulnerabilities, since the hand-rolled code hasn't had the same amount of time soaking in the real world as the equivalent OS library; there's no reason to assume the average author would magically create fewer bugs than the original OS library authors initially did. But the vulnerabilities will have much narrower scope: If you successfully exploit an OS library, you can hack a large fraction of all the code that uses it, while if you successfully exploit FooCorp's hand-rolled implementation, you can only hack FooCorp. This changes the economics of finding vulnerabilities to exploit -- though less now than in the past, when you couldn't just point an LLM at your target and tell it "plz hack".

    • While I agree, it also changes the mathematics of it: if a bad actor wants to hack me specifically, they now have to write custom code that targets my software after figuring out what it _is_. This swaps the asymmetry around: instead of one bad actor writing an exploit for all the world (and those exploits being even harder to find), you have to hate me specifically.

      Admittedly, not hard to do, but it could save some other folks.

    • Typically when hand-rolling code you implement only what you require for your use case, while a library will be more general purpose. As a consequence of doing more, the library has more code and more bugs.

      Also, even seemingly trivial libraries can have bugs. The infamous leftpad library didn't handle certain edge cases properly.

      For supply chain security and bug count, I'll take a focused custom implementation of specific features over a library full of generalized functionality.
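To illustrate how much edge-case surface hides in even a "trivial" function, here's a minimal hand-rolled left-pad with the edge cases handled explicitly (a hypothetical sketch, not the actual npm left-pad code):

```python
def left_pad(s, length, ch=" "):
    """Pad s on the left with ch until it is at least `length` long."""
    s = str(s)                    # edge case: coerce non-string input
    if len(ch) != 1:              # edge case: multi-char or empty pad
        raise ValueError("pad must be a single character")
    if length <= len(s):          # edge case: target no longer than input
        return s
    return ch * (length - len(s)) + s

left_pad("5", 3, "0")   # "005"
left_pad("hello", 3)    # "hello" -- no truncation
left_pad(42, 4)         # "  42"
```

Three of the five lines exist only to handle inputs the happy path never sees, which is exactly where trivial libraries tend to slip up.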

      3 replies →

    • >there's no reason to assume the average author would magically create fewer bugs than the original OS library authors initially did

      Have you read this old code? It's terrible, often written in C, with no care at all for security. AI is much, much better at writing code.

      5 replies →

  • Are you intentionally avoiding saying ‘thanks to LLMs’, or is it implicit? All these recent mega bugs surface through lots of fuzzing and agentic bashing, right?

    • Thank you for reminding us all that you AI bros are still the most obnoxious people there are.

I think it will be an arms race in the future as well. Easier to fix known vulnerabilities automatically, but also easier to find new ones, plus the occasional AI fuckup instead of the occasional human fuckup.

  • Yeah.

    Right now it kinda feels to me like "Open Source" is the Russian army, relying on its sheer numbers and its huge quantity of equipment, much of which is decades old.

    Meanwhile attackers and bug hunters are like the Ukrainians, using new, inexpensive, and surprisingly powerful tools that the Open Source community has never seen before, and against which it has very little defence.

    The attackers with cheap drones or LLMs are completely overwhelming the old school, who perhaps didn't notice how quickly the world has changed around them, or did notice but cannot do anything about it quickly enough.

    • Well this argument was certainly inventive. What a weird impression to have about these things.

      Who exactly is the innocent little Ukraine supposed to be that the big bad open source is supposed to be attacking to, what? take their land and make the OSS leader look powerful and successful at achieving goals to distract from their fundamental awfulness? And who is the North Korean cannon fodder purchased by OSS while we're at it?

      Yeah it's just like that, practically the same situation. The authors of gnu cp and ls can't wait to get, idk, something apparently, out of the war they started when they attacked, idk, someone apparently.

Having casually read into a few recent incidents, the vector has often been outside the software itself. A lot of misconfigurations, or simply attacking the human in the chain. And nation states have basically unbounded resources for everything from bribes and insiders to standing up entire companies.

New software is being generated faster than it can be adequately tested. We are in the same place we’ve always been; except everything is moving much too fast.

  • This is exactly the feeling I have. First: excessive growth of dependencies fueled by free components.

    * with internet access to FOSS via sourceforge and github we got an abundance of building blocks

    * with central repositories like CPAN, npm, pip, cargo and docker those building blocks became trivially easy to use

    Then LLMs and agents added velocity to building apps and producing yet more components, feeding back into the dependency chain. Worse: new code with unattributed reuse of questionable patterns found in unknowable versions of existing libraries. That is, implicit dependencies on fragments of a multitude of packages.

    This may all end well ultimately, but we're definitely in for a bumpy ride.

This assumes that there are no new exploits being generated.

We're seeing maintainers retreat from maintaining because the amount of AI slop being pushed at them is too much. How many are just going to hand over the maintenance burden to someone else, and how many of those new maintainers are going to be evil?

The essential problem is that our entire system of developing civilisation-critical software depends on the goodwill of a limited set of people to work for free and publish their work for everyone else to use. This was never sustainable, or even sensible, but because it was easy we based everything on it.

We need to solve the underlying problem: how to sustainably develop and maintain the software we need.

A large part of this is going to have to be: companies that use software to generate profits paying part of those profits towards the development and maintenance of that software. It just can't work any other way. How we do this is an open question that I have no answers for.

Faults are injected into the code at a constant rate per developer. Then there's the intentional injections.

Auto-installing random software is the problem. It was a problem when our parents did it, why would it be a good idea for developers to do it?

  • This is related to a massive annoyance of mine: when I run a piece of software and the system is missing a required dependency, I want the software to *tell me* that dependency is missing so I can make a decision about proceeding or not. Instead it seems that far too often software authors will try and be “clever” by silently installing a bunch of dependencies, either in some directory path specific to the software, or even worse globally.

    I run a distro that often causes software like this to break because their silent automatic installation typically makes assumptions about Linux systems which don’t apply to mine. However I fear for the many users of most typical distros (and other OS’ in general as it’s not just a Linux-only issue) who are subject to having all sorts of stuff foisted onto their system with little to no opportunity to easily decide what is being heaped upon them.
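The behaviour being asked for can be sketched in a few lines: probe for each external tool and fail loudly with a clear message, rather than silently installing anything (the tool names here are hypothetical placeholders):

```python
import shutil

# Hypothetical external tools this program needs on PATH.
REQUIRED_TOOLS = ["ffmpeg", "convert"]

def missing_dependencies(tools):
    """Return the subset of `tools` not found on the user's PATH."""
    return [t for t in tools if shutil.which(t) is None]

def check_or_explain(tools):
    """Tell the user what's missing; never install anything on their behalf."""
    missing = missing_dependencies(tools)
    if missing:
        raise SystemExit(
            "Missing required dependencies: " + ", ".join(missing)
            + "\nInstall them with your system package manager and re-run."
        )

if __name__ == "__main__":
    check_or_explain(REQUIRED_TOOLS)
```

The user keeps the decision: they see exactly what's missing and install it however their distro expects, instead of having binaries foisted onto the system.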

    • Ruby gems and CPAN have build scripts that rebuild stuff on the user's device (and warn you if they can't find a dependency). But I believe it was one of Python's tools that started the trend of downloading binaries instead of building them. Or was it NPM?

We'll need those animal bones if all the industrial control systems get turned against us.

Nuclear might be airgapped but what about water, power…?

What we are seeing so far come out of the AI agent era is reduced, not increased, code quality. The few advances are far outweighed by all the slop that's thrown around, and that's unlikely to change.

> any useful piece of software has been fuzz tested, property tested and formally verified.

That would require effort. Human effort and extra token cost. Not going to happen; people would rather move fast and break things.

  • Isn't blaming AI for that similar to blaming C for buffer overflows?

    More people are producing more code because of easier tools. Most code is bad. But that's not the tools' fault.

    And in the end it is a problem of processes and culture.

    • We are not in disagreement here. I'm not blaming AI, I'm blaming the culture around its use.