
Comment by ragall

19 hours ago

> How do you know?

We know because we can see the effect on the average rate of vulnerability discovery and exploitation, and it's definitely going up very fast. Until recently, vulnerabilities were relatively hard to find, and finding them was done by a very restricted group of people worldwide, which made them quite valuable. Not any more.

That's correlation, not causation.

It could equally be argued that the AI slop that's being produced makes for a lot more vulnerabilities being shipped. The bigger target makes for the easier discovery.
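The confounding worry can be made concrete with a toy simulation (all numbers are hypothetical, chosen purely for illustration): a single confounder, here the total volume of shipped code, drives both the amount of AI-assisted code and the vulnerability count, producing a strong correlation between two series that never influence each other.

```python
# Toy simulation (hypothetical numbers): a confounder produces a strong
# correlation between two series with no direct causal link between them.
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Confounder: total volume of shipped code grows each "year".
shipped = [100 * 1.3 ** t for t in range(20)]

# Both series depend only on shipped volume, not on each other.
ai_assisted = [0.10 * s + random.gauss(0, 5) for s in shipped]
vulns = [0.02 * s + random.gauss(0, 2) for s in shipped]

r = corr(ai_assisted, vulns)
print(f"correlation: {r:.2f}")  # strong, despite no direct causal link
```

The two noisy series track each other closely only because both ride the same growth trend; neither argument in the thread can be settled by the correlation alone.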

  • But don't we know that some of the vulnerabilities being discovered predate AI coding?

    • Certainly, and some discoveries have been attributed to AI (I was reading that the Mozilla Firefox team were praising mythos recently).

      But that's not accounting for all of the discoveries, not at all.

      I've also seen the npm people talking about the surge in AI code overwhelming their ability to properly review what's being distributed, with a large number of vulnerabilities attributed to that.

  • It likely varies enormously between projects. Linux remains extremely low in slop, and the vulnerabilities being fixed there are quite old, so it's improving. Many vibe-coded projects are very sloppy and are adding a lot of vulnerabilities.

    Total number of vulnerabilities likely goes up over time weighting all projects equally, but goes down over time weighting by usage.

    • Is there evidence serious vulnerabilities are the result of vibe coding already? I haven’t seen any so if you have some references, please share.


    • I mean - you're spot on - which is why I'd be more inclined to ask for actual metrics rather than feels/vibes, and I'd be very clear that the information I was basing my thinking on has enormous pitfalls.

      This is the basis for "correlation points to possibly fertile grounds for an investigation"

  • > That's correlation, not causation.

    Pragmatically, correlation *is* evidence of causation in favour of the best explanation, until somebody finds a better explanation.

    > It could equally be argued that the AI slop that's being produced makes for a lot more vulnerabilities being shipped.

    This is also true, and does not exclude the other, because for the moment the vast majority of production software in the world (and therefore the bulk of enticing targets) was written before AI. If LLM-generated software becomes prevalent in commercial settings, then LLM-generated code will eventually make up the majority of targets.

    • > Pragmatically, correlation is evidence of causation in favour of the best explanation, until somebody finds a better explanation.

      Uh, no.

      Correlation is only ever one thing - cause for investigation.

      Everything based on correlation alone is speculation.

      You can speculate all you like, I have zero issue with that, but that's best prefaced with "I guess"

      edit: Science captures this perfectly, and people misunderstand it so fundamentally that there is a massive debate in which people who think they are "pro science" argue it so badly with theists that they hoist themselves with their own petard.

      Science uses the term "theory" because all of our understanding is based on available data, and science's biggest contribution to humanity is that it accepts that the current/leading THEORY can and will be retracted if compelling data is discovered that demonstrates a falsehood.

      So, because I know this is coming: yes, science is willing to accept some correlation, BUT it's labelled "theory" or "statistically significant" because science is clear that if other data arises, that idea will need to be revisited.
