Comment by jpease
16 hours ago
Just to be contrarian, perhaps some measure of risk is reduced by the scale of one.
Identifying a vulnerability that can be exploited against many thousands or millions of targets is perhaps more attractive than a single one of individually low value.
This of course would assume that vulnerabilities are in fact unique (which is admittedly questionable).
To take this further, don't LLMs also lower the "barrier to attention"; i.e., if it only takes Claude's eyeballs on the software, and not a hacker's, won't people find vulnerabilities in custom software for one as well?
Besides that, one could easily imagine that software created for similar purposes ("make me a file editor") by the same tool or handful thereof (Claude and a very small "etc" for completeness) might share similar vulnerabilities, so this kind of broad net might be even cheaper to cast than one would imagine at first.
> This of course would assume that vulnerabilities are in fact unique (which is admittedly questionable).
Yeah, I don't think all that generated software will be as unique as people expect.
Considering it will be generated with the same LLMs, which all share roughly the same training data, we will see that the patterns of vulnerabilities are also similar, and so easily exploitable.
I had the exact same thought. Pretty low probability that there's going to be a script-kiddie exploit for your custom tools. Pretty decent probability that there will be vulnerabilities present if someone cares enough to target you.
The counterpoint to that is that the exact same tools that are allowing this personal software creation at massive scale are also excellent at black box vulnerability analysis…
There are entire vulnerability/fault/misdesign classes that are fairly general and appear to naturally emerge.
See e.g. the lock screen gap that another commenter noted in a nearby thread.
But the exploits can use AI custom tools too. "Script Kiddie" is just now "Prompt Kiddie"
Although everyone might use their own flavor of "database" or "REST API", I can't imagine every layout being unique enough to avoid sharing entire exploit classes. AI isn't known for being super original, after all...
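To illustrate what a shared exploit class across many "own flavor" apps could look like: a sketch (function names hypothetical, using an in-memory SQLite table) of the string-interpolated-SQL pattern that generated database code tends to repeat, versus the parameterized form that closes the class:

```python
import sqlite3

def setup_db():
    # Toy stand-in for whatever schema a generated app happens to use.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")
    return conn

def find_user_naive(conn, name):
    # Recurring generated pattern: user input interpolated into SQL.
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(conn, name):
    # Parameterized query: the input can never change the statement.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()
```

The payload `' OR '1'='1` doesn't depend on any one app's layout, which is the point: the exploit targets the pattern, not the program.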
Otoh, TAU is bound to get really personal now :D
We should expect the same automated personalization to be used offensively, and for that personalization to be packaged into tools anyone can run (natural-language interface, likely).
(Appreciate your counterpoint for its own sake. It’s an interesting idea.)
If a vulnerability is found in the common, non-individualized ancestor software, how quickly will people patch their individual versions of it?