Comment by jerf
6 days ago
At the moment, I would call "writing secure code that can be put on the internet" a super-human task. That is, even our most highly skilled human beings currently can't be blindly trusted to accomplish it; it requires review by teams of experts. We already don't even trust humans, so trusting AIs for the foreseeable future (as much as "the foreseeable future" may be contracting on us) is not something we should be doing.
And so as to avoid the reader binning this post as "oh, just some human-triumphalist AI denier", remember I just said I don't trust individual humans on this point either. Everyone, even experts at writing secure code, should be reviewed by other experts at this point.
I suspect this is going to prove to be something that LLMs can't do reliably, by their architecture. It's going to be a next-generation AI thing, whatever that may prove to be.
Agreed. Security is a task that not even a group of humans can perform with utmost scrutiny or perfection. 'Eternal vigilance is the price of liberty' and such. People want to move fast and break things without the backing infrastructure/maintenance (like... actually checking what the AI wrote).
Ah yes... Move fast and break things. Well, Facebook didn't overpromise on that one...