Comment by tptacek
16 hours ago
I don't follow. LLMs spotted these bugs in the first place. You seem to be saying that these discoveries are indications that they're bad for vulnerability discovery.
From what I understand, the copy.fail bug was found by a researcher who noticed something weird and then used AI to scan the codebase for instances where it becomes a problem.
I bet that with a slightly looser prompt/harness, the LLM could have found these twin bugs too.
Yet at the same time, I also think that if the human researcher had manually scanned the code, he'd have noticed these bugs too.
FWIW I do think LLMs are great tools for finding vulnerabilities in general. Just that they were visibly not optimally applied in this case.
They could also have found all these things at the same time - and are slow-rolling the disclosures.
I don't think the copy.fail people understood the issue they found, as is evident from the heavy focus on AF_ALG/aead_algif, which is essentially "innocent" as we're seeing here.
I think LLMs are great for vulnerability discovery, but you need to not skimp on the legwork of understanding what it is you just found.
Right but without the LLM the bug doesn't get found at all.
That's not necessarily true. Who's to say the security researchers wouldn't have found it if they'd searched the code manually?
Safer to assume at least one of NSA, Mossad and a few others was sitting on it for years.
Yes, I agree. I'm not the GP poster.
No, they did not. Careful of falling for the psychosis.
> This finding was AI-assisted, but began with an insight from Theori researcher Taeyang Lee, who was studying how the Linux crypto subsystem interacts with page-cache-backed data.
https://xint.io/blog/copy-fail-linux-distributions
Theori is an AI security research firm.
You appear to want to die on the hill of "This vulnerability would never have been found if we lived in a world without LLM AI" which is a very strange hill to die on.
There's no question that we live in a world where LLM AI was involved in finding the copy.fail vulnerability at this specific time. It's also completely normal for people to see a vulnerability and then look closer and find related vulnerabilities or a deeper root cause. But there's no need to adopt an extreme "without LLM AI we don't find these vulnerabilities" position.
It seems as though this issue occurred to him, then he used their tool ("Xint Code") to analyze the codebase for instances of it.
I don’t think that’s what the OP is saying at all, just that using LLMs needs to be a cooperative research process.
Also I see you jumping around a lot to the defense of LLMs when I don’t think anyone is really attacking them. Maybe cool it a bit.
From the thread that ensued I feel comfortable that my interpretation of the comment (or rather, my confusion about it) was in fact germane.
Germane or not, the knee-jerk reactions related to LLMs are getting ridiculous, and it seems like it's the same people throwing down at a moment's notice and then chalking it up to a misunderstanding.
So like I said, just chill out.
It’s incredible humans spot stuff like this. I guess even more incredible that LLMs can do it!
Right. Finding the bug is in itself a win. It seems we’re jumping from that spend-electricity-to-find-bugs win to arguing about how some things around it are not quite good or comfy.