
Comment by tptacek

18 hours ago

Theori is an AI security research firm.

You appear to want to die on the hill of "This vulnerability would never have been found if we lived in a world without LLM AI," which is a very strange hill to die on.

There's no question that we live in a world where LLM AI was involved in finding the copy fail vulnerability at this specific time, and it's completely normal for people to see a vulnerability and then look closer and find related vulnerabilities or a deeper root cause. But there's no need to adopt the extreme position that "without LLM AI we don't find these vulnerabilities."

  • It's weird to say I want to "die on this hill" because that's not even something I believe. There was nothing especially difficult about this particular vulnerability. My only observation is that nobody found it before; then an LLM security firm went out looking for Linux LPEs, and thus it was discovered.

    That is a very difficult fact pattern to which to attach the conclusion "LLMs have sabotaged security research" (my paraphrase).

    • Well... every new vulnerability is one that nobody found before.

      Otherwise, it wouldn't be classified as "new".

      --

      Edit:

      I think an LLM is very useful here.

      When a researcher spots something funny, instead of spending two days reading and testing, he can fire up an LLM and have it read all the code leading there in ~30 minutes.

    • The finding started with human intuition and was assisted by an LLM. You can yell "AI sec firm" 1000 times. A human got it started. You shouldn't die on that hill.

It seems as though this issue occurred to him, and then he used their tool ("Xint Code") to analyze the codebase for other instances of it.