Comment by tptacek

5 hours ago

I know some of the people involved here, and the general chatter around LLM-guided vulnerability discovery, and I am not at all skeptical about this.

[flagged]

  • It does if the person making the statement has a track record and proven expertise on the topic, and in this case it may actually mean something to other people

  • Nobody is right about everything, but tptacek's takes on software security are a good place to start.

    • I'm interested in whether there's a well-known vulnerability researcher/exploit developer beating the drum that LLMs are overblown for this application. All I see is the opposite thing. A year or so ago I arrived at the conclusion that if I was going to stay in software security, I was going to have to bring myself up to speed with LLMs. At the time I thought that was a distinctive insight, but, no, if anything, I was 6-9 months behind everybody else in my field about it.

      There's a lot of vuln researchers out there. Someone's gotta be making the case against. Where are they?

      From what I can see, vulnerability research combines many of the attributes that make problems especially amenable to LLM loop solutions: huge corpus of operationalizable prior art, heavily pattern dependent, simple closed loops, forward progress with dumb stimulus/response tooling, lots of search problems.

      Of course it works. Why would anybody think otherwise?

      You can tell you're in trouble on this thread when everybody starts bringing up the curl bug bounty. I don't know if this is surprising news for people who don't keep up with vuln research, but Daniel Stenberg's curl bug bounty has never been where the action is in vuln research. What, a public bug bounty attracted an overwhelming amount of slop? Quelle surprise! Bug bounties attracted slop for so long before mainstream LLMs existed that they might well have been the inspiration for slop itself.

      Also, a very useful component of a mental model about vulnerability research that a lot of people seem to lack (not just about AI, but in all sorts of other settings): money buys vulnerability research outcomes. Anthropic has eighteen squijillion dollars. Obviously, they have serious vuln researchers. Vuln research outcomes are in the model cards for OpenAI and Anthropic.


  • Not sure why they flagged you. Your comment is just as meaningless as the one you replied to.

  • > that means nothing to anybody else

    Someone else here! Ptacek saying anything about security means a lot to this nobody.

    To the point that I'm now going to take this seriously where before I couldn't see through the fluff.

  • How have you been here 12 years and not noticed where and how often the username tptacek comes up?

  • It might mean nothing to you, but tptacek's words mean at least something to many of us here.

    Also, he's a friend of someone I know & trust irl. But then again, who am I to you but yet another anon on a web forum.