Comment by malfist

It does if the person making the statement has a track record and proven expertise on the topic - and in this case, it may actually mean something to other people

  • Yes, as we all know, unsourced, unsubstantiated statements are the best way to verify claims regarding engineering practices. Especially when said person has a financial stake in the outcome of said claims.

    No conflict of interest here at all!

    • I have zero financial stake in Anthropic, and more broadly my career is more threatened by LLM-assisted vulnerability research (something I do not personally do serious work on) than it is aided by it. But I understand that the first principal component of casual skepticism on HN is "must be a conflict of interest".

    • A security researcher claiming that they’re not skeptical about LLMs being able to do part of their job - where is the financial stake in that?

Nobody is right about everything, but tptacek's takes on software security are a good place to start.

  • I'm interested in whether there's a well-known vulnerability researcher/exploit developer beating the drum that LLMs are overblown for this application. All I see is the opposite thing. A year or so ago I arrived at the conclusion that if I was going to stay in software security, I was going to have to bring myself up to speed with LLMs. At the time I thought that was a distinctive insight, but, no, if anything, I was 6-9 months behind everybody else in my field about it.

    There's a lot of vuln researchers out there. Someone's gotta be making the case against. Where are they?

    From what I can see, vulnerability research combines many of the attributes that make a problem especially amenable to LLM loop solutions: a huge corpus of operationalizable prior art, heavy pattern dependence, simple closed loops, forward progress with dumb stimulus/response tooling, lots of search problems. (A minimal sketch of such a loop follows at the end of this comment.)

    Of course it works. Why would anybody think otherwise?

    You can tell you're in trouble on this thread when everybody starts bringing up the curl bug bounty. I don't know if this is surprising news for people who don't keep up with vuln research, but Daniel Stenberg's curl bug bounty has never been where the action is in vuln research. What, a public bug bounty attracted an overwhelming amount of slop? Quelle surprise! Bug bounties have attracted slop for so long before mainstream LLMs existed that they might well have been the inspiration for slop itself.

    Also, a very useful component of a mental model about vulnerability research that a lot of people seem to lack (not just about AI, but in all sorts of other settings): money buys vulnerability research outcomes. Anthropic has eighteen squijillion dollars. Obviously, they have serious vuln researchers. Vuln research outcomes are in the model cards for OpenAI and Anthropic.
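
    To make the "simple closed loops" point concrete, here's a minimal sketch of the kind of loop I mean, in Python. This is scaffolding under stated assumptions, not anyone's actual tooling: ./target_harness stands in for whatever dumb stimulus/response tool drives the target, and ask_llm() is a placeholder for a real model API.

        # Sketch of an LLM-in-the-loop search cycle: run the target,
        # feed the observed behavior back to a model, let the model
        # propose the next input, repeat until the harness crashes.
        import subprocess

        def ask_llm(prompt: str) -> str:
            """Placeholder: wire up whatever model API you actually use."""
            raise NotImplementedError

        def run_harness(candidate: str) -> str:
            """Dumb stimulus/response half of the loop: run a
            hypothetical ./target_harness on one input, report back."""
            proc = subprocess.run(
                ["./target_harness"],
                input=candidate, capture_output=True, text=True, timeout=30,
            )
            return f"exit={proc.returncode}\n{proc.stdout}{proc.stderr}"

        def search(seed: str, max_iters: int = 50) -> str | None:
            candidate = seed
            for _ in range(max_iters):
                feedback = run_harness(candidate)
                if "AddressSanitizer" in feedback:
                    return candidate  # forward progress: a reproducible crash
                # Close the loop: the model sees the last result and
                # proposes the next candidate input.
                candidate = ask_llm(
                    "Input tried:\n" + candidate
                    + "\nHarness output:\n" + feedback
                    + "\nPropose the next input to try; output only the input."
                )
            return None

    The closed-loop shape, not the toy details, is the point: real systems swap the stub for an agent driving fuzzers, debuggers, and coverage tooling.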

    • > You can tell you're in trouble on this thread when everybody starts bringing up the curl bug bounty. I don't know if this is surprising news for people who don't keep up with vuln research, but Daniel Stenberg's curl bug bounty has never been where the action is in vuln research. What, a public bug bounty attracted an overwhelming amount of slop? Quelle surprise! Bug bounties have attracted slop for so long before mainstream LLMs existed that they might well have been the inspiration for slop itself.

      Yeah, that's just media reporting for you. As anyone who has ever administered a bug bounty programme on regular sites (h1, bugcrowd, etc.) can tell you, there was an absolute deluge of slop for years before LLMs came on the scene. It was just manual slop (by manual I mean running wapiti and copy/pasting the reports to h1).

    • > I was going to have to bring myself up to speed with LLMs

      What did you do beyond playing around with them?

      > Of course it works. Why would anybody think otherwise?

      Sam Altman is a liar. The folks pitching AI as an investment were previously flinging SPACs and crypto. (And they can usually speak to anything technical about AI about as competently as they can to battery chemistry or Merkle trees.) Copilot and Siri overpromised and underdelivered. Vibe coders are mostly idiots.

      The bar for believability in AI is about as high as its frontier's actual achievements.

> that means nothing to anybody else

Someone else here! Ptacek saying anything about security means a lot to this nobody.

To the point that I'm now going to take this seriously, where before I couldn't see through the fluff.

How have you been here 12 years and not noticed where and how often the username tptacek comes up?

It might mean nothing to you, but tptacek's words mean at least something to many of us here.

Also, he's a friend of someone I know & trust irl. But then again, who am I to you but yet another anon on a web forum?