Comment by Filligree

6 hours ago

Anthropic has not in fact released it, and it does in fact appear to be that dangerous, judging by the flood of vulnerability reports seen by e.g. Daniel Stenberg.

Certainly it’s a strategy OpenAI has used before, and when they did so it was a lie. Altman’s dishonesty does not mean it can never be true, however.

The flood of reports that open source projects like curl, Linux and Chromium are getting is presumably due to public models like Open 4.6 that were released earlier this year, not models with limited availability.

> judging by the flood of vulnerability reports seen by e.g. Daniel Stenberg

Maybe I've missed something, but what Stenberg has been complaining about so far is the wave of sloppy reports, seemingly written mainly by AIs. Has that ratio recently shifted to mostly good reports describing real vulnerabilities?

Those vulnerabilities were found by open models as well.

  • Not really. The models were pointed specifically at the location of the vulnerability and given some extra guidance. That's an easier problem than simply being pointed at the entire code base.

    • Surely the Anthropic model also only looked at one chunk of code at a time; the entire code base cannot fit into context. So supplying an identical chunk size (per file, function, whatever) and seeing if the open source model can find anything seems fair. Deliberately prompting with the location of the problem is not.