
Comment by latexr

9 hours ago

> The whole artificial scarcity Anthropic created around Mythos / Glasswing is quite brilliant to be honest

Isn’t that just the same strategy OpenAI has used over and over? Sam Altman is always “OMG, the new version of ChatGPT is so scary and dangerous”, but then releases it anyway (tells you a lot about his values—or lack thereof) and it’s more of the same. Pretty sure Aesop had a fable about that. “The CEO who cried ‘what we’ve made is too dangerous’”, or something.

https://en.wikipedia.org/wiki/The_Boy_Who_Cried_Wolf

Right, but in Aesop’s fable, the wolf did eventually come. It’s asymmetric, because in this case the wolf is not coming for the boy, it’s coming for everybody else.

  • The boy isn't crying wolf strictly to save himself. He does it to get the attention of the town, knowing they'll come to the aid of the livestock he's been tasked with watching. Yes, their aid is primarily to save the boy, but the danger is still to the larger community rather than isolated to the lookout.

The way they've published hashes of the bugs it has found, so that once those bugs are fixed they can responsibly disclose them while also proving they weren't lying, shows a willingness to put forward verifiable evidence that goes far beyond anything OpenAI has done to support their claims.

  • This. I see a lot of cheap naysaying without reference to the vuln hashes. If it is smoke and mirrors, then the naysayers should loudly call out the specific hashes; when they get revealed, or don't, they will have done a great service in dissuading fake claims of world-changing tech.
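The hash-publication approach discussed above amounts to a cryptographic commitment: publish a digest of the finding now, reveal the finding after the fix, and anyone can verify the two match. A minimal sketch in Python, assuming a salted SHA-256 commitment (the function names, salt handling, and report text here are illustrative, not Anthropic's actual scheme):

```python
import hashlib
import secrets

def commit(report: str) -> tuple[str, str]:
    """Commit to a vulnerability report without revealing it.

    A random salt prevents anyone from brute-forcing short or
    predictable reports back out of the published hash.
    """
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + report).encode()).hexdigest()
    return digest, salt  # publish digest now; keep salt + report private

def verify(digest: str, salt: str, report: str) -> bool:
    """After disclosure, anyone can check the published hash matches."""
    return hashlib.sha256((salt + report).encode()).hexdigest() == digest

# Hypothetical usage: publish only `digest` at discovery time,
# then reveal `salt` and `report` once the bug is fixed.
digest, salt = commit("heap overflow in example_parse()")
assert verify(digest, salt, "heap overflow in example_parse()")
```

The salt matters: without it, a skeptic could enumerate plausible bug descriptions and check them against the published hash, defeating the point of keeping the vulnerability secret until it is patched.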

>Sam Altman is always “OMG, the new version of ChatGPT is so scary and dangerous”, but then releases it anyway

One of the many reasons nobody should give Scam Altman their money. It's continually infuriating that this serial grifter is in such a position of power.

That playbook dates back to GPT-2, and Dario was one of the developers of that model while he was working at OpenAI. It's his playbook, not Sam Altman's.

Anthropic has not in fact released it, and it does in fact appear to be that dangerous, judging by the flood of vulnerability reports seen by e.g. Daniel Stenberg.

Certainly it’s a strategy OpenAI has used before, and when they did so it was a lie. Altman’s dishonesty does not mean it can never be true, however.

  • The flood of reports that open source projects like curl, Linux and Chromium are getting is presumably due to publicly available models like Open 4.6, released earlier this year, and not models with limited availability.

  • > judging by the flood of vulnerability reports seen by e.g. Daniel Stenberg

    Maybe I've missed something, but what Stenberg has been complaining about so far is the wave of sloppy reports, seemingly written mainly by AIs. Has that ratio recently shifted toward mostly good reports with real vulnerabilities?

  • Those vulnerabilities were found by open models as well.

    • Not really. The models were pointed specifically at the location of the vulnerability and given some extra guidance. That's an easier problem than simply being pointed at the entire code base.
