Comment by latexr
6 hours ago
> The whole artificial scarcity Anthropic created around Mythos / Glasswing is quite brilliant to be honest
Isn’t that just the same strategy OpenAI has used over and over? Sam Altman is always “OMG, the new version of ChatGPT is so scary and dangerous”, but then releases it anyway (tells you a lot about his values—or lack thereof) and it’s more of the same. Pretty sure Aesop had a fable about that. “The CEO who cried ‘what we’ve made is too dangerous’”, or something.
Right, but in Aesop’s fable, the wolf did eventually come. It’s asymmetric: in this case the wolf isn’t coming for the boy, it’s coming for everybody else.
The boy isn't crying wolf strictly to save himself. He does it to get the attention of the town, knowing they'll come to the aid of the livestock he's been tasked with watching. Yes, their aid is primarily to save the boy, but the danger is still to the larger community rather than isolated to the lookout.
The way they've published hashes of the bugs it has found, so that once those bugs are fixed they can responsibly disclose them while also proving they weren't lying, displays a willingness to back up claims with evidence that goes far beyond anything OpenAI has done.
This. I see a lot of cheap naysaying that never references the vuln hashes. If it's smoke and mirrors, the naysayers should loudly call out the specific hashes; whether the preimages get revealed or not, they'd be doing a great service in dissuading fake claims of world-changing tech.
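For anyone unfamiliar with the scheme being discussed: it's a standard cryptographic commit-then-reveal. You publish only a digest of the vulnerability report now, then reveal the full text after the fix ships, and anyone can verify the reveal matches the earlier commitment. A minimal sketch (all function names, report text, and salt values here are hypothetical; a salted SHA-256 is assumed):

```python
import hashlib

def commit(report: str, salt: str) -> str:
    """Publish this digest publicly before disclosure.
    The random salt prevents brute-forcing short or guessable reports."""
    return hashlib.sha256((salt + report).encode()).hexdigest()

def verify(report: str, salt: str, published_digest: str) -> bool:
    """After the bug is fixed, reveal report + salt; anyone can recompute
    the digest and check it against the earlier public commitment."""
    return commit(report, salt) == published_digest

# Hypothetical example:
digest = commit("Heap overflow in parser X, function y()", "random-salt-123")
assert verify("Heap overflow in parser X, function y()", "random-salt-123", digest)
assert not verify("A different claim entirely", "random-salt-123", digest)
```

The point is that the commitment costs nothing to fake-check: if the later reveals don't hash to the published digests, the claim collapses publicly.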
>Sam Altman is always “OMG, the new version of ChatGPT is so scary and dangerous”, but then releases it anyway
One of the many reasons nobody should give Scam Altman their money. It's continually infuriating that this serial grifter is in such a position of power.
That playbook dates from GPT-2, and Dario was one of the developers of that model while he was working at OpenAI. It's his playbook, not Sam Altman's.
> It was from GPT-2
Prior to the release of GPT-5, Sam said he was scared of it and compared it to the Manhattan Project.
Not just Altman. Buffett said it also, more generally.
https://youtu.be/vZlMWF6iFZg
This is pretty much correct, but Mustafa Suleyman has probably been doing it longer.
Not just part of the developers, but rather "led the development of large language models like GPT-2 and GPT-3" as per his website.
https://darioamodei.com/
Anthropic has not in fact released it, and it does in fact appear to be that dangerous, judging by the flood of vulnerability reports seen by e.g. Daniel Stenberg.
Certainly it’s a strategy OpenAI has used before, and when they did so it was a lie. Altman’s dishonesty does not mean it can never be true, however.
The flood of reports that open source projects like curl, Linux and Chromium are getting are presumably due to public models like Open 4.6 that released earlier this year, and not models with limited availability.
How many months until they release a better model than Mythos to a general audience?
GPT-2 wasn't fully released because OpenAI deemed it too dangerous. Ring a bell? https://openai.com/index/better-language-models/#sample1
A few months of restricting access to people they think will actually fix problems is a big deal. Obviously only an idiot would think it could or should be kept under wraps forever.
> judging by the flood of vulnerability reports seen by e.g. Daniel Stenberg
Maybe I've missed something, but what Stenberg has been complaining about so far is the wave of sloppy reports, seemingly written mainly by AIs. Has that ratio recently changed to mostly good reports with real vulnerabilities?
Some relevant links:
[1] https://www.npr.org/2026/04/11/nx-s1-5778508/anthropic-proje...
> Improvement in AI models' capabilities became noticeable early 2026, said Daniel Stenberg.
> He estimates that about 1 in 10 of the reports are security vulnerabilities, the rest are mostly real bugs. Just three months into 2026, the cURL team Stenberg leads has found and fixed more vulnerabilities than each of the previous two years.
[2] https://www.linkedin.com/posts/danielstenberg_curl-activity-...
> The new #curl, AI, security reality shown with some graphs. Part of my work-in-progress presentation at foss-north on April 28.
He has changed his opinion completely. Yes, the ratio has turned.
Yes:
> The challenge with AI in open source security has transitioned from an AI slop tsunami into more of a ... plain security report tsunami. Less slop but lots of reports. Many of them really good.
> I'm spending hours per day on this now. It's intense.
https://mastodon.social/@bagder/116336957584445742
Those vulnerabilities were found by open models as well.
Partly true. I think the consensus was that it wasn't comparable: Mythos swept the entire codebase and found the vulnerabilities, whereas the open models were told where to look for them.
https://news.ycombinator.com/item?id=47732337
Not really. The models were pointed specifically at the location of the vulnerability and given some extra guidance. That's an easier problem than simply being pointed at the entire code base.