Comment by qnleigh

1 day ago

There is plenty of overhyping, no one denies that. But the antidote is not to dismiss everything. Ignore the words and look at the data.

In this case, I see a pretty strong case that this will significantly change computer security. They provide plenty of evidence that the models can create exploits autonomously, meaning that the cost of finding valuable security breaches will plummet once they're widely available.

You seem to see a "pretty strong case" from a bombastic press release.

Don't get me wrong, I do know the reality has changed. Even Greg K-H, the Linux stable maintainer, recently noted[1] that it's not funny anymore:

"Months ago, we were getting what we called 'AI slop,' AI-generated security reports that were obviously wrong or low quality," he said. "It was kind of funny. It didn't really worry us."

... "Something happened a month ago, and the world switched. Now we have real reports." It's not just Linux, he continued. "All open source projects have real reports that are made with AI, but they're good, and they're real." Security teams across major open source projects talk informally and frequently, he noted, and everyone is seeing the same shift. "All open source security teams are hitting this right now."

---

I agree that an antidote to the obnoxious hype is to pay attention to the actual capabilities and data. But let's not get too carried away.

[1] https://www.theregister.com/2026/03/26/greg_kroahhartman_ai_...

  • Hadn’t been to a KubeCon in about a year, as I’ve been tending to go to just the European ones. I definitely felt a much stronger “this is real” vibe at this event from people like Greg KH.

Is there any actual independent data though, or verification of any of these claims?

As it stands this is just a marketing programme for all involved.

  • What would be the product they're marketing by this campaign?

    • You don't market products, you market lifestyles/interests. Sell the sizzle, not the steak etc.

      For Anthropic it's "we own the big scary models, the AI security space, but it's ok we're responsible"

      For the partners it's "we're the Big Boys here and will look after your enterprise needs"

      None of it needs any more than anecdata and some nice, pre-approved quotes.

      Every organisation does it.

  • That's pretty disingenuous, bordering on ridiculous.

    Do they have a record of lying to you? No.

    Go read the system card. It's a lot more tame than you think; people are taking pieces out of it and hyping them. That doesn't mean it's not valid.

Which sounds like a great thing. Fewer undiscovered security vulnerabilities.

  • The only people panicking are probably those state level actors who were using these for their own benefit.

With the right prompting (mostly creating a narrative that frames the subject matter as okay to perform), other models have already been doing this for me, though. That's another confusing bit for me about how this is portrayed, and I refuse to believe I'm a revolutionary user, right?

I mean I’m sitting on $10k worth of bug payouts right now partially because that was already a thing.

  • > Non-experts can also leverage Mythos Preview to find and exploit sophisticated vulnerabilities. Engineers at Anthropic with no formal security training have asked Mythos Preview to find remote code execution vulnerabilities overnight, and woken up the following morning to a complete, working exploit. In other cases, we’ve had researchers develop scaffolds that allow Mythos Preview to turn vulnerabilities into exploits without any human intervention.