The system card unfortunately only refers to this [0] blog post and doesn't go into any more detail. In the blog post Anthropic researchers claim: "So far, we've found and validated more than 500 high-severity vulnerabilities".
The three examples given include two buffer overflows, which could very well be cherry-picked. It's hard to evaluate whether these vulns are actually "hard to find". I'd be interested to see the full list of CVEs and CVSS ratings to get an idea of how good these findings actually are.
Given the bogus claims [1] around GenAI and security, we should be very skeptical of this news.
[0] https://red.anthropic.com/2026/zero-days/
[1] https://doublepulsar.com/cyberslop-meet-the-new-threat-actor...
I know some of the people involved here, and the general chatter around LLM-guided vulnerability discovery, and I am not at all skeptical about this.
See it as one signal among many, not something to take at face value.
After all, they need time to fix the CVEs.
And it doesn't matter much to you as long as your investment in this is just 20 or 100 bucks per month anyway.
The Ghostscript one is interesting in terms of specific-vs-general effectiveness:
---
> Claude initially went down several dead ends when searching for a vulnerability—both attempting to fuzz the code, and, after this failed, attempting manual analysis. Neither of these methods yielded any significant findings.
...
> "The commit shows it's adding stack bounds checking - this suggests there was a vulnerability before this check was added. … If this commit adds bounds checking, then the code before this commit was vulnerable … So to trigger the vulnerability, I would need to test against a version of the code before this fix was applied."
...
> "Let me check if maybe the checks are incomplete or there's another code path. Let me look at the other caller in gdevpsfx.c … Aha! This is very interesting! In gdevpsfx.c, the call to gs_type1_blend at line 292 does NOT have the bounds checking that was added in gstype1.c."
---
Its attempt to analyze the code failed, but when it saw a concrete example of "in the history, someone added bounds checking", it did an "I wonder if they did that everywhere else this function is called" pass.
So after the commit history pointed it at that function, it found something it didn't find in its initial open-ended fuzzing and code-analysis search.
As someone who still reads the code that Claude writes, this sort of "big picture miss, small picture excellence" is not very surprising or new. It's interesting to think about what it would take to do that precise digging across a whole codebase, especially if it needs some sort of modularization/summarization of context versus trying to digest tens of millions of lines at once.
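For anyone who hasn't opened the Ghostscript source, here's a minimal sketch of the pattern being described. The function name gs_type1_blend and the file names come from the quotes above, but everything else (the signature, the stack layout, the caller names) is made up for illustration:

```c
#define OP_STACK_SIZE 32

/* Illustrative only -- hypothetical code, not the real Ghostscript source.
   The callee writes `count` values starting at `top` into a fixed-size stack. */
static void gs_type1_blend(double *stack, int top, int count)
{
    for (int i = 0; i < count; i++)
        stack[top + i] = 0.0;   /* out-of-bounds write if top + count > OP_STACK_SIZE */
}

/* Caller 1 -- the gstype1.c analogue: the fix added a bounds check here. */
void caller_patched(double *stack, int top, int count)
{
    if (top + count > OP_STACK_SIZE)
        return;                 /* reject oversized requests before the call */
    gs_type1_blend(stack, top, count);
}

/* Caller 2 -- the gdevpsfx.c analogue: same callee, no check.
   This is the sibling call site the "did they fix it everywhere?" pass finds. */
void caller_missed(double *stack, int top, int count)
{
    gs_type1_blend(stack, top, count);  /* overflow still reachable here */
}
```

Once framed this way, the variant-analysis step is almost mechanical: enumerate the callers of the patched function and check each one for the guard. The hard part was getting the commit history to suggest the frame.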
Daniel Stenberg has been vocal the last few months on Mastodon about being overwhelmed by false security issues submitted to the curl project.
So much so that he had to eventually close the bug bounty program.
https://daniel.haxx.se/blog/2026/01/26/the-end-of-the-curl-b...
We're discussing a project led by actual vulnerability researchers, not random people in Indonesia hoping to score $50 by cajoling maintainers about style nits.
Vulnerability researchers with a vested interest in making LLMs valuable. The difference isn't meaningful.
Daniel is a smart man. He's been frustrated by slop, but he has equally accepted [0] AI-derived bug submissions from people who know what they are doing.
I would imagine Anthropic are the latter type of individual.
[0]: https://mastodon.social/@bagder/115241241075258997
Not only that, he's very enthusiastic about AI analyzers such as ZeroPath and AISLE.
He's written about it here: https://daniel.haxx.se/blog/2025/10/10/a-new-breed-of-analyz... and talked about it in his keynote at FOSDEM - which I attended - last Sunday (https://fosdem.org/2026/schedule/event/B7YKQ7-oss-in-spite-o...).
The official release by Anthropic is very light on concrete information [0]: it contains only a small selection of very brief examples and lacks history, context, etc., making it very hard to glean any reliable information from it. I hope they'll release a proper report on this experiment; as it stands, it is impossible to say how much of this consists of actual, tangible flaws versus the misguided bug reports and pull requests that many larger FOSS projects are, unfortunately, suffering from at an alarming rate.
Personally, while I get that 500 sounds more impressive to investors and the market, I'd be far more impressed by a detailed, reviewed paper that showcases five to ten concrete examples, documented with the full process and the response from the team behind the potentially affected code.
It is far too early for me to make any definitive statement, but early testing does not indicate any major jump between Opus 4.5 and Opus 4.6 that would warrant such an improvement. I'd love nothing more than to be proven wrong on this front and will of course continue testing.
[0] https://red.anthropic.com/2026/zero-days/
Just 100 of the 500 are from OpenClaw, created by Opus 4.5.
OpenClaw uses Opus 4.5, but was written by Codex. Pete Steinberger has been a pretty hardcore Codex fan since he switched off Claude Code back in September-ish. I think he just felt Claude would make a better basis for an assistant even if he doesn't like working with it on code.
Well, even then, that's enormous economic value, given OpenClaw's massive adoption.
Not sure if trolling or serious.
Security Advisory: OpenClaw is spilling over to enterprise networks
https://www.reddit.com/r/cybersecurity/s/fZLuBlG8ET
Sounds like this is just a claim Anthropic is making with no evidence to support it. This is an ad.
How can you not believe them!? Anthropic stopped Chinese hackers from using Claude to conduct a large-scale cyber espionage attack just months ago!
Poe's law strikes again: I had to check your profile to be sure this was sarcasm.
When I read stuff like this, I have to assume that the blackhats have already been doing this, for some time.
It's not really worth much when it doesn't work most of the time though:
https://github.com/anthropics/claude-code/issues/18866 https://updog.ai/status/anthropic
It's a machine that spits out sev:hi vulnerabilities by the dozen and the complaint is the uptime isn't consistent enough?
If I'm attempting to use it as a service to do continuous checks on things and it fails 50% of the time, I'd say yes, wouldn't you?
updog? what's updog?
How weird the new attack vector for secret services must be... like "please train your models to push this exploit into code as a highly weighted pattern". Not Saying All answers are Corrupted In Attitude, but some "always come uppers" sure are absolutely right...
Create the problem, sell the solution remains an undefeated business strategy.
Insofar as model use cases go, I don't mind them throwing their heads against the wall in sandboxes to find vulnerabilities, but why would it do that without specific prompting? Is Anthropic fine with Claude setting its own agenda in red-teaming? That's like the complete opposite of sanitizing inputs.
Have they been verified?
Wasn't this Opus thing released like 30 minutes ago?
I understand the confusion; this was done by Anthropic's internal red team as part of model testing prior to release.
A bunch of companies get early access.
Yes, you just need to be on a Claude++ plan!
Singularity
Opus 4.6 uses time travel.
https://archive.is/N6In9
I feel like Daniel @ curl might have opinions on this.
You’re right, he does: https://daniel.haxx.se/blog/2025/10/10/a-new-breed-of-analyz...
Curl fully supports the use of AI tools by legitimate security researchers to catch bugs, and they have fixed dozens caught in this way. It’s just idiots submitting bugs they don’t understand that’s a problem.
I've mentioned previously somewhere that the languages we choose to write in will matter less for many of these arguments. When it comes to insecure C vs Rust, LLMs will eventually level the playing field.
I'm not arguing we all go back to C, but for companies that have large codebases in it, the people screaming "RUST REWRITE" can be quieted, and instead of making that large investment, the C codebase may continue. Not saying this is a GOOD thing, just a thing that may happen.
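To make that concrete, the class of bug being traded on here is the unchecked write that C compiles without complaint and that safe Rust rules out by construction. A hypothetical example of the kind of call site an LLM scan would need to flag:

```c
#include <string.h>

/* Classic C footgun: compiles cleanly, but overflows `buf` for any
   input longer than 15 characters (15 chars + NUL terminator). */
void store_name(const char *input)
{
    char buf[16];
    strcpy(buf, input);   /* no length check -- stack buffer overflow */
}
```

The bet above is that an LLM pass over a legacy codebase can flag call sites like this cheaply and continuously, which weakens (without eliminating) the memory-safety argument for a full rewrite.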
Is the word zero-day here superfluous? If they were previously unknown doesn't that make them zero-day by definition?
It's a term of art. In print media, the connotation is "vulnerabilities embedded into shipping software", as opposed to things like misconfigurations.
I thought zero-day meant actively being exploited in the wild before a patch is available?
Yes. As a security researcher this always annoys me.
Earlier source: https://news.ycombinator.com/item?id=46902374