Comment by fcpk
19 hours ago
The fact that there is no mention of what the bugs were is a little odd. It'd really be nice to see whether this is a "weird, never-happening edge case" or an actual issue. LLMs have an uncanny ability to identify failure patterns they have seen before, but those are not necessarily meaningful.
You can find them linked [1] in the OG article from Anthropic [2].
[1] https://www.mozilla.org/en-US/security/advisories/mfsa2026-1...
[2] https://www.anthropic.com/news/mozilla-firefox-security
The fact that some of the Claude-discovered bugs were quite severe is also a little more than something to brush off as "yeah, LLM, whatever". The list reads as quite meaningful to me, but I'm not a security expert anyway.
Here's a write-up for one of the bugs they found: https://red.anthropic.com/2026/exploit/
I’m guessing it might be some of these: https://www.mozilla.org/en-US/security/advisories/mfsa2026-1...
Yeah, the ones reported by Evyatar Ben Asher et al.
I correctly misread that as “et AI”.
Indeed, without that it looks like a fluffy marketing piece.
And now that you know it isn't, do you feel differently about the logic you used to write this comment?
I am curious: what are you hoping to get out of this comment? Will you feel better if they say yes? What is your plan if they say no?
Do I?