Pasting a big batch of new code and asking Claude "what have I forgotten? Where are the bugs?" is a very persuasive on-ramp for developers new to AI. It spots threading & distributed system bugs that would have taken hours to uncover before, and where there isn't any other easy tooling.
I bet there's loads of cryptocurrency implementations being pored over right now - actual money on the table.
Do you not run into too many false positives around "ah, this thing you used here is known to be tricky, the issue is..."
I've seen that happen when prompting it to look for concurrency issues specifically, vs saying something more like "please inspect this rigorously to look for potential issues..."
Just in case you didn't read the full article, this is how they describe finding the bugs in the Linux kernel as well.
Since it's a large codebase, they go even more specific and hint that the bug is in file A, then try again with a hint that the bug is in file B, and so on.
As a meta activity, I like to run different codebases through the same bug-hunt prompt and compare the number found as a barometer of quality.
I was very impressed when the top three AIs all failed to find anything other than minor stylistic nitpicks in a huge blob of what to me looked like “spaghetti code” in LLVM.
Meanwhile at $dayjob the AI reviews all start with “This looks like someone’s failed attempt at…”
You just have to be careful because it will sometimes spot bugs you could never uncover because they’re not real. You can really see the pattern matching at work with really twisted code. It tends to look at things like lock free algorithms and declare it full of bugs regardless of whether it is or not.
I have seen it start on a sentence, get lost and finish it with something like "Scratch that, actually it's fine."
And if it's not giving me a reason I can understand for a bug, I'm not listening to it! Mostly it is showing me I've mixed up two parameters, forgotten to initialise something, or referenced a variable from a thread that I shouldn't have.
The immediate feedback means the bug usually gets a better-quality fix than it would if I had got fatigued hunting it down! So variables get renamed to make sure I can't get them mixed up, a function gets broken out. It puts me in the mindset of "well, make sure this idiot can't make that mistake again!"
I usually do several passes of "review our work. Look for things to clean up, simplify, or refactor." It does usually improve the quality quite a lot; then I rewind history to before, but keep the changes, and submit the same prompt again, until it reaches the point of diminishing returns.
ive gone down this rabbit hole and i dunno, sometimes claude chases a smoking gun that just isn't a smoking gun at all. if you ask him to help find a vulnerability he's not gonna come back empty handed even if there's nothing there, he might frame a nice-to-have as a critical problem. in my exp you have to build tests that prove vulnerabilities in some way. otherwise he's just gonna rabbithole while failing to look at everything.
ive had some remarkable successes with claude and quite a few "well that was a total waste of time" efforts with claude. for the most part i think trying to do uncharted/ambitious work with claude is a huge coinflip. he's great for guardrailed and well understood outcomes though, but im a little burnt out and unexcited at hearing about the gigantic-claude exercises.
Not "hidden", but probably more like "no one bothered to look".
declares a 1024-byte owner ID, which is an unusually long but legal value for the owner ID.
When I'm designing protocols or writing code with variable-length elements, "what is the valid range of lengths?" is always at the front of my mind.
it uses a memory buffer that’s only 112 bytes. The denial message includes the owner ID, which can be up to 1024 bytes, bringing the total size of the message to 1056 bytes. The kernel writes 1056 bytes into a 112-byte buffer
This is something a lot of static analysers can easily find. Of course asking an LLM to "inspect all fixed-size buffers" may give you a bunch of hallucinations too, but could be a good starting point for further inspection.
"No one bothered to look" is how most vulnerabilities work. Systems development produces code artifacts with compounding complexity; it is extraordinarily difficult to keep up with it manually, as you know. A solution to that problem is big news.
Static analyzers will find all possible copies of unbounded data into smaller buffers (especially when the size of the target buffer is easily deduced). They will then report them whether or not every path to that code clamps the input. Which is why this approach doesn't work well in the Linux kernel in 2026.
With a capable static analyzer that is not true. In many common cases they can deduce the possible ranges of values based on branching checks along the data flow path, and if that range falls within the buffer then it does not report it.
> Not "hidden", but probably more like "no one bothered to look".
Well yeah. There weren't enough "someones" available to look. There are a finite number of qualified individuals with time available to look for bugs in OSS, resulting in a finite amount of bug finding capacity available in the world.
Or at least there was. That's what's changing as these models become competent enough to spot and validate bugs. That finite global capacity to find bugs is now increasing, and actual bugs are starting to be dredged up. This year will be very very interesting if models continue to increase in capability.
I was just thinking about this and what it means for closed source code.
Many people with skin in the game will be spending tokens on hardening OSS bits they use, maybe even part of their build pipelines, but if the code is closed you have to pay for that review yourself, making you rather uncompetitive.
You could say there's no change there, but the number of people who can run a Claude review and the number of people who can actually review a complicated codebase are several orders of magnitude apart.
Will some of them produce bad PRs? Probably. The battle will be to figure out how to filter them at scale.
> This is something a lot of static analysers can easily find.
And yet they didn't (either no one ran them, or they didn't find it, or they did find it but it was buried in hundreds of false positives) for 20+ years...
I find it funny that every time someone does something cool with LLMs, there's a bunch of takes like this: it was trivial, it's just not important, my dad could have done that in his sleep.
Remember Heartbleed in OpenSSL? That long predated LLMs, but same story: some bozo forgot how long something should/could be, and no one else bothered to check either.
And even if that's true (and it frequently is!), detractors usually miss the underlying and immense impact of "sleeping dad capability" equivalent artificial systems.
Horizontally scaling "sleeping dads" takes decades, but inference capacity for a sleeping dad equivalent model can be scaled instantly, assuming one has the hardware capacity for it. The world isn't really ready for a contraction of skill dissemination going from decades to minutes.
There’s the classic case of the Debian OpenSSL vulnerability, where technically illegal but practically secure code was turned into superficially correct but fundamentally insecure code in an attempt to fix a bug identified by a (dynamic, in this case) analyzer.
I replicated this experiment on several production codebases and got several crits. Lots of dupes, lots of false positives, lots of bugs that weren't actually exploitable, lots of accepted/ known risks. But also, crits!
I think this really needs to be part of the message. It's great that Claude found a vulnerability that apparently has been overlooked for a long time. It's even proper for Anthropic to tout the find. But we should all ask about the signal-to-noise ratio that would have been part of the process. If it was nothing but signal, that would be worth touting, too. But I expect there was more noise than they'd care to admit.
Every time I read these titles, I wonder if people are for some reason pushing the narrative that Claude is way smarter than it really is, or if I'm using it wrong.
They want me to code AI-first, and the amount of hallucinations and weird bugs and inconsistencies that Claude produces is massive.
Lots of code that it pushes would NOT have passed a human/human code review 6 months ago.
Apart from obvious PR (if you ever needed to lean into the AI wave, this of all places is it) and fanboyism, which is just part of human nature, why can't both be true?
It can properly excel in some things while being less than helpful in others. These are computers from the beginning, 1000x rehashed and now with an extra twist.
It's always the inconsistencies which amaze me, from the article:
> I have so many bugs in the Linux kernel that I can’t report because I haven’t validated them yet
You have "so many?" Are they uncountable for some reason? You "haven't validated" them? How long does that take?
> found a total of five Linux vulnerabilities
And how much did it cost you in compute time to find those 5?
These articles are always fantastically light on the details which would make their case for them. Instead it's always breathless prognostication. I'm deeply suspicious of this.
>And how much did it cost you in compute time to find those 5?
This is the last thing I'd worry about if the bug is serious in any way. You have attackers like nation states that will have huge budgets to rip your software apart with AI and exploit your users.
Also there have been a number of detailed articles about AI security findings recently.
You are suspicious because you probably haven't worked anywhere that's AI-first. Anyone that's worked at a modern tech company will find this absolutely believable.
Like what, you expect Nicholas to test each vuln when he has more important work to do (ie his actual job?)
Not OC, but I tried OpenCode with Gemini, Claude and Kimi, and all of them were completely unable to solve any non-trivial problems which are not easily solved with some existing algorithm.
I understand how people use those tools if all they do is build CRUD endpoints and UIs for those endpoints (which is admittedly what most programmers probably do for their job). But for anything that requires any sort of problem solving skills, I don't understand how people use them. I feel like I live in a completely different world from some of the people who push agentic coding.
This is always overlooked. AI stories sound like "with the right attitude, you too can win $10M in the lottery, like this man just did".
Running an LLM on 1000 functions produces 10000 reports (these numbers are accurate because I just generated them) — of course only the lottery winners who pulled the actually-correct report from the bag will write an article in the Evening Post.
Tokens are insanely cheap at the moment.
Through OpenRouter a message to Sonnet costs about $0.001, or about $0.0001 using Devstral 2512.
An extended coding session/feature expansion will cost me about $5 in credits.
Split up your codebase so you don't have to feed all of it into the LLM at once and it's very reasonable.
It cost me ~$750 to find a tricky privilege escalation bug in a complex codebase where I knew the rough specs but didn't have the exploit. There are certainly still many other bugs like that in the codebase, and it would cost $100k-$1MM to explore the rest of the system that deeply with models at or above the capability of Opus 4.6.
It's definitely possible to do a basic pass for much less (I do this with autopen.dev), but it is still very expensive to exhaustively find the harder vulnerabilities.
Agentic tasks use up a huge amount of tokens compared to simple chatting. Every elementary interaction the model has with the outside world (even while doing something as simple as reading code from a large codebase) is a separate "chat" message and "response", and these add up very quickly.
Inference cost has dropped 300x in 3 years, no reason to think this won't keep happening with improvements on models, agent architecture and hardware.
Also, too many people are fixated on American models when Chinese ones deliver similar quality, often at a fraction of the cost.
From my tests, the "personality" of an LLM, its tendency to stick to prompts and not derail, far outweighs the low-single-digit percentage delta in benchmark performance.
Not to mention, different LLMs perform better at different tasks, and they are all particularly sensitive to prompts and instructions.
“Thing x happened in the past, therefore it will continue to happen in the future” is perhaps one of the most, if not the most pervasive human-created fallacies anywhere.
I'm thinking about how much money Anthropic etc are making from intelligence services who are running Opus 4.6 on ultra high settings 24 hours a day to find these kinds of exploits and take advantage of them before others do.
Expensive for me and you, but peanuts for a nation state.
I'm interested in the implications for the open source movement, specifically about security concerns. Anyone know if there has been a study of how well Claude Code works on closed source (but decompiled) code?
I’ve had Claude Code diagnose bugs in a compiler we wrote together by using gdb and objdump to examine binaries it produces. We don’t have DWARF support yet so it is just examining the binary. That’s not security work, but it’s adjacent to the sorts of skills you’re talking about. The binaries are way smaller than real programs, though.
> Claude Code works on closed source (but decompiled) source
Very likely not nearly as well, unless there are many open source libraries in use and/or the language+patterns used are extremely popular. The really huge win for something like the Linux kernel and other popular OSS is that the source appears in the training data, a lot. And many versions. So providing the source again and saying "find X" is primarily bringing into focus things it's already seen during training, with little novelty beyond the updates that happened after knowledge cutoff.
Giving it a closed source project containing a lot of novel code means it only has the language and its "intuition" to work from, which is a far greater ask.
I’m not a security researcher, but I know a few and I think universally they’d disagree with this take.
The llms know about every previous disclosed security vulnerability class and can use that to pattern match. And they can do it against compiled and in some cases obfuscated code as easily as source.
I think the security engineers out there are terrified that the balance of power has shifted too far to the finding of closed source vulnerabilities because getting patches deployed will still take so long. Not that the llms are in some way hampered by novel code bases.
Definitely not my wheelhouse, but I would expect it to be considerably worse.
Simply because the source code contains names that were intended to communicate meaning in a way that the LLM is specifically trained to understand (i.e., identifier names chosen from human natural language, chosen to scan well when interspersed into the programming language grammar, plus comments etc.). At least if debugging information has been scrubbed (and the comments definitely are). Ghidra et al. can only do so much to provide the kind of semantic content that an LLM is looking for.
I've cut-and-pasted some assembly code into the free version of ChatGPT to reverse engineer some old binaries and its ability to find meaning was just scary.
It would be much more interesting/efficient if the LLM had tokens for machine instructions so extracting instructions would be done at tokenizing phase, not by calling objdump.
But I guess I'm not the first one to have that idea. Any references to research papers would be welcome.
Now imagine how much more it could have derived if I had given it the full executable, with all the strings, pointers to those strings and whatnot.
I've done some minor reverse engineering of old test-equipment binaries in the past, and LLMs are incredible at figuring out what the code is doing, way better than the usual approach of decompiling with Ghidra.
https://www.youtube.com/watch?v=1sd26pWhfmg is the presentation itself. The prompts are trivial; the bug (and others) looks real and well-explained - I'm still skeptical but this looks a lot more real/useful than anything a year ago even suggested was possible...
Supposedly humans have become “100x”™ more productive with these AI tools, but nowhere to be seen are the benefits for the wielders of said tools. Is your salary 100x higher? Are you able to spend more time with your family/friends instead of at the office? Why are we still putting up with these outdated work practices if LLMs have made everybody so much more productive?
Are you aware of how productivity has increased over the past century in general? That didn't lead to 100x wage increases or more free time. Labour is a market commodity and follows market rules. Increased productivity means more gets done in less time. It doesn't mean you spend less time working
There will always be more bugs than we can fix. AI can patch as well, but if your system is difficult to test and doesn't have rigorous validation you will likely get an unacceptable amount of regression.
I hope performance and bloat are next up for the LLMs to try and improve.
Especially on the perf side, I would wager LLMs can take code from the meat sacks' "whatever works" to "how do I solve this with the best available algorithm and architecture (that also follows some best practices)".
Making public that AI is able to find this kind of vulnerability is a big problem. In this case it's nice that the vulnerability was fixed before publishing, but if a cracker had found it first, the result would have been extremely different. This kind of news only opens crackers' eyes.
This isn't surprising. What is not mentioned is that Claude Code also found one thousand false positive bugs, which developers spent three months to rule out.
That's not what is happening right now. The bugs are often filtered later by LLMs themselves: if a second pipeline can't reproduce the crash / violation / exploit in any way, the false positives are usually evicted before ever reaching human scrutiny. Checking whether a real vulnerability can be triggered is a trivial task compared to finding one, so this second pipeline is very reliable: if a finding passes it, it is almost certainly a real bug, and very few real bugs fail it.

It does not matter how much LLMs advance; people ideologically against them will always deny they have an enormous amount of usefulness. This is expected in the general population, but seeing so many people on Hacker News refuse to believe their own eyes feels weird.
I’ve been around long enough to remember people saying that VMs are a useless waste of resources with dubious claims about isolation, the cloud is just someone else’s computer, containers are pointless, and now it’s AI. There is an astonishing amount of conservatism in the hacker scene.
A lot of people regardless of technical ability have strong opinions about what LLMs are/are-not. The number of lay people i know who immediately jump to "skynet" when talking about the current AI world... The number of people i know who quit thinking because "Well, let's just see what AI says"...
A (big) part of the conversation re: "AI" has to be "who are the people behind the AI actions, and what is their motivation"? Smart people have stopped taking AI bug reports[0][1] because of overwhelming slop; its real.
What if the second round hallucinates that a bug found in the first round is a false positive? Would we ever know?
> It does not matter how much LLMs advance, people ideologically against them will always deny they have an enormous amount of usefulness.
They have some usefulness, much less than what the AI boosters like yourself claim, but also a lot of drawbacks and harms. Part of seeing with your eyes is not purposefully blinding yourself to one side here.
Same. Codex and Claude Code on the latest models are really good at finding bugs, and really good at fixing them in my experience. Much better than 50% in the latter case and much faster than I am.
> I have so many bugs in the Linux kernel that I can’t report because I haven’t validated them yet… I’m not going to send [the Linux kernel maintainers] potential slop, but this means I now have several hundred crashes that they haven’t seen because I haven’t had time to check them.
>
> —Nicholas Carlini, speaking at [un]prompted 2026
Static/Dynamic analysis tools find vulnerabilities all the time. Almost all projects of a certain size have a large backlog of known issues from these boring scanners. The issue is sorting through them all and triaging them. There's too many issues to fix and figuring out which are exploitable and actually damaging, given mitigations, is time consuming.
Am I impressed Claude found an old bug? Sort of... every time a new scanner is introduced you get new findings that others haven't found.
Static analyzers find large numbers of hypothetical bugs, of which only a small subset are actionable, and the work to resolve which are actionable and which are e.g. "a memcpy into an 8 byte buffer whose input was previously clamped to 8 bytes or less" is so high that analyzers have little impact at scale. I don't know off the top of my head many vulnerability researchers who take pure static analysis tools seriously.
Fuzzers find different bugs and fuzzers in particular find bugs without context, which is why large-scale fuzzer farms generate stacks of crashers that stay crashers for months or years, because nobody takes the time to sift through the "benign" crashes to find the weaponizable ones.
LLM agents function differently than either method. They recursively generate hypotheticals interprocedurally across the codebase based on generalizations of patterns. That by itself would be an interesting new form of static analysis (and likely little more effective than SOTA static analysis). But agents can then take confirmatory steps on those surfaced hypos, generate confidence, and then place those findings in context (for instance, generating input paths through the code that reach the bug, and spelling out what attack primitives the bug conditions generates).
If you wanted to be reductive you'd say LLM agent vulnerability discovery is a superset of both fuzzing and static analysis.
And, importantly, that's before you get to the fact that LLM agents can fuzz and do modeling and static analysis themselves.
I'm growing allergic to the hype train and the slop. I've watched real-life talks about people that sent some prompt to Claude Code and then proudly present something mediocre that they didn't make themselves to a whole audience as if they'd invented the warm water, and that just makes me weary.
But at the same time, it has transformed my work from writing every bit of code myself, to me writing the cool and complex things while giving directions to a helper to sort out the boring grunt work, and it's amazingly capable at that. It _is_ a hugely powerful tool.
But haters only see red, and lovers see everything through pink glasses.
Everything changed in the past 6 months and coding LLMs went from being OK-ish to insanely good. People also got better at using them.
Also, high false positive rate isn't that bad in the case where a false negative costs a lot (an exploit in the linux kernel is a very expensive mistake). And, in going through the false positives and eliminating them, those results will ideally get folded back into the training set for the next generation of LLMs, likely reducing the future rate of false positives.
This is not how first party vulnerability research with LLMs go; they are incredibly valuable versus all prior tooling at triage and producing only high quality bugs, because they can be instructed to produce a PoC and prove that the bug is reachable. It’s traditional research methods (fuzzing, static analysis, etc.) that are more prone to false positive overload.
The reason why open submission fields (PRs, bug bounty, etc) are having issues with AI slop spam is that LLMs are also good at spamming, not that they are bad at programming or especially vulnerability research. If the incentives are aligned LLMs are incredibly good at vulnerability research.
Okay, so anti AI people are just making shit up now. Got it.
According to Willy Tarreau[0] and Greg Kroah-Hartman[1], this trend has recently significantly reversed, at least from the reports they've been seeing on the Linux kernel. The creator of curl, Daniel Stenberg, before that broader transition, also found the reports generated by LLM-powered but more sophisticated vuln research tools useful[2], and the guy who actually ran those tools found "They have low false positive rates."[3]
Additionally, there was no mention in the talk by the guy who found the vuln discussed in TFA of what the false positive rate was, or that he had to sift through reports that were mostly slop — or whether he was doing it out of courtesy. Also, he said he found only several hundred, iirc, not "thousands." All he said was:
"I have so many bugs in the Linux kernel that I can’t report because I haven’t validated them yet… I’m not going to send [the Linux kernel maintainers] potential slop, but this means I now have several hundred crashes that they haven’t seen because I haven’t had time to check them." (TFA)
He quite evidently didn't have to sift through thousands, or spend months, to find this one, either.
Yes, you can. I strongly encourage people skeptical about this, and who know at a high-level how this kind of exploitation works, to just try it. Have Claude or Codex (they have different strengths at this kind of work) set up a testing harness with Firecracker or QEMU, and then work through having it build an exploit.
What is with the negativity against AI in YC? Can anyone point a finger at why this anti take is so prominent? We're living through the most revolutionary moment of software since its inception, and the main thing that gets consistently upvoted is negativity, FUD, "it doesn't work in this case", or "it's all slop".
> Can anyone point a finger of why this anti take is so prominent?
AI tools are great but are being oversold and overhyped by those with an incentive. So there is a continuous drumbeat of "AI will do all the code for you!", "Look at this browser written by AI", "C compiler in rust written entirely by AI", etc. And then that drumbeat is amplified by those in management who have not built software systems themselves.
What happened to the AI generated "C compiler in rust" ? or the browser written by AI ? - they remain a steaming pile of almost-working code. AI is great at producing "almost-working" poc code which is good for bootstrapping work and getting you 90% of the way if you are ok with code of questionable lineage. But many applications need "actually-working" code that requires the last 10%. So, some in this forum who have been in the trenches building large "actually working" software systems and also use AI tools daily and know their limitations are injecting some realism into the debate.
I think the anti-AI stance has been reversing on HN as tooling improves and people try it. It’s only been a little over a year since Claude Code was released, and 3 or 4 months since the models got really capable. People need time to adjust, even if I would expect devs to be more up-to-date than most.
People’s willingness to argue about technology they’ve barely used is always bewildering to me though.
Every single post here these days. “Startup founder of Communality.ai says ai good for people” and then the comments are AI bros declaring that all work can end, the good times are here at last
Thank you for your kind comment. I recommend you watch the actual talk, and then understand what exploiting RCEs in things like the Linux kernel at such a scale that defenders can no longer keep up with actually means. The latter is their claim, not mine.
Also realize that, unlike a security researcher, an attacker doesn't necessarily need to carefully review the model output to filter out the slop before a bug submission. They mostly just need to run the shit.
It was Opus 4.6 (the model). You could discover this with some other coding agent harness.
The other thing that bugs me (and frankly I don't have the time to try it out myself) is that they did not compare to see if the same bug would have been found with GPT 5.4 or perhaps even an open source model.
Without that, and for the reasons I posted above, while I am sure this is not the intention, the post reads like an ad for claude code.
I don't understand this critique. Carlini did use Claude Code directly. Claude Code used the Claude Opus 4.6 model, but I don't know why you'd consider it inaccurate to say Claude Code found it.
GPT 5.4 might be capable of finding it as well, but the article never made any claims about whether non-Anthropic models could find it.
If I wrote about achieving 10k QPS with a Go server, is the article misleading unless I enumerate every other technology that could have achieved the same thing?
Also, he did compare with earlier versions that, before 4.5, were dramatically worse at finding the same problems. There's even a graph. That seems to pretty solidly support the idea that this is "gain of function" as it were...
No, the title is correct and you are misreading or didn't read. It was found with Claude Code; that's the quote. This isn't a model eval, it's an Anthropic employee talking about Claude Code. So comparing to other models isn't a thing to reasonably expect.
> Nicholas has found hundreds more potential bugs in the Linux kernel, but the bottleneck to fixing them is the manual step of humans sorting through all of Claude’s findings
No, the problem is sorting out thousands of false positives from Claude Code's reports. 5 valid out of 1000+ reports is statistically worse than running a fuzzer on the codebase.
> 5 out of 1000+ reports to be valid is statistically worse than running a fuzzer on the codebase.
Carlini said "hundreds" of crashes, not 1000+.
It's not that only 5 were true positives and the rest were false positives. 5 were true positives and Carlini doesn't have bandwidth to review the rest. Presumably he's reviewed more than 5 and some were not worth reporting, but we don't know what that number is. It's almost certainly not hundreds.
Keep in mind that Carlini's not a dedicated security engineer for Linux. He's seeing what's possible with LLMs and his team is simultaneously exploring the Linux kernel, Firefox,[0] GhostScript, OpenSC,[1] and probably lots of others that they can't disclose because they're not yet fixed.
> On the kernel security list we've seen a huge bump of reports. We were between 2 and 3 per week maybe two years ago, then reached probably 10 a week over the last year with the only difference being only AI slop, and now since the beginning of the year we're around 5-10 per day depending on the days (fridays and tuesdays seem the worst). Now most of these reports are correct, to the point that we had to bring in more maintainers to help us. ... Also it's interesting to keep thinking that these bugs are within reach from criminals so they deserve to get fixed.
Code review is the real deal for these models. This area seems largely underappreciated to me. Especially for things like C++, where static analysis tools have traditionally generated too many false positives to be useful, the LLMs seem especially good. I'm no black hat but have found similarly old bugs at my own place. Even if shit is hallucinated half the time, it still pays off when it finds that really nasty bug.
Instead, people seem to be infatuated with vibe coding technical debt at scale.
>Instead, people seem to be infatuated with vibe coding technical debt at scale.
Don't blame them. That is what AI marketing pushes. And people are sheep to marketing.
I understand why AI companies don't want to promote it: they understand that the lowest-common-denominator majority of their client base won't see code review as a critical part of their business. If LLMs are marketed as best suited for code review, then they probably cannot justify the investments they are getting...
Whether it's the real deal here or not doesn't necessarily mean Claude Code usage is a net positive gain for software security overall. In fact it is likely the opposite.
That may hurt some heavy CC users' feelings, but that's a different thing.
A developer using Claude Code found this bug. Claude is a tool. It is used by developers. It should not sign commits. Neovim never tried to sign commits with me, nor Zed.
"Should not"? Is that your new law? The non-agentic Neovim and Zed never tried to sign commits for me, therefore no tool, no matter how advanced, is ever allowed to sign a commit?
Did it ever occur to you that for whatever reason you just might not be cut out for the software treadmill?
Pasting a big batch of new code and asking Claude "what have I forgotten? Where are the bugs?" is a very persuasive on-ramp for developers new to AI. It spots threading & distributed system bugs that would have taken hours to uncover before, and where there isn't any other easy tooling.
I bet there's loads of cryptocurrency implementations being pored over right now - actual money on the table.
I like biasing it towards the fact that there is a bug, so it can't just say "no bugs! all good!" without looking into it very hard.
Usually I ask something like this:
"This code has a bug. Can you find it?"
Sometimes I also tell it that "the bug is non-obvious"
Which I've anecdotally found to have a higher rate of success than just asking for a spot check
Do you not run into too many false positives around "ah, this thing you used here is known to be tricky, the issue is..."
I've seen that when prompting it to look for concurrency issues vs saying something more like "please inspect this rigorously to look for potential issues..."
Just in case you didn't read the full article, this is how they describe finding the bugs in the Linux kernel as well.
Since it's a large codebase, they go even more specific and hint that the bug is in file A, then try again with a hint that the bug is in file B, and so on.
As a meta activity, I like to run different codebases through the same bug-hunt prompt and compare the number found as a barometer of quality.
I was very impressed when the top three AIs all failed to find anything other than minor stylistic nitpicks in a huge blob of what to me looked like “spaghetti code” in LLVM.
Meanwhile at $dayjob the AI reviews all start with “This looks like someone’s failed attempt at…”
> so it can't just say "no bugs! all good!"
If anyone, or anything, ever answers a question like that, you should stop asking it questions.
You just have to be careful because it will sometimes spot bugs you could never uncover because they’re not real. You can really see the pattern matching at work with really twisted code. It tends to look at things like lock free algorithms and declare it full of bugs regardless of whether it is or not.
I have seen it start on a sentence, get lost and finish it with something like "Scratch that, actually it's fine."
And if it's not giving me a reason I can understand for a bug, I'm not listening to it! Mostly it is showing me I've mixed up two parameters, forgotten to initialise something, or referenced a variable from a thread that I shouldn't have.
The immediate feedback means the bug usually gets a better-quality fix than it would if I had got fatigued hunting it down! So variables get renamed to make sure I can't get them mixed up, a function gets broken out. It puts me in the mind of "well make sure this idiot can't make that mistake again!"
> Pasting a big batch of new code and asking Claude "what have I forgotten? Where are the bugs?"
It's actually the main way I use CC/codex.
I find Codex sufficiently better for it that I’ve taught Claude how to shell out to it for code reviews
I usually do several passes of "review our work. Look for things to clean up, simplify, or refactor." It does usually improve the quality quite a lot; then I rewind history to before, but keep the changes, and submit the same prompt again, until it reaches the point of diminishing returns.
> It spots threading & distributed system bugs that would have taken hours to uncover before, and where there isn't any other easy tooling.
Go has a built in race detector which may be useful for this too: https://go.dev/doc/articles/race_detector
Unsure if it's suitable for inclusion in CI, but seems like something worth looking into for people using Go.
I've gone down this rabbit hole and I dunno, sometimes Claude chases a smoking gun that just isn't a smoking gun at all. If you ask him to help find a vulnerability he's not gonna come back empty-handed even if there's nothing there; he might frame a nice-to-have as a critical problem. In my experience you have to build tests that prove vulnerabilities in some way, otherwise he's just gonna rabbithole while failing to look at everything.
I've had some remarkable successes with Claude and quite a few "well that was a total waste of time" efforts with Claude. For the most part I think trying to do uncharted/ambitious work with Claude is a huge coinflip. He's great for guardrailed and well-understood outcomes though, but I'm a little burnt out and unexcited at hearing about the gigantic-Claude exercises.
> "Codex wrote this, can you spot anything weird?"
Not "hidden", but probably more like "no one bothered to look".
> declares a 1024-byte owner ID, which is an unusually long but legal value for the owner ID.
When I'm designing protocols or writing code with variable-length elements, "what is the valid range of lengths?" is always at the front of my mind.
> it uses a memory buffer that’s only 112 bytes. The denial message includes the owner ID, which can be up to 1024 bytes, bringing the total size of the message to 1056 bytes. The kernel writes 1056 bytes into a 112-byte buffer
This is something a lot of static analysers can easily find. Of course asking an LLM to "inspect all fixed-size buffers" may give you a bunch of hallucinations too, but could be a good starting point for further inspection.
"No one bothered to look" is how most vulnerabilities work. Systems development produces code artifacts with compounding complexity; it is extraordinarily difficult to keep up with it manually, as you know. A solution to that problem is big news.
Static analyzers will find all possible copies of unbounded data into smaller buffers (especially when the size of the target buffer is easily deduced). It will then report them whether or not every path to that code clamps the input. Which is why this approach doesn't work well in the Linux kernel in 2026.
With a capable static analyzer that is not true. In many common cases they can deduce the possible ranges of values based on branching checks along the data flow path, and if that range falls within the buffer then it does not report it.
But what was the likelihood of this bug being exploited by malicious actors?
> Not "hidden", but probably more like "no one bothered to look".
Well yeah. There weren't enough "someones" available to look. There are a finite number of qualified individuals with time available to look for bugs in OSS, resulting in a finite amount of bug finding capacity available in the world.
Or at least there was. That's what's changing as these models become competent enough to spot and validate bugs. That finite global capacity to find bugs is now increasing, and actual bugs are starting to be dredged up. This year will be very very interesting if models continue to increase in capability.
I was just thinking about this and what it means for closed source code.
Many people with skin in the game will be spending tokens on hardening OSS bits they use, maybe even part of their build pipelines, but if the code is closed you have to pay for that review yourself, making you rather uncompetitive.
You could say there's no change there, but the number of people who can run a Claude review and the number of people who can actually review a complicated codebase are several orders of magnitude apart.
Will some of them produce bad PRs? Probably. The battle will be to figure out how to filter them at scale.
> This is something a lot of static analysers can easily find.
And yet they didn't (either no one ran them, or they didn't find it, or they did find it but it was buried in hundreds of false positives) for 20+ years...
I find it funny that every time someone does something cool with LLMs, there's a bunch of takes like this: it was trivial, it's just not important, my dad could have done that in his sleep.
Remember Heartbleed in OpenSSL? That long predated LLMs, but same story: some bozo forgot how long something should/could be, and no one else bothered to check either.
It's much, much, easier to run an LLM than to use a static or dynamic analyzer correctly. At the very least, the UI has improved massively with "AI".
And even if that's true (and it frequently is!), detractors usually miss the underlying and immense impact of "sleeping dad capability" equivalent artificial systems.
Horizontally scaling "sleeping dads" takes decades, but inference capacity for a sleeping dad equivalent model can be scaled instantly, assuming one has the hardware capacity for it. The world isn't really ready for a contraction of skill dissemination going from decades to minutes.
There’s the classic case of the Debian OpenSSL vulnerability, where technically illegal but practically secure code was turned into superficially correct but fundamentally insecure code in an attempt to fix a bug identified by a (dynamic, in this case) analyzer.
Most likely no one ran them, given the developer culture.
I replicated this experiment on several production codebases and got several crits. Lots of dupes, lots of false positives, lots of bugs that weren't actually exploitable, lots of accepted/ known risks. But also, crits!
I think this really needs to be part of the message. It's great that Claude found a vulnerability that apparently has been overlooked for a long time. It's even proper for Anthropic to tout the find. But we should all ask about the signal to nose ratio that would have been part of the process. If it had been mostly successes... that would be worth touting, too. But I expect there was more noise than they'd care to admit.
Or put another way, the context matters.
I have to agree with you. We don’t talk nearly enough about the real signal to nose ratio.
(Sorry. I couldn’t resist lol)
Every time I read these titles, I wonder if people are for some reason pushing the narrative that Claude is way smarter than it really is, or if I'm using it wrong.
They want me to code AI-first, and the amount of hallucinations and weird bugs and inconsistencies that Claude produces is massive.
Lots of code that it pushes would NOT have passed a human/human code review 6 months ago.
Apart from obvious PR (if you needed to lean into the AI wave a bit, this of all places is it) and fanboyism, which is just part of human nature, why can't both be true?
It can properly excel in some things while being less than helpful in others. These are computers from the beginning, 1000x rehashed and now with an extra twist.
It's always the inconsistencies which amaze me, from the article:
> I have so many bugs in the Linux kernel that I can’t report because I haven’t validated them yet
You have "so many?" Are they uncountable for some reason? You "haven't validated" them? How long does that take?
> found a total of five Linux vulnerabilities
And how much did it cost you in compute time to find those 5?
These articles are always fantastically light on the details which would make their case for them. Instead it's always breathless prognostication. I'm deeply suspicious of this.
>And how much did it cost you in compute time to find those 5?
This is the last thing I'd worry about if the bug is serious in any way. You have attackers like nation states that will have huge budgets to rip your software apart with AI and exploit your users.
Also there have been a number of detailed articles about AI security findings recently.
I'd be interested in how it compares (in terms of time, money and false positives) with fuzzing.
You are suspicious because you probably haven't worked anywhere that's AI-first. Anyone that's worked at a modern tech company will find this absolutely believable.
Like what, you expect Nicholas to test each vuln when he has more important work to do (i.e. his actual job)?
What models are you using, on what type of codebases, with what tools?
Not OC, but I tried OpenCode with Gemini, Claude and Kimi, and all of them were completely unable to solve any non-trivial problems which are not easily solved with some existing algorithm.
I understand how people use those tools if all they do is build CRUD endpoints and UIs for those endpoints (which is admittedly what most programmers probably do for their job). But for anything that requires any sort of problem solving skills, I don't understand how people use them. I feel like I live in a completely different world from some of the people who push agentic coding.
I'm using Claude Code with the latest version of Sonnet, using the official VS Code extension.
At my company they set it up that way.
> "given enough eyeballs, all bugs are shallow"
Time to update that:
"given 1 million tokens context window, all bugs are shallow"
Already happened: https://arxiv.org/abs/2407.08708
more like some bugs are shallow and others are pieced together false-positives from an automated tool reliable in its unreliability.
..and three months to review the false positives
This is always overlooked. AI stories sound like "with the right attitude, you too can win $10M in the lottery, like this man just did"
Running an LLM on 1000 functions produces 10000 reports (these numbers are accurate because I just generated them) — of course only the lottery winners who pulled the actually correct report from the bag will write an article in the Evening Post
Those 3 letter agencies are going to see their stash of 0-days dwindle so hard.
Their stash will explode. LLMs can do this on binaries just the same, and there's a lot more closed than open source SW out there.
And they also have a nearly infinite budget to rent AI time to do this type of work.
Interestingly, I think 3 or 4 out of the 5 bugs would have been prevented / mitigated quite well using https://github.com/anthraxx/linux-hardened patches...
(disabled io_uring, would have crashed the kernel on UAF, and made exploitation of the heap overflow very unreliable)
Related work from our security lab:
Stream of vulnerabilities discovered using security agents (23 so far this year): https://securitylab.github.com/ai-agents/
Taskflow harness to run (on your own terms): https://github.blog/security/how-to-scan-for-vulnerabilities...
This does sound great, but the cost of tokens will prevent most companies from using agents to secure their code.
Tokens are insanely cheap at the moment. Through OpenRouter a message to Sonnet costs about $0.001 cents, or using Devstral 2512 it's about $0.0001. An extended coding session/feature expansion will cost me about $5 in credits. Split up your codebase so you don't have to feed all of it into the LLM at once and it's very reasonable.
It cost me ~$750 to find a tricky privilege escalation bug in a complex codebase where I knew the rough specs but didn't have the exploit. There are certainly still many other bugs like that in the codebase, and it would cost $100k-$1MM to explore the rest of the system that deeply with models at or above the capability of Opus 4.6.
It's definitely possible to do a basic pass for much less (I do this with autopen.dev), but it is still very expensive to exhaustively find the harder vulnerabilities.
>$0.001 cents
$0.001 (1/10 of a cent) or 0.001 cents (1/1000 of a cent, or $0.00001)?
Agentic tasks use up a huge amount of tokens compared to simple chatting. Every elementary interaction the model has with the outside world (even while doing something as simple as reading code from a large codebase) is a separate "chat" message and "response", and these add up very quickly.
You’d have to ignore the massive investor ROI expectations or somehow have no capability to look past “at the moment”.
I don't buy it.
Inference cost has dropped 300x in 3 years, no reason to think this won't keep happening with improvements on models, agent architecture and hardware.
Also, too many people are fixated on American models when Chinese ones deliver similar quality, often at a fraction of the cost.
From my tests, the "personality" of an LLM, its tendency to stick to prompts and not derail, far outweighs the low-single-digit % delta in benchmark performance.
Not to mention, different LLMs perform better at different tasks, and they are all particularly sensitive to prompts and instructions.
“Thing x happened in the past, therefore it will continue to happen in the future” is perhaps one of the most, if not the most pervasive human-created fallacies anywhere.
Tokens aren't more expensive than highly trained meatbags today. There's no way they'll be more expensive "tomorrow"...
I'm thinking about how much money Anthropic etc are making from intelligence services who are running Opus 4.6 on ultra high settings 24 hours a day to find these kinds of exploits and take advantage of them before others do.
Expensive for me and you, but peanuts for a nation state.
I'm interested in the implications for the open source movement, specifically around security concerns. Anyone know if there has been a study about how well Claude Code works on closed source (but decompiled) code?
I’ve had Claude Code diagnose bugs in a compiler we wrote together by using gdb and objdump to examine binaries it produces. We don’t have DWARF support yet so it is just examining the binary. That’s not security work, but it’s adjacent to the sorts of skills you’re talking about. The binaries are way smaller than real programs, though.
> Claude Code works on closed source (but decompiled) source
Very likely not nearly as well, unless there are many open source libraries in use and/or the language+patterns used are extremely popular. The really huge win for something like the Linux kernel and other popular OSS is that the source appears in the training data, a lot. And many versions. So providing the source again and saying "find X" is primarily bringing into focus things it's already seen during training, with little novelty beyond the updates that happened after knowledge cutoff.
Giving it a closed source project containing a lot of novel code means it only has the language and its "intuition" to work from, which is a far greater ask.
I’m not a security researcher, but I know a few and I think universally they’d disagree with this take.
The LLMs know about every previously disclosed security vulnerability class and can use that to pattern match. And they can do it against compiled, and in some cases obfuscated, code as easily as source.
I think the security engineers out there are terrified that the balance of power has shifted too far toward finding closed source vulnerabilities, because getting patches deployed will still take so long. Not that the LLMs are in some way hampered by novel codebases.
Definitely not my wheelhouse, but I would expect it to be considerably worse.
Simply because the source code contains names that were intended to communicate meaning in a way that the LLM is specifically trained to understand (i.e., identifier names chosen from human natural language, chosen to scan well when interspersed into the programming language grammar, plus comments etc.). At least if debugging information has been scrubbed, anyway (and the comments definitely are). Ghidra et al. can only do so much to provide the kind of semantic content that an LLM is looking for.
I've cut-and-pasted some assembly code into the free version of ChatGPT to reverse engineer some old binaries and its ability to find meaning was just scary.
It would be much more interesting/efficient if the LLM had tokens for machine instructions, so extracting instructions would be done at the tokenizing phase, not by calling objdump.
But I guess I'm not the first one to have that idea. Any references to research papers would be welcome.
As an experiment, I just now took a random section of a few hundred bytes (as a hexdump) from the /bin/ls executable and pasted them into ChatGPT.
I don't know if it's correct, but it speculated that it's part of a command line processor: https://chatgpt.com/share/69d19e4f-ff2c-83e8-bc55-3f7f5207c3...
Now imagine how much more it could have derived if I had given it the full executable, with all the strings, pointers to those strings and whatnot.
I've done some minor reverse engineering of old test-equipment binaries in the past and LLMs are incredible at figuring out what the code is doing, way better than the usual workflow of decompiling with Ghidra.
Do not expect so many more reports. Expect so many more attacks ;)
I wonder about the "video running in the background" during the Q&A of the talk:
https://youtu.be/1sd26pWhfmg?is=XLJX9gg0Zm1BKl_5
Did he write an exploit for the NFS bug that runs via network over USB? Seems to be plugging in a SoC over USB...?
An explanation of the Claude Opus 4.6 linux kernel security findings as presented by Nicholas Carlini at unpromptedcon.
https://www.youtube.com/watch?v=1sd26pWhfmg is the presentation itself. The prompts are trivial; the bug (and others) looks real and well-explained - I'm still skeptical but this looks a lot more real/useful than anything a year ago even suggested was possible...
Supposedly humans have become “100x”™ more productive with these AI tools, but nowhere to be seen are the benefits for the wielders of said tools. Is your salary 100x higher? Are you able to spend more time with your family/friends instead of at the office? Why are we still putting up with these outdated work practices if LLMs have made everybody so much more productive?
Are you aware of how productivity has increased over the past century in general? That didn't lead to 100x wage increases or more free time. Labour is a market commodity and follows market rules. Increased productivity means more gets done in less time. It doesn't mean you spend less time working
And with AI generating vulnerabilities at an accelerated pace, this business is only getting bigger. Welcome to the new antivirus!
There will always be more bugs than we can fix. AI can patch as well, but if your system is difficult to test and doesn't have rigorous validation you will likely get an unacceptable amount of regression.
I hope performance and bloat are next up for the LLMs to try and improve.
Especially on the perf side, I would wager LLMs can go from the meat sacks' "whatever works" to "how do I solve this with the best available algorithm and architecture (that also follows some best practices)?"
Making public that AI is capable of finding this kind of vulnerability is a big problem. In this case it's nice that the vulnerability was closed before publishing, but if a cracker had found it the result would have been extremely different. This kind of news only opens eyes for the crackers.
This isn't surprising. What is not mentioned is that Claude Code also found one thousand false positive bugs, which developers spent three months to rule out.
That's not what is happening right now. The bugs are often filtered later by LLMs themselves: if the second pipeline can't reproduce the crash / violation / exploit in any way, the false positives are usually evicted before ever reaching human scrutiny. Checking whether a real vulnerability can be triggered is a trivial task compared to finding one, so this second pipeline is an almost perfect filter: if a report passes it, it is almost certainly a real bug, and very few real bugs fail to pass it.
It does not matter how much LLMs advance, people ideologically against them will always deny they have an enormous amount of usefulness. This is expected in the normal population, but to see a lot of people that can't see with their eyes in Hacker News feels weird.
> Checking if a real vulnerability can be triggered is a trivial task compared to finding one
Have you ever tried to write PoC for any CVE?
This statement is wrong. Sometimes a bug may exist but be impossible to trigger/exploit. So it is not trivial at all.
I’ve been around long enough to remember people saying that VMs are a useless waste of resources with dubious claims about isolation, the cloud is just someone else’s computer, containers are pointless, and now it’s AI. There is an astonishing amount of conservatism in the hacker scene.
> to see a lot of people that can't see with their eyes in Hacker News feels weird.
Turns out the average commenter here is not, in fact, a "hacker".
> This is expected in the normal population
A lot of people, regardless of technical ability, have strong opinions about what LLMs are/are-not. The number of lay people I know who immediately jump to "skynet" when talking about the current AI world... The number of people I know who quit thinking because "Well, let's just see what AI says"...
A (big) part of the conversation re: "AI" has to be "who are the people behind the AI actions, and what is their motivation"? Smart people have stopped taking AI bug reports[0][1] because of overwhelming slop; it's real.
[0] https://www.theregister.com/2025/05/07/curl_ai_bug_reports/
[1] https://gist.github.com/bagder/07f7581f6e3d78ef37dfbfc81fd1d...
Can we study this second pipeline? Is it open so we can understand how it works? Did not find any hints about it in the article, unfortunately.
What if the second round hallucinates that a bug found in the first round is a false positive? Would we ever know?
> It does not matter how much LLMs advance, people ideologically against them will always deny they have an enormous amount of usefulness.
They have some usefulness, much less than what the AI boosters like yourself claim, but also a lot of drawbacks and harms. Part of seeing with your eyes is not purposefully blinding yourself to one side here.
they are useful to those that enjoy wasting time.
>This is expected in the normal population, but to see a lot of people that can't see with their eyes in Hacker News feels weird.
You are replying to an account created in less than 60 days.
> What is not mentioned is that Claude Code also found one thousand false positive bugs, which developers spent three months to rule out.
Source? I haven't seen this anywhere.
In my experience, false positive rate on vulnerabilities with Claude Opus 4.6 is well below 20%.
To the issue of AI-submitted patches being more of a burden than a boon, many projects have decided to stop accepting AI-generated contributions:
https://blog.devgenius.io/open-source-projects-are-now-banni...
These are just a few examples. There are more that google can supply.
Same. Codex and Claude Code on the latest models are really good at finding bugs, and really good at fixing them in my experience. Much better than 50% in the latter case and much faster than I am.
Source: """AI is bad"""
In my experience, the issue has been likelihood of exploitation or issue severity. Claude gets it wrong almost all the time.
A threat model matters and some risks are accepted. Good luck convincing an LLM of that fact
In TFA:
The article doesn't say they found a bunch of false positives. It says they have a huge backlog that they still need to test:
"I have so many bugs in the Linux kernel that I can’t report because I haven’t validated them yet…"
Static/dynamic analysis tools find vulnerabilities all the time. Almost all projects of a certain size have a large backlog of known issues from these boring scanners. The issue is sorting through them all and triaging them. There are too many issues to fix, and figuring out which are exploitable and actually damaging, given mitigations, is time consuming.
Am I impressed Claude found an old bug? Sort of... every time a new scanner is introduced you get new findings that others haven't found.
Static analyzers find large numbers of hypothetical bugs, of which only a small subset are actionable, and the work to resolve which are actionable and which are e.g. "a memcpy into an 8 byte buffer whose input was previously clamped to 8 bytes or less" is so high that analyzers have little impact at scale. I don't know off the top of my head many vulnerability researchers who take pure static analysis tools seriously.
Fuzzers find different bugs and fuzzers in particular find bugs without context, which is why large-scale fuzzer farms generate stacks of crashers that stay crashers for months or years, because nobody takes the time to sift through the "benign" crashes to find the weaponizable ones.
LLM agents function differently than either method. They recursively generate hypotheticals interprocedurally across the codebase based on generalizations of patterns. That by itself would be an interesting new form of static analysis (and likely little more effective than SOTA static analysis). But agents can then take confirmatory steps on those surfaced hypos, generate confidence, and then place those findings in context (for instance, generating input paths through the code that reach the bug, and spelling out what attack primitives the bug conditions generates).
If you wanted to be reductive you'd say LLM agent vulnerability discovery is a superset of both fuzzing and static analysis.
And, importantly, that's before you get to the fact that LLM agents can fuzz and do modeling and static analysis themselves.
The lesson here shouldn't be that Claude Code is useless, but that it's a powerful tool in the hands of the right people.
Unfortunately, also in the hands of the __wrong__ people.
Maybe even more so, because who is going to wade through all those false positives? A bad actor is maybe more likely to do that.
I'm growing allergic to the hype train and the slop. I've watched real-life talks about people that sent some prompt to Claude Code and then proudly present something mediocre that they didn't make themselves to a whole audience as if they'd invented the warm water, and that just makes me weary.
But at the same time, it has transformed my work from writing every bit of code myself to writing the cool and complex things while giving directions to a helper to sort out the boring grunt work, and it's amazingly capable at that. It _is_ a hugely powerful tool.
But haters only see red, and lovers see everything through pink glasses.
The lesson or the hype mantra?
The same could be said about a Roulette wheel set before a seasoned gambler
Everything changed in the past 6 months and coding LLMs went from being OK-ish to insanely good. People also got better at using them.
Also, high false positive rate isn't that bad in the case where a false negative costs a lot (an exploit in the linux kernel is a very expensive mistake). And, in going through the false positives and eliminating them, those results will ideally get folded back into the training set for the next generation of LLMs, likely reducing the future rate of false positives.
> Everything changed in the past 6 months and coding LLMs went from being OK-ish to insanely good. People also got better at using them.
I hear this literally every 6 months :)
This is not how first-party vulnerability research with LLMs goes; they are incredibly valuable versus all prior tooling at triage and at producing only high-quality bugs, because they can be instructed to produce a PoC and prove that the bug is reachable. It's traditional research methods (fuzzing, static analysis, etc.) that are more prone to false-positive overload.
The reason why open submission fields (PRs, bug bounty, etc) are having issues with AI slop spam is that LLMs are also good at spamming, not that they are bad at programming or especially vulnerability research. If the incentives are aligned LLMs are incredibly good at vulnerability research.
Okay, so anti-AI people are just making shit up now. Got it.
According to Willy Tarreau[0] and Greg Kroah-Hartman[1], this trend has recently significantly reversed, at least from the reports they've been seeing on the Linux kernel. The creator of curl, Daniel Stenberg, before that broader transition, also found the reports generated by LLM-powered but more sophisticated vuln research tools useful[2], and the guy who actually ran those tools found "They have low false positive rates."[3]
Additionally, the guy who found the vuln discussed in the TFA made no mention in his talk of what the false positive rate was, or that he had to sift through the reports because they were mostly slop, or whether he was doing it out of courtesy. And he said he found only several hundred, iirc, not "thousands." All he said was:
"I have so many bugs in the Linux kernel that I can’t report because I haven’t validated them yet… I’m not going to send [the Linux kernel maintainers] potential slop, but this means I now have several hundred crashes that they haven’t seen because I haven’t had time to check them." (TFA)
He quite evidently didn't have to sift through thousands, or spend months, to find this one, either.
[0]: https://lwn.net/Articles/1065620/ [1]: https://www.theregister.com/2026/03/26/greg_kroahhartman_ai_... [2]: https://simonwillison.net/2025/Oct/2/curl/p [3]: https://joshua.hu/llm-engineer-review-sast-security-ai-tools...
Couldn't you just make it write a PoC?
Yes, you can. I strongly encourage people skeptical about this, and who know at a high-level how this kind of exploitation works, to just try it. Have Claude or Codex (they have different strengths at this kind of work) set up a testing harness with Firecracker or QEMU, and then work through having it build an exploit.
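On the validation side, part of the work is mechanical. A minimal sketch of a crash-classification step, assuming a Linux host; a real kernel PoC would of course run inside the QEMU/Firecracker guest, and `validate_poc` plus the stand-in commands here are invented for illustration:

```python
import signal
import subprocess
import sys

def validate_poc(cmd, timeout=10):
    """Run a candidate proof-of-concept and classify the outcome:
    the name of the fatal signal if it genuinely crashed the target,
    "TIMEOUT" if it hung, or None for a clean exit (a likely false
    positive that should not be reported upstream)."""
    try:
        proc = subprocess.run(cmd, capture_output=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return "TIMEOUT"
    if proc.returncode < 0:  # child was killed by a signal
        return signal.Signals(-proc.returncode).name
    return None

# Stand-ins for a real PoC: one kills itself with SIGSEGV,
# the other exits cleanly.
crasher = [sys.executable, "-c",
           "import os, signal; os.kill(os.getpid(), signal.SIGSEGV)"]
benign = [sys.executable, "-c", "print('no crash')"]
```

The judgment calls (is the crash security-relevant, is it reachable by an attacker) still need a human or a capable model, but separating "reproduces" from "doesn't" is cheap to automate.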
Still have to validate it.
What is with the negativity against AI on YC? Can anyone point a finger of why this anti take is so prominent? We're living through the most revolutionary moment of software since its inception, and the main thing that gets consistently upvoted is negativity, FUD, "it doesn't work in this case," or "it's all slop."
> Can anyone point a finger of why this anti take is so prominent?
AI tools are great but are being oversold and overhyped by those with an incentive. So there is a continuous drumbeat of "AI will do all the code for you!", "Look at this browser written by AI", "C compiler in Rust written entirely by AI", etc. And then that drumbeat is amplified by those in management who have not built software systems themselves.
What happened to the AI-generated "C compiler in Rust"? Or the browser written by AI? They remain a steaming pile of almost-working code. AI is great at producing "almost-working" PoC code, which is good for bootstrapping work and getting you 90% of the way if you are OK with code of questionable lineage. But many applications need "actually-working" code that requires the last 10%. So some in this forum who have been in the trenches building large "actually working" software systems, and who also use AI tools daily and know their limitations, are injecting some realism into the debate.
I think the anti-AI stance has been reversing on HN as tooling improves and people try it. It’s only been a little over a year since Claude Code was released, and 3 or 4 months since the models got really capable. People need time to adjust, even if I would expect devs to be more up-to-date than most.
People’s willingness to argue about technology they’ve barely used is always bewildering to me though.
Not speaking for myself, but the "you won't have a job soon" narrative puts people off.
On the other hand, some bugs take three months to find. So this still seems like a win.
You know this how?
From a recent front page article that mentioned the previous slop problem:
> Now most of these reports are correct, to the point that we had to bring in more maintainers to help us.
https://news.ycombinator.com/item?id=47611921
Every single post here these days. “Startup founder of Communality.ai says ai good for people” and then the comments are AI bros declaring that all work can end, the good times are here at last
Thank you for your kind comment. I recommend you watch the actual talk, and then understand what exploiting RCEs in things like the Linux kernel at such a scale that defenders can no longer keep up actually means. The latter is their claim, not mine.
Also realize that, unlike a security researcher, an attacker doesn't necessarily need to review the model out carefully to filter out the slop before a bug submission. They mostly just need to run the shit.
More like, if you pay a fee to use a service, you can find the bombs already hidden somewhere in your premises.
The title is a little misleading.
It was Opus 4.6 (the model). You could discover this with some other coding agent harness.
The other thing that bugs me (and frankly I don't have the time to try it out myself) is that they did not compare to see whether the same bug would have been found with GPT 5.4, or perhaps even an open-source model.
Without that, and for the reasons I posted above, while I am sure this is not the intention, the post reads like an ad for Claude Code.
OP here.
I don't understand this critique. Carlini did use Claude Code directly. Claude Code used the Claude Opus 4.6 model, but I don't know why you'd consider it inaccurate to say Claude Code found it.
GPT 5.4 might be capable of finding it as well, but the article never made any claims about whether non-Anthropic models could find it.
If I wrote about achieving 10k QPS with a Go server, is the article misleading unless I enumerate every other technology that could have achieved the same thing?
Also, he did compare with earlier versions that, before 4.5, were dramatically worse at finding the same problems. There's even a graph. That seems to pretty solidly support the idea that this is "gain of function" as it were...
No, the title is correct and you are misreading or didn't read. It was found with Claude Code; that's the quote. This isn't a model eval, it's an Anthropic employee talking about Claude Code. So comparing to other models isn't a thing to reasonably expect.
> You could discover this with some other coding agent harness.
And surely that would be relevant if they were using a different harness.
> Nicholas has found hundreds more potential bugs in the Linux kernel, but the bottleneck to fixing them is the manual step of humans sorting through all of Claude’s findings
No, the problem is sorting out thousands of false positives from Claude Code's reports. 5 out of 1000+ reports to be valid is statistically worse than running a fuzzer on the codebase.
Just sayin'
> 5 out of 1000+ reports to be valid is statistically worse than running a fuzzer on the codebase.
Carlini said "hundreds" of crashes, not 1000+.
It's not that only 5 were true positives and the rest were false positives. 5 were true positives and Carlini doesn't have bandwidth to review the rest. Presumably he's reviewed more than 5 and some were not worth reporting, but we don't know what that number is. It's almost certainly not hundreds.
Keep in mind that Carlini's not a dedicated security engineer for Linux. He's seeing what's possible with LLMs and his team is simultaneously exploring the Linux kernel, Firefox,[0] GhostScript, OpenSC,[1] and probably lots of others that they can't disclose because they're not yet fixed.
[0] https://www.anthropic.com/news/mozilla-firefox-security
[1] https://red.anthropic.com/2026/zero-days/
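For what it's worth, the standard way to shrink a review queue like that is the same dedup trick fuzzing pipelines such as syzbot rely on: bucket crashes by their top stack frames, so a human only reviews one representative per distinct-looking bug. A toy sketch, with the function name and kernel frame names invented for illustration:

```python
from collections import defaultdict

def bucket_by_stack(reports, top_frames=3):
    """Group crash reports whose top stack frames match, so a human
    reviews one representative per distinct-looking bug instead of
    every individual report."""
    buckets = defaultdict(list)
    for report in reports:
        key = tuple(report["stack"][:top_frames])
        buckets[key].append(report)
    return buckets

# Hypothetical crash reports; frame names are made up.
reports = [
    {"id": 1, "stack": ["kfree", "ext4_evict_inode", "iput"]},
    {"id": 2, "stack": ["kfree", "ext4_evict_inode", "iput"]},
    {"id": 3, "stack": ["memcpy", "tcp_rcv_established", "ip_rcv"]},
]
buckets = bucket_by_stack(reports)
```

Here three raw reports collapse into two buckets; at "several hundred crashes," that kind of collapse is often the difference between a reviewable pile and an unreviewable one.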
> On the kernel security list we've seen a huge bump of reports. We were between 2 and 3 per week maybe two years ago, then reached probably 10 a week over the last year with the only difference being only AI slop, and now since the beginning of the year we're around 5-10 per day depending on the days (fridays and tuesdays seem the worst). Now most of these reports are correct, to the point that we had to bring in more maintainers to help us. ... Also it's interesting to keep thinking that these bugs are within reach from criminals so they deserve to get fixed.
https://lwn.net/Articles/1065620/
> https://syzbot.org/upstream
I stand corrected.
But on the other hand, Claude might introduce more vulnerabilities than it discovers.
Code review is the real deal for these models. This area seems largely underappreciated to me. Especially for things like C++, where static analysis tools have traditionally generated too many false positives to be useful, the LLMs seem especially good. I'm no black hat but have found similarly old bugs at my own place. Even if shit is hallucinated half the time, it still pays off when it finds that really nasty bug.
Instead, people seem to be infatuated with vibe coding technical debt at scale.
> Code review is the real deal for these models.
Yea, that is what I have been saying as well...
>Instead, people seem to be infatuated with vibe coding technical debt at scale.
Don't blame them. That is what AI marketing pushes. And people are sheep to marketing..
I understand why AI companies don't want to promote it: they understand that the lowest-common-denominator majority of their client base won't see code review as a critical part of their business. If LLMs are marketed as best suited for code review, then they probably cannot justify the investments they are getting...
Real deal in this case or not, that doesn't necessarily mean Claude Code usage is a net positive for software security overall. In fact, it is likely the opposite.
That will hurt some heavy Claude Code users' feelings, but that's a different thing.
Guys please read the article before commenting...
A developer using Claude Code found this bug. Claude is a tool. It is used by developers. It should not sign commits. Neovim never tried to sign commits with me, nor Zed.
"Should not"? Is that your new law? The non-agentic "Neovim and Zed never tried to sign commits for me," therefore no tool, ever, no matter how advanced, is allowed to sign a commit?
Did it ever occur to you that for whatever reason you just might not be cut out for the software treadmill?