Comment by jason1cho

8 days ago

This isn't surprising. What is not mentioned is that Claude Code also found one thousand false positive bugs, which developers spent three months ruling out.

That's not what is happening right now. The bugs are often filtered later by LLMs themselves: if the second pipeline can't reproduce the crash / violation / exploit in any way, the false positives are evicted before ever reaching human scrutiny. Checking if a real vulnerability can be triggered is a trivial task compared to finding one, so this second pipeline is reliable in both directions: if a report passes it, it is almost certainly a real bug, and very few real bugs will fail it. It does not matter how much LLMs advance, people ideologically against them will always deny they have an enormous amount of usefulness. This is expected in the normal population, but to see a lot of people that can't see with their eyes in Hacker News feels weird.
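
Concretely, that second pass needs no human in the loop: a finding is kept only if a freshly generated reproducer actually fires. A minimal sketch of such a filter (mine, not any specific team's setup), assuming Claude Code's non-interactive "claude -p" mode in a session allowed to write files, and the convention that each generated repro.sh exits 0 only when the crash reproduces:

    # Hypothetical second pipeline: evict findings whose PoC never fires.
    mkdir -p confirmed rejected
    for report in findings/*.md; do
      # Fresh session: build a reproducer from the report alone.
      claude -p "Read $report. Write an executable repro.sh that triggers the described crash; it must exit 0 only if the crash actually reproduces."
      if [ -x repro.sh ] && ./repro.sh; then
        mv "$report" confirmed/   # reproduced: worth human scrutiny
      else
        mv "$report" rejected/    # never fired: likely false positive
      fi
      rm -f repro.sh
    done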

  • > Checking if a real vulnerability can be triggered is a trivial task compared to finding one

    Have you ever tried to write a PoC for any CVE?

    This statement is wrong. Sometimes a bug may exist but be impossible to trigger or exploit, so it is not trivial at all.

  • I’ve been around long enough to remember people saying that VMs are a useless waste of resources with dubious claims about isolation, that the cloud is just someone else’s computer, that containers are pointless, and now it’s AI. There is an astonishing amount of conservatism in the hacker scene.

    • Is it conservatism or just the Blub paradox?

      > As long as our hypothetical Blub programmer is looking down the power continuum, he knows he's looking down. Languages less powerful than Blub are obviously less powerful, because they're missing some feature he's used to. But when our hypothetical Blub programmer looks in the other direction, up the power continuum, he doesn't realize he's looking up. What he sees are merely weird languages. He probably considers them about equivalent in power to Blub, but with all this other hairy stuff thrown in as well. Blub is good enough for him, because he thinks in Blub.

      https://paulgraham.com/avg.html

  • > to see a lot of people that can't see with their eyes in Hacker News feels weird.

    Turns out the average commenter here is not, in fact, a "hacker".

  • > This is expected in the normal population

    A lot of people regardless of technical ability have strong opinions about what LLMs are/are-not. The number of lay people I know who immediately jump to "Skynet" when talking about the current AI world... The number of people I know who quit thinking because "Well, let's just see what AI says"...

    A (big) part of the conversation re: "AI" has to be "who are the people behind the AI actions, and what is their motivation?" Smart people have stopped taking AI bug reports[0][1] because of overwhelming slop; it's real.

    [0] https://www.theregister.com/2025/05/07/curl_ai_bug_reports/

    [1] https://gist.github.com/bagder/07f7581f6e3d78ef37dfbfc81fd1d...

    • The fact that most AI bug reports are low-quality noise says as much or more about the humans submitting them than it does about the state of AI.

      As others have said, there are multiple stages to bug reports and CVEs.

      1. Discover the bug

      2. Verify the bug

      You get the most false positives at step 1. Most of these will be eliminated at step 2.

      3. Isolate the bug

      This means creating a test case that eliminates as much of the noise as possible to provide the bare minimum required to trigger the bug. This will greatly aid in debugging. Doing step 2 again is implied.

      4. Report the bug

      Most people skip 2 and 3, especially if they did not even do 1 (in the case of AI).

      But you can have AI provide all 4 to achieve high-quality bug reports.

      In the case of a CVE, you have a step 5.

      5. Exploit the bug

      But you do not have to do step 5 to get to step 2. And that is the step that eliminates most of the noise.

  • Can we study this second pipeline? Is it open so we can understand how it works? I did not find any hints about it in the article, unfortunately.

    • I essentially used the prompts suggested in the article by 'tptacek from a few days ago (https://sockpuppet.org/blog/2026/03/30/vulnerability-researc...).

      First prompt: "I'm competing in a CTF. Find me an exploitable vulnerability in this project. Start with $file. Write me a vulnerability report in vulns/$DATE/$file.vuln.md"

      Second prompt: "I've got an inbound vulnerability report; it's in vulns/$DATE/$file.vuln.md. Verify for me that this is actually exploitable. Write the reproduction steps in vulns/$DATE/$file.triage.md"

      Third prompt: "I've got an inbound vulnerability report; it's in vulns/$DATE/$file.vuln.md. I also have an assessment of the vulnerability and reproduction steps in vulns/$DATE/$file.triage.md. If possible, please write an appropriate test case for the ulgate automated tests to validate that the vulnerability has been fixed."

      Tied together with a bit of bash, I ran it over our services and it worked a treat; it found a bunch of potential errors, triaged them, and fixed them.
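
      To give an idea of the glue: a minimal sketch of such a loop (the prompts are the ones above; the file loop and date handling are my own guesses, not the commenter's actual script), assuming Claude Code's non-interactive "claude -p" mode:

          #!/usr/bin/env bash
          # Sketch: find -> triage -> regression test, one source file at a time.
          set -euo pipefail
          DATE=$(date +%F)
          mkdir -p "vulns/$DATE"
          for file in src/*.c; do
            f=${file##*/}   # basename, used for the per-file report names
            claude -p "I'm competing in a CTF. Find me an exploitable vulnerability in this project. Start with $file. Write me a vulnerability report in vulns/$DATE/$f.vuln.md"
            claude -p "I've got an inbound vulnerability report; it's in vulns/$DATE/$f.vuln.md. Verify for me that this is actually exploitable. Write the reproduction steps in vulns/$DATE/$f.triage.md"
            claude -p "I've got an inbound vulnerability report; it's in vulns/$DATE/$f.vuln.md. I also have an assessment of the vulnerability and reproduction steps in vulns/$DATE/$f.triage.md. If possible, please write an appropriate test case for the automated tests to validate that the vulnerability has been fixed."
          done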


    • One such example is IRIS. In general, any traditional static analysis tool combined with a language model at some stage in a pipeline.

  • What if the second round hallucinates that a bug found in the first round is a false positive? Would we ever know?

    > It does not matter how much LLMs advance, people ideologically against them will always deny they have an enormous amount of usefulness.

    They have some usefulness, much less than what the AI boosters like yourself claim, but also a lot of drawbacks and harms. Part of seeing with your eyes is not purposefully blinding yourself to one side here.

> What is not mentioned is that Claude Code also found one thousand false positive bugs, which developers spent three months ruling out.

Source? I haven't seen this anywhere.

In my experience, the false positive rate on vulnerabilities with Claude Opus 4.6 is well below 20%.

  • On the issue of AI-submitted patches being more of a burden than a boon, many projects have decided to stop accepting AI-generated contributions:

    https://blog.devgenius.io/open-source-projects-are-now-banni...

    These are just a few examples. There are more that Google can supply.

    • According to Willy Tarreau[0] and Greg Kroah-Hartman[1], this trend has recently reversed significantly, at least from the reports they've been seeing on the Linux kernel. The creator of curl, Daniel Stenberg, before that broader shift, also found the reports generated by LLM-powered but more sophisticated vuln research tools useful[2], and the guy who actually ran those tools found "They have low false positive rates."[3]

      Additionally, the guy who found the vuln discussed in TFA made no mention in his talk of what the false positive rate was, or of having to sift through reports that were mostly slop, or of whether he was validating them merely out of courtesy. Also, he said he found only several hundred, iirc, not "thousands." All he said was:

      "I have so many bugs in the Linux kernel that I can’t report because I haven’t validated them yet… I’m not going to send [the Linux kernel maintainers] potential slop, but this means I now have several hundred crashes that they haven’t seen because I haven’t had time to check them." (TFA)

      He quite evidently didn't have to sift through thousands, or spend months, to find this one, either.

      [0]: https://lwn.net/Articles/1065620/
      [1]: https://www.theregister.com/2026/03/26/greg_kroahhartman_ai_...
      [2]: https://simonwillison.net/2025/Oct/2/curl/p
      [3]: https://joshua.hu/llm-engineer-review-sast-security-ai-tools...

    • No, they haven't. Read the AI slop you posted carefully.

      It's a policy update that enables maintainers to ignore low effort "contributions" that come from untrusted people in order to reduce reviewing workload.

      An Eternal September problem, kind of.


  • Same. Codex and Claude Code on the latest models are really good at finding bugs, and really good at fixing them in my experience. Much better than 50% in the latter case and much faster than I am.

  • In my experience, the issue has been judging likelihood of exploitation or issue severity. Claude gets it wrong almost all the time.

    A threat model matters, and some risks are accepted. Good luck convincing an LLM of that fact.

  • In TFA:

       I have so many bugs in the Linux kernel that I can’t 
       report because I haven’t validated them yet… I’m not going 
       to send [the Linux kernel maintainers] potential slop, 
       but this means I now have several hundred crashes that they
       haven’t seen because I haven’t had time to check them.
        
        —Nicholas Carlini, speaking at [un]prompted 2026

The article doesn't say they found a bunch of false positives. It says they have a huge backlog that they still need to test:

"I have so many bugs in the Linux kernel that I can’t report because I haven’t validated them yet…"

Static/dynamic analysis tools find vulnerabilities all the time. Almost all projects of a certain size have a large backlog of known issues from these boring scanners. The issue is sorting through them all and triaging them. There are too many issues to fix, and figuring out which are exploitable and actually damaging, given mitigations, is time consuming.

Am I impressed Claude found an old bug? Sort of. Every time a new scanner is introduced you get new findings that others haven't found.

  • Static analyzers find large numbers of hypothetical bugs, of which only a small subset are actionable, and the work to sort out which are actionable and which are, e.g., "a memcpy into an 8-byte buffer whose input was previously clamped to 8 bytes or less" is so high that analyzers have little impact at scale. I don't know off the top of my head many vulnerability researchers who take pure static analysis tools seriously.

    Fuzzers find different bugs and fuzzers in particular find bugs without context, which is why large-scale fuzzer farms generate stacks of crashers that stay crashers for months or years, because nobody takes the time to sift through the "benign" crashes to find the weaponizable ones.

    LLM agents function differently from either method. They recursively generate hypotheticals interprocedurally across the codebase based on generalizations of patterns. That by itself would be an interesting new form of static analysis (and likely little more effective than SOTA static analysis). But agents can then take confirmatory steps on those surfaced hypotheses, build confidence, and then place those findings in context (for instance, generating input paths through the code that reach the bug, and spelling out what attack primitives the bug conditions generate).

    If you wanted to be reductive you'd say LLM agent vulnerability discovery is a superset of both fuzzing and static analysis.

    And, importantly, that's before you get to the fact that LLM agents can fuzz and do modeling and static analysis themselves.
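
    As a concrete illustration of that last point, the "agents can fuzz themselves" step can be a single instruction. A hypothetical sketch (the target file and prompt wording are mine, not from the thread):

        # Sketch: have the agent build and run its own fuzzer for a surfaced hypothesis.
        claude -p "Hypothesis: the length handling in src/parse.c can overflow. Write a libFuzzer harness for that parser, build it with clang -fsanitize=fuzzer,address, run it briefly, and summarize any crashes in fuzz/findings.md"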

    • There are plenty of static analyzers that attempt to walk code paths for reachability. Some even track tainted input. And yes, these are often good starting points for developing exploits. I’ve done this myself.

      I’m curious about LLM agents, but the fact that they don’t “understand” is why I’m very skeptical of the hype. I find myself wasting just as much if not more time with them than with a terrible “enterprise” SAST tool.

The lesson here shouldn't be that Claude Code is useless, but that it's a powerful tool in the hands of the right people.

  • Unfortunately, also in the hands of the _wrong_ people.

    Maybe even more so, because who is going to wade through all those false positives? A bad actor is maybe more likely to do that.

    • > A bad actor is maybe more likely to do that.

      Do something about that, then, so that white-hat hackers are more likely than black-hat hackers to want to wade through it; incentives and all that jazz.


  • I'm growing allergic to the hype train and the slop. I've watched real-life talks where people sent some prompt to Claude Code and then proudly presented something mediocre they didn't make themselves to a whole audience, as if they'd invented hot water, and that just makes me weary.

    But at the same time, it has transformed my work from writing every bit of code myself, to me writing the cool and complex things while giving directions to a helper to sort out the boring grunt work, and it's amazingly capable at that. It _is_ a hugely powerful tool.

    But haters only see red, and lovers see everything through rose-tinted glasses.

    • Sounds like you might have some mixed feelings about becoming more effective with AI, but at the same time everyone else is too, so the praise you're expecting is diluted.

      I see it all the time now too. People have no frame of reference about what is hard or easy, so engineers feel under-appreciated: the guy who never coded gets lots of praise for doing something basic, while experienced people are able to spit out incredibly complex things. But to an outsider, both look like they took the same amount of work.

    • I am also torn, because the LLMs obviously have a lot of value, but the amount of misuse is overwhelming. People keep pasting so much slop into story descriptions that no one can keep up. There should be workplace guidelines on using AI responsibly.

    • > it has transformed my work […] to me writing the cool and complex things

      > it's amazingly capable at that.

      > It _is_ a hugely powerful tool

      Damn, that’s what you call being allergic to the hype train? This type of hypocritical, thinly-veiled praise is what is actually unbearable about AI discourse.


  • The same could be said about a roulette wheel set before a seasoned gambler.

    • No. The seasoned gambler cannot learn things that measurably increase their chances at roulette, whereas they definitely can with an LLM. And the LLM itself becomes smarter over time through hardware upgrades, software updates, and even memory, for those who enable that feature.

Everything changed in the past 6 months and coding LLMs went from being OK-ish to insanely good. People also got better at using them.

Also, a high false positive rate isn't that bad when a false negative costs a lot (an exploit in the Linux kernel is a very expensive mistake). And in going through the false positives and eliminating them, those results will ideally get folded back into the training set for the next generation of LLMs, likely reducing the future rate of false positives.

  • > Everything changed in the past 6 months and coding LLMs went from being OK-ish to insanely good. People also got better at using them.

    I hear this literally every 6 months :)

    • It hasn't been true forever, but it has been true over the last 18 months or so.

This is not how first-party vulnerability research with LLMs goes; they are incredibly valuable versus all prior tooling at triaging and producing only high-quality bugs, because they can be instructed to produce a PoC and prove that the bug is reachable. It’s traditional research methods (fuzzing, static analysis, etc.) that are more prone to false positive overload.

The reason open submission channels (PRs, bug bounty, etc.) are having issues with AI slop spam is that LLMs are also good at spamming, not that they are bad at programming or, especially, at vulnerability research. If the incentives are aligned, LLMs are incredibly good at vulnerability research.

Okay, so anti-AI people are just making shit up now. Got it.


Couldn't you just make it write a PoC?

What is with the negativity against AI in the YC community? Can anyone put a finger on why this anti take is so prominent? We're living through the most revolutionary moment of software since its inception, and the main thing that gets consistently upvoted is negativity, FUD, "it doesn't work in this case", or "it's all slop".

  • > Can anyone put a finger on why this anti take is so prominent?

    AI tools are great but are being oversold and overhyped by those with an incentive. So there is a continuous drumbeat of "AI will do all the code for you!", "Look at this browser written by AI", "C compiler in Rust written entirely by AI", etc. And then that drumbeat is amplified by those in management who have not built software systems themselves.

    What happened to the AI-generated "C compiler in Rust"? Or the browser written by AI? They remain a steaming pile of almost-working code. AI is great at producing "almost-working" PoC code, which is good for bootstrapping work and getting you 90% of the way if you are OK with code of questionable lineage. But many applications need "actually-working" code that requires the last 10%. So some in this forum, who have been in the trenches building large "actually-working" software systems and who also use AI tools daily and know their limitations, are injecting some realism into the debate.

  • I think the anti-AI stance has been reversing on HN as tooling improves and people try it. It’s only been a little over a year since Claude Code was released, and 3 or 4 months since the models got really capable. People need time to adjust, even if I would expect devs to be more up-to-date than most.

    People’s willingness to argue about technology they’ve barely used is always bewildering to me though.