Comment by pixel_popping

17 hours ago

Yet some people will still say that "AI" isn't ready to replace (or strongly assist) our workflows. Sure: some of the best human devs left a vulnerability this serious in place for 9 years (and it is extremely serious; many container-as-a-service platforms are vulnerable), and an agent found it in 1 hour. Maybe it's time to wake up and accept that it's UNSAFE to NOT use AI for security review as well?

A human security researcher found the core insight, and an agent then searched for the places to apply it. I don’t think “an agent found it in one hour” is a fair summary of what happened.

  • "The starting insight — that splice() hands page-cache pages into the crypto subsystem and that scatterlist page provenance might be an under-explored bug class — came from human research by Taeyang Lee at Xint. From there, Xint Code scaled the audit across the entire crypto/ subsystem in roughly an hour. Copy Fail was the highest-severity finding in the run."

    So, if anything, this might argue against there being huge quantities of high-severity bugs left in this part of the Linux kernel, at least ones findable by "Xint Code"-class scanning systems. (If the quoted bug class sounds abstract, there's a rough sketch of it at the end of this thread.)

  • I was a bit rough, agreed, but the overall point still stands. I want to emphasize that I've also run hundreds of agent loops recently (a combination of opus-4.6/gpt-5.4/gemini-3.1-pro-preview) against a Rust codebase we maintain and had deemed secure after many audits, including a paid external audit by a third party, and the loops surfaced 2 serious issues there as well. That makes me genuinely scared of releasing anything without deep AI verification nowadays.

    Does anybody else have the same feeling?
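
To make the quoted bug class concrete, here is a minimal, hypothetical C sketch of "scatterlist page provenance" gone wrong, assuming an in-place skcipher operation on a page that arrived via splice(). It is not the actual kernel code or the real "Copy Fail" flaw: encrypt_spliced_page() and its surrounding logic are invented for illustration, though the scatterlist and crypto calls are the real kernel APIs.

```c
/*
 * Hypothetical sketch of the bug class described above -- NOT the actual
 * kernel code or the real vulnerability. sg_set_page(), the skcipher
 * request calls, and crypto_wait_req() are real kernel APIs;
 * encrypt_spliced_page() is invented purely to illustrate the pattern.
 */
#include <linux/crypto.h>
#include <linux/scatterlist.h>
#include <crypto/skcipher.h>

/* Encrypt one page in place, e.g. a page that arrived on a pipe via splice(). */
static int encrypt_spliced_page(struct crypto_skcipher *tfm,
                                struct page *page, unsigned int len, u8 *iv)
{
        struct skcipher_request *req;
        struct scatterlist sg;
        DECLARE_CRYPTO_WAIT(wait);
        int err;

        /*
         * BUG CLASS: nothing checks where `page` came from. After a splice()
         * from a file, it can be a shared page-cache page rather than a
         * private copy, and sg_set_page() records it with no provenance
         * check at all...
         */
        sg_init_table(&sg, 1);
        sg_set_page(&sg, page, len, 0);

        req = skcipher_request_alloc(tfm, GFP_KERNEL);
        if (!req)
                return -ENOMEM;
        skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
                                      crypto_req_done, &wait);

        /*
         * ...and because src == dst here, the cipher writes its output back
         * into that same page. If it is a page-cache page, this corrupts
         * file data seen by every other user of the file. The fix for this
         * class of bug is to detect foreign pages and copy them into a
         * private bounce page before handing them to the crypto layer.
         */
        skcipher_request_set_crypt(req, &sg, &sg, len, iv);
        err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

        skcipher_request_free(req);
        return err;
}
```

The thing to notice is that sg_set_page() wraps any struct page without question, so every caller is individually responsible for knowing whether the page is private and safe to write to. That kind of scattered, implicit invariant is easy for humans to overlook for years and well suited to an exhaustive automated audit.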