Comment by anabis
1 day ago
Not the parent poster, but besides copying the prompt from the YouTube video, you can make it cheaper by selecting representative starting files by path or by LLM embedding distance.
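To illustrate the "select representative files by embedding distance" idea, here is a minimal sketch. It uses a toy character-trigram embedding as a stand-in for a real LLM embedding API, and greedy farthest-point selection to pick a diverse subset of paths; all names and the selection strategy are my own assumptions, not anything from the linked write-up.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy stand-in for an LLM embedding: character-trigram counts.
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def distance(a: Counter, b: Counter) -> float:
    # Cosine distance between the sparse trigram vectors.
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return 1.0 - dot / (na * nb) if na and nb else 1.0

def pick_representatives(paths: list[str], k: int) -> list[str]:
    # Greedy farthest-point selection: start with the first path, then
    # repeatedly add the path farthest from everything already chosen.
    embs = {p: embed(p) for p in paths}
    chosen = [paths[0]]
    while len(chosen) < min(k, len(paths)):
        best = max(
            (p for p in paths if p not in chosen),
            key=lambda p: min(distance(embs[p], embs[c]) for c in chosen),
        )
        chosen.append(best)
    return chosen

files = [
    "drivers/net/eth.c",
    "drivers/net/wifi.c",
    "fs/ext4/inode.c",
    "kernel/sched/core.c",
]
print(pick_representatives(files, 2))
```

With real content embeddings instead of path trigrams you'd get semantic diversity (e.g. one parser, one allocator, one driver) rather than path diversity, which is what makes the agent's starting context cheap but broad.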
Annotation-based data-flow checking exists, and getting AI agents to use it shouldn't be too tedious; it could find bugs that are missed by just handing the model files. The results from the data-flow checks can then be fed back to the AI agent to verify.
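A tiny sketch of what annotation-based data-flow checking means here: functions are tagged as taint sources or sinks, and a check refuses to let a tainted value reach a sink unless it was sanitized first. Every name in this snippet (`Tainted`, `tainted_source`, `sanitize`, `safe_sink`) is invented for illustration; real tools do this statically over whole codebases rather than at runtime.

```python
class Tainted(str):
    """A string whose contents came from an untrusted source."""

def tainted_source() -> Tainted:
    # Pretend this reads attacker-controlled input.
    return Tainted("'; DROP TABLE users; --")

def sanitize(value: str) -> str:
    # Pretend this escapes the value; returning a plain str clears the taint.
    return str(value.replace("'", "''"))

def safe_sink(value: str) -> None:
    # The annotation-derived rule: sinks must never see Tainted values.
    if isinstance(value, Tainted):
        raise TypeError("data-flow violation: tainted value reached a sink")
    print(f"executing query with {value!r}")

data = tainted_source()
safe_sink(sanitize(data))   # fine: sanitized before the sink
try:
    safe_sink(data)         # flagged: raw tainted value hits the sink
except TypeError as e:
    print(e)
```

The reported violation (source, path, sink) is exactly the kind of structured finding you could hand to an agent to confirm or refute, instead of asking it to rediscover the flow from raw files.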
As a curious passerby, what does such a prompt look like? Is it very long? Is it technical with code, or written in natural English, etc.?
Previous discussion: https://mtlynch.io/claude-code-found-linux-vulnerability/
That's neat; maybe this is analogous to those Olympiad LLM experiments. I'm now curious how long such a simple query takes to run. I've never used Claude Code; are there versions that run for a longer time to get deeper responses?