Comment by josephg

1 day ago

Do we know that? I'd love to see some of the ways security researchers are using LLMs. We have no idea whether Claude was fuzzing here, or just reading the files and spotting bugs directly in the source code.

A few weeks ago someone described their method for finding bugs in Linux. They prompted Claude with "Find the security bug in this program. Hint: It is probably in file X." and repeated that for every file in the repo.
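That per-file sweep can be sketched roughly like this. It's a minimal illustration, not the researcher's actual code: `ask_model` is a hypothetical stand-in for whatever LLM API they used, and the `*.c` filter is an assumption.

```python
import tempfile
from pathlib import Path

def build_prompt(filename: str) -> str:
    # The per-file prompt quoted in the comment above.
    return (
        "Find the security bug in this program. "
        f"Hint: It is probably in file {filename}."
    )

def sweep_repo(repo_root: str, ask_model) -> dict:
    """Run the per-file prompt over every C source file in the repo.

    `ask_model(prompt, source)` is a hypothetical callable standing in
    for the real LLM call; it returns the model's reply as a string.
    Non-empty replies are collected as findings.
    """
    findings = {}
    for path in sorted(Path(repo_root).rglob("*.c")):
        reply = ask_model(build_prompt(path.name), path.read_text())
        if reply:
            findings[str(path)] = reply
    return findings

if __name__ == "__main__":
    # Demo with a stub "model" that flags any file calling strcpy.
    with tempfile.TemporaryDirectory() as repo:
        Path(repo, "parse.c").write_text("strcpy(buf, input);")
        Path(repo, "util.c").write_text("int add(int a, int b) { return a + b; }")
        stub = lambda prompt, src: "possible overflow" if "strcpy" in src else ""
        print(sweep_repo(repo, stub))
```

The brute-force part is the point: nothing here is clever except the model call, which matches the comment's description of simply iterating the same hint over every file.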

> Since then, this weakness has been missed by every fuzzer and human who has reviewed the code, and points to the qualitative difference that advanced language models provide. [^1]

> At no point in time does the program take some easy-to-identify action that should be prohibited, and so tools like fuzzers can’t easily identify such weaknesses. [^2]

[^1]: https://red.anthropic.com/2026/mythos-preview/#:~:text=Since...

[^2]: https://red.anthropic.com/2026/mythos-preview/#:~:text=At%20...