Comment by Terretta
1 day ago
Author (who also replied to you) might have been "doing it wrong," but no wonder: Anthropic only made Claude Code smarter about this 5 days ago, and there's too much to keep up with:
https://github.com/anthropics/claude-code-security-review
The new command is something like /security-review, and it should be in the loop before any PR or commit, especially for this type of web-facing app. Claude Code makes that easy.
This prompt will make Claude's code generally beat not just intern code, but probably most devs' code, for security-mindedness:
https://raw.githubusercontent.com/anthropics/claude-code-sec...
The false positives judge shown here is particularly well done.
// Beyond that, run tools such as Kusari or Snyk. It's unlikely most shops have security engineers as qualified as these focused tools are becoming.
How can an LLM determine a confidence score for its findings?
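One common pattern (not necessarily what Anthropic's tool does internally) is LLM-as-judge: prompt the model to emit an explicit verdict line with a numeric score, parse that score out of the reply, and drop findings below a threshold. A minimal sketch, where `JUDGE_PROMPT`, `parse_confidence`, and the stubbed `fake_judge` are all hypothetical names for illustration:

```python
import re

# Hypothetical judge prompt: the model is instructed to self-report
# a confidence that the finding is a true positive.
JUDGE_PROMPT = """You are reviewing a static-analysis finding for exploitability.
Finding: {finding}
Reply with a line 'CONFIDENCE: <0.0-1.0>' where higher means more
likely a real, exploitable issue."""

def parse_confidence(judge_reply: str) -> float:
    """Extract the self-reported confidence from the judge's reply text."""
    m = re.search(r"CONFIDENCE:\s*([01](?:\.\d+)?)", judge_reply)
    return float(m.group(1)) if m else 0.0  # unparseable reply -> treat as 0

def filter_findings(findings, judge, threshold=0.7):
    """Keep only findings the judge scores at or above the threshold."""
    return [f for f in findings if parse_confidence(judge(f)) >= threshold]

# Stub standing in for a real model call, just to show the flow:
def fake_judge(finding: str) -> str:
    return "CONFIDENCE: 0.9" if "sql" in finding else "CONFIDENCE: 0.2"

print(filter_findings(["sql injection in /login", "unused variable"], fake_judge))
# → ['sql injection in /login']
```

The score isn't calibrated probability, of course; it's a learned proxy. But in practice forcing the model to commit to a number, then thresholding, is a cheap way to cut false positives.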