Comment by rando77
2 days ago
I've been thinking about using LLMs to help triage security vulnerabilities.
If it ran in an auditably unlogged environment (with output to the company limited to a single "escalate" signal), it might also encourage people to share vulnerabilities they'd otherwise worry about putting online.
Does that make sense from your experience?
[1] https://github.com/eb4890/echoresponse/blob/main/design.md
I definitely think it's a viable idea! Someone like HackerOne or Bugcrowd would be especially well positioned to build this, since they can look at historical reports, see which ones ended up being investigated or getting bounties, and use that data to validate or inform the LLM system.
The second-order effects of this, once reporters expect an LLM to be validating their reports, may get tricky. But ultimately, if it only passes along a "likely warrants investigation" signal and has very few false negatives, it sounds useful.
With trust and security, though, I still feel some human needs to be ultimately responsible for closing each bad report as "invalid", never relying purely on the LLM. But it sounds useful for elevating valid high-severity reports and assisting the human who is ultimately responsible.
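To make that split concrete, here's a minimal sketch of the flow I have in mind: the model's output is restricted to an escalate/no-signal verdict, and only a named human can mark a report invalid. Everything here is made up for illustration — a trivial keyword heuristic stands in for the LLM classifier, and the function and field names aren't from any real product.

```python
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    ESCALATE = "escalate"      # likely warrants investigation
    NO_SIGNAL = "no_signal"    # model stays silent; a human still reviews


@dataclass
class Report:
    report_id: str
    text: str


def llm_triage(report: Report) -> Verdict:
    """Hypothetical stand-in for the LLM classifier.

    Tuned for very few false negatives: when in doubt, escalate.
    A keyword heuristic plays the model's role in this sketch.
    """
    high_signal = ("rce", "sql injection", "auth bypass", "ssrf")
    if any(term in report.text.lower() for term in high_signal):
        return Verdict.ESCALATE
    return Verdict.NO_SIGNAL


def close_as_invalid(report: Report, human_reviewer: str) -> dict:
    """Closing a report as invalid always records a human decision;
    the model's verdict is advisory only and never appears here."""
    return {
        "report_id": report.report_id,
        "status": "invalid",
        "closed_by": human_reviewer,  # never the LLM
    }


reports = [
    Report("r1", "Unauthenticated RCE in the image upload endpoint"),
    Report("r2", "The logo looks blurry on mobile"),
]
for r in reports:
    print(r.report_id, llm_triage(r).value)
```

The point of the shape is that the model has no code path to a terminal "invalid" state — its only lever is raising a flag.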
It does feel like a hard product to build from scratch, but an easy one for existing bug bounty systems to add.