Comment by intheitmines
3 days ago
Anyone using Claude for processing sensitive information should be wondering how often it ends up in front of a human's eyes as a false positive
Anyone using non-self-hosted AI to process sensitive information should be let go. It's pretty much intentional disclosure at this point.
Worst local (Australia) example of that
(subscription required for full text): https://www.crikey.com.au/2025/11/12/australia-national-secu...
It matters as he's the most senior Australian national security bureaucrat across Five Eyes documents (AU / EU / US) and has been doing things that make the actual cyber security talent's eyes bleed.
Holy crap, that is such a bad look. That guy should immediately step down, and if he doesn't he should be let go.
1 reply →
Years ago people routinely uploaded all kinds of sensitive corporate and government docs to VirusTotal to scan for malware. Paying customers then got access to those files for research. The opportunities for insider trading were, and maybe still are, immense. Data from AI companies won't be as easy to get at, but I'm sure it's comparable in substance.
https://www.theregister.com/2023/07/21/virustotal_data_expos...
That's absolutely insane. Aren't they owned by Google?
1 reply →
How is your comment related to this article?
It looks like Anthropic has great visibility into what hackers do. Why would it also see what legitimate users do?