Comment by intheitmines

3 days ago

Anyone using Claude for processing sensitive information should be wondering how often it ends up in front of a human's eyes as a false positive.

Anyone using non-self hosted AI for the processing of sensitive information should be let go. It's pretty much intentional disclosure at this point.

  • Worst local (Australia) example of that

      Following a public statement by Hansford about his use of Microsoft's AI chatbot Copilot, Crikey obtained 50 documents containing his prompts...
    
      FOI logs reveal Australia's national security chief, Hamish Hansford, used the AI chatbot Copilot to write speeches and messages to his team. 
    

    (subscription required for full text): https://www.crikey.com.au/2025/11/12/australia-national-secu...

    It matters as he's the most senior Australian national security bureaucrat across Five Eyes documents (AU / EU / US) and has been doing things that make the actual cyber security talent's eyes bleed.

  • Years ago people routinely uploaded all kinds of sensitive corporate and government docs to VirusTotal to scan for malware. Paying customers then got access to those files for research. The opportunities for insider trading were, maybe still are, immense. Data from AI companies won't be as easy to get at, but is comparable in substance I'm sure.

How is your comment related to this article?

  • It looks like Anthropic has great visibility into what hackers do. Why wouldn't it also see what legitimate users do?