Comment by jacquesm
3 days ago
Anyone using non-self hosted AI for the processing of sensitive information should be let go. It's pretty much intentional disclosure at this point.
Worst local (Australia) example of that (subscription required for full text): https://www.crikey.com.au/2025/11/12/australia-national-secu...
It matters because he's the most senior Australian national security bureaucrat across Five Eyes documents (AU / EU / US), and he has been doing things that make the actual cyber security talent's eyes bleed.
Holy crap, that is such a bad look. That guy should immediately step down, and if he doesn't, he should be let go.
That wasn't my first thought. My first thought was: every senior executive everywhere is probably doing the same thing.
Years ago, people routinely uploaded all kinds of sensitive corporate and government docs to VirusTotal to scan for malware. Paying customers then got access to those files for research. The opportunities for insider trading were, and maybe still are, immense. Data from AI companies won't be as easy to get at, but I'm sure it's comparable in substance.
https://www.theregister.com/2023/07/21/virustotal_data_expos...
That's absolutely insane. Aren't they owned by Google?
They are now, although to be clear there was (is?) nothing nefarious going on; it was just people not understanding that public submissions are available to VirusTotal's paying users. These days VT has private scanning too, but the issue was always one-offs from random finance or investor relations teams.
It's come up here and there in security, too, e.g. in https://www.directdefense.com/harvesting-cb-response-data-le....