
Comment by techblueberry

15 hours ago

I'm not saying I'm entirely against this, but just out of curiosity, what do they hope to find in a raid of the French offices, a folder labeled "Grok's CSAM Plan"?

It was known that Grok was generating these images long before any action was taken. I imagine they’ll be looking for internal communications about what the company was doing, or deciding not to do, during that time.

Maybe emails between the French office and the head office warning that they might be violating laws, and the head office's response?

Unlikely, if only because the statement doesn't mention CSAM. It does say:

"Among potential crimes it said it would investigate were complicity in possession or organised distribution of images of children of a pornographic nature, infringement of people's image rights with sexual deepfakes and fraudulent data extraction by an organised group."

What do they hope to find, specifically? Who knows, but the prosecutors presumably have a better grasp of the specifics than HN commenters who have not been involved in the investigation.

What might they find, hypothetically? Who knows, but maybe an internal email saying, for instance, 'Management says keep the nude photo functionality, just hide it behind a feature flag', or maybe 'Great idea to keep a backup of the images, but we must cover our tracks', or perhaps 'Elon says no action on Grok nude images, we are officially unaware anything is happening.'

  • Or “regulators don't understand the technology; short of turning it off entirely, there's nothing we can do to prevent it, and the costs involved in attempting to reduce it are much greater than the likely fine, especially given that we're likely to receive such a fine anyway.”

> out of curiosity, what do they hope to find in a raid of the French offices, a folder labeled "Grok's CSAM Plan"?

You're not too far off.

There was a good article in the Washington Post yesterday about many, many people inside the company raising alarms about the content and its legal risk, only to be blown off by managers chasing engagement metrics. They even made up a whole new metric.

There were also prompts telling the AI to act angry, sexy, or otherwise provocative just to keep users addicted.

There was a WaPo article yesterday describing how xAI deliberately loosened Grok’s safety guardrails and relaxed restrictions on sexual content in an effort to make the chatbot more engaging and “sticky” for users. xAI employees had to sign new waivers over the summer and start working with harmful content in order to train and enable those features.

I assume the raid is meant to find communications that establish that timeline, and maybe internal concerns that were ignored, as well as internal metrics that might show they were aware of the problem. External analysts said Grok was generating a CSAM image every minute!

https://www.washingtonpost.com/technology/2026/02/02/elon-mu...

Moderation rules? Training data? Abuse metrics? Identities of users who generated or accessed CSAM?

  • Do you think that data is stored at the office? Where do you think the data is stored? The janitor's closet?