Comment by techblueberry
15 hours ago
I'm not saying I'm entirely against this, but just out of curiosity, what do they hope to find in a raid of the French offices, a folder labeled "Grok's CSAM Plan"?
> what do they hope to find in a raid of the French offices, a folder labeled "Grok's CSAM Plan"?
You would be _amazed_ at the things that people commit to email and similar.
Here's a Facebook one (leaked, not extracted by authorities): https://www.reuters.com/investigates/special-report/meta-ai-...
It was known that Grok was generating these images long before any action was taken. I imagine they’ll be looking for internal communications about what they were doing, or deciding not to do, during that time.
Maybe emails between the French office and the head office warning that they may be violating laws, and the head office's response?
Unlikely, if only because the statement doesn't mention CSAM. It does say:
"Among potential crimes it said it would investigate were complicity in possession or organised distribution of images of children of a pornographic nature, infringement of people's image rights with sexual deepfakes and fraudulent data extraction by an organised group."
I don't understand your point.
In a further comment you are using a US-focused organization to define an English-language acronym. How does this relate to a French investigation?
The US uses English - quite a lot, actually.
As for how it relates, well if the French do find that "Grok's CSAM Plan" file, they'll need to know what that acronym stands for. Right?
Item one in that list is CSAM.
You are mistaken. Item #1 is "images of children of a pornographic nature".
Whereas "CSAM isn’t pornography—it’s evidence of criminal exploitation of kids." https://rainn.org/get-informed/get-the-facts-about-sexual-vi...
What do they hope to find, specifically? Who knows, but maybe the prosecutors have a better awareness of specifics than us HN commenters who have not been involved in the investigation.
What may they find, hypothetically? Who knows, but maybe an internal email saying, for instance, 'Management says keep the nude photo functionality, just hide it behind a feature flag', or maybe 'Great idea to keep a backup of the images, but must cover our tracks', or perhaps 'Elon says no action on Grok nude images, we are officially unaware anything is happening.'
Or “regulators don't understand the technology; short of turning it off entirely, there's nothing we can do to prevent it entirely, and the costs involved in attempting to reduce it are much greater than the likely fine, especially given that we're likely to receive such a fine anyway.”
They could shut it off out of a sense of decency and respect. Wtf kind of defense is this?
> out of curiosity, what do they hope to find in a raid of the French offices, a folder labeled "Grok's CSAM Plan"?
You're not too far off.
There was a good article in the Washington Post yesterday about many, many people inside the company raising alarms about the content and its legal risk, but they were blown off by managers chasing engagement metrics. They even made up a whole new metric.
There were also prompts telling the AI to act angry or sexy, among other things, just to keep users addicted.
There was a WaPo article yesterday that talked about how xAI deliberately loosened Grok’s safety guardrails and relaxed restrictions on sexual content in an effort to make the chatbot more engaging and “sticky” for users. xAI employees had to sign new waivers in the summer and start working with harmful content in order to train and enable those features.
I assume the raid is hoping to find communications that establish that timeline, and maybe internal concerns that were ignored? Also internal metrics that might show they were aware of the problem. External analysts said Grok was generating a CSAM image every minute!
https://www.washingtonpost.com/technology/2026/02/02/elon-mu...
> External analysts said Grok was generating a CSAM image every minute!!
> https://www.washingtonpost.com/technology/2026/02/02/elon-mu...
That article has no mention of CSAM. As expected, since you can bet the Post has lawyers checking.
Moderation rules? Training data? Abuse metrics? Identities of users who generated or accessed CSAM?
Do you think that data is stored at the office? Where do you think it's stored, the janitor's closet?