Comment by cubefox

6 hours ago

> The company made and released a tool with seemingly no guard-rails, which was used en masse to generate deepfakes and child pornography.

Do you have any evidence for that? As far as I can tell, this is false. The only thing I saw was Grok editing photos of adults so they appeared to be wearing bikinis, which is far less bad.

That's why this is an investigation looking for evidence and not a conviction.

This is how it works, at least in civil law countries. If the prosecutor has reasonable suspicion that a crime is taking place, they send the so-called "judiciary police" to gather evidence. If they find none (or it's inconclusive, etc.), the charges are dropped; otherwise they ask the court to go to trial.

On some occasions I take on judiciary police duties for animal welfare. Just last week I participated in a raid. We were not there to arrest anyone, just to gather evidence so the prosecutor could decide whether to press charges and go to trial.

  • Note that the raid itself is a punishment. It's normal for them to seize all electronic devices. How is X France supposed to do any business without any electronic devices? And even when charges are dropped, the devices are never returned.

Grok does seem to have tons of useless guardrails. Reportedly you can't prompt it directly, but also reportedly it tends to go for almost nonsensically off-guardrail interpretations of prompts.

Did you miss the numerous news reports? Example: https://www.theguardian.com/technology/2026/jan/08/ai-chatbo...

For obvious reasons, decent people are not about to go out and try to generate child sexual abuse material to prove a point to you, if that’s what you’re asking for.