Comment by cubefox
4 hours ago
> The company made and released a tool with seemingly no guard-rails, which was used en masse to generate deepfakes and child pornography.
Do you have any evidence for that? As far as I can tell, this is false. The only thing I saw was Grok changing photos of adults into them wearing bikinis, which is far less bad.
That's why this is an investigation looking for evidence and not a conviction.
This is how it works, at least in civil law countries. If the prosecutor has reasonable suspicion that a crime is taking place, they send the so-called "judiciary police" to gather evidence. If they find none (or the evidence is inconclusive, etc.), the charges are dropped; otherwise they ask the court to go to trial.
On some occasions I take on judiciary police duties for animal welfare. Just last week I participated in a raid. We were not there to arrest anyone, just to gather evidence so the prosecutor could decide whether to press charges and go to trial.
Did you miss the numerous news reports? Example: https://www.theguardian.com/technology/2026/jan/08/ai-chatbo...
For obvious reasons, decent people are not about to go out and try to generate child sexual abuse material to prove a point to you, if that’s what you’re asking for.
First of all, the Guardian is known to be heavily biased against Musk. They always try hard to make everything about him sound as negative as possible. Second, last time I tried, Grok even refused to create pictures of naked adults. I just tried again and this is still the case:
https://x.com/i/grok/share/1cd2a181583f473f811c0d58996232ab
The claim that they released a tool with "seemingly no guardrails" is therefore clearly false. I think what instead happened here is that some people found a hack to circumvent some of those guardrails via something like a jailbreak.
For more evidence:
https://www.bbc.co.uk/news/articles/cvg1mzlryxeo
Also, X seems to disagree with you and admits that CSAM was being generated:
https://arstechnica.com/tech-policy/2026/01/x-blames-users-f...
Also, the reason you can’t make it generate those images is that they implemented safeguards after that article was written:
https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...
This is because of government pressure (see Ofcom link).
I’d say you’re making yourself look foolish but you seem happy to defend nonces so I’ll not waste my time.
> First of all, the Guardian is known to be heavily biased against Musk.
Says who? Musk?
boot taste good
That is only "known" to intellectually dishonest ideologues.