
Comment by moolcool

19 hours ago

> This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.

The company made and released a tool with seemingly no guard-rails, which was used en masse to generate deepfakes and child pornography.

I'm of two minds about this.

On the one hand, it seems "obvious" that Grok should somehow be legally required to have guardrails stopping it from producing kiddie porn.

On the other hand, it also seems "obvious" that laws forcing 3D printers to detect and block attempts to print firearms are patently bullshit.

The thing is, I'm not sure how I can reconcile those two seemingly-obvious statements in a principled manner.

  • It is very different. It is YOUR 3D printer; no one else is involved. You might print a knife and kill somebody with it, and you go to jail — no third party is involved.

    If you use a service like Grok, then you are using somebody else's computer and infrastructure. X is the owner of the computer that produced the CP, so of course X is at least partly liable for producing it.

  • Grok is publishing the CSAM photos for everyone to see. It is literally used as a tool for harassment and abuse.

Internet routers, network cards, computers, operating systems, and all kinds of application software have no guardrails and are used for all sorts of nefarious things. Why aren't those companies raided?

  • This is like comparing the danger of a machine gun to that of a block of lead.

    • Maybe. But we have a definition of a machine gun codified in law, which clearly separates it from a block of lead. What codified legal definitions are used here to separate Photoshop from Grok in the context of those deepfakes and CSAM?

      Without such clear legal definitions, going after Grok while not going after Photoshop is just an act of political pressure.

> The company made and released a tool with seemingly no guard-rails, which was used en masse to generate deepfakes and child pornography.

Do you have any evidence for that? As far as I can tell, this is false. The only thing I saw was Grok editing photos of adults so that they appeared to be wearing bikinis, which is far less bad.

  • Did you miss the numerous news reports? Example: https://www.theguardian.com/technology/2026/jan/08/ai-chatbo...

    For obvious reasons, decent people are not about to go out and try to generate child sexual abuse material to prove a point to you, if that’s what you’re asking for.

    • First of all, the Guardian is known to be heavily biased against Musk; they always try hard to make everything about him sound as negative as possible. Second, last time I tried, Grok even refused to create pictures of naked adults. I just tried again, and this is still the case:

      https://x.com/i/grok/share/1cd2a181583f473f811c0d58996232ab

      The claim that they released a tool with "seemingly no guardrails" is therefore clearly false. I think what has instead happened here is that some people found a hack to circumvent some of those guardrails via something like a jailbreak.
