
Comment by user34283

11 hours ago

That is one of the reasons why I think X's Grok, while perhaps not state of the art, is an important option to have.

Compared with OpenAI, Anthropic, and Google, it is the only provider I trust not to erroneously flag harmless content.

It is also the only provider out of those that permits use for legal adult content.

There have been controversies over it, resulting in some people, often of a certain political orientation, calling for a ban or censorship.

What comes to mind is the "Mecha Hitler" incident, where an unwise adjustment to the system prompt resulted in misalignment. The worst of it was patched within hours, and better alignment was achieved within a few days. Harm done? Negligible, in my opinion.

Recently there's been another scandal about nonconsensual explicit images, supposedly even involving minors, but the true extent of the issue, the safety measures in place, and the reaction to reports are unclear. Maybe there, actual harm has occurred.

However, placing blame on the tool for illegal acts that anyone with a half-decent GPU could have done offline more easily does not seem particularly reasonable to me, especially if safety measures were in place and additional steps have been taken to close workarounds.

I don't trust big tech, who have shown time and time again that they prioritize only their bottom line. They will always permaban your account at the slightest automated indication of risk, and they will not hire adequate support staff.

We have seen that for years with the Google Play Store: you are coerced into paying 30% of your revenue, yet are treated like a free account with no real support. They are shameless.

It's also a machine you can pay to generate child porn for you, owned by a guy who thinks this is hilarious and won't turn it off.

  • As much as I dislike Musk and friends, they're dumb/evil/incompetent enough that you don't have to lie to still get at them.

  • Incorrect on all claims.

    They tightened safety measures to prevent editing of images of real people into revealing clothing. It is factually incorrect that you "can pay to generate CP".

    Musk has not described CSAM as "hilarious". In fact, he stated that he was not aware of any naked underage images being generated by Grok, and that xAI would fix the bug immediately if such content were discovered.

    Earlier statements by xAI also emphasized a zero-tolerance policy: removing content, taking action against accounts, reporting to law enforcement, and cooperating with authorities.

    I suspect you just post these slanderous claims anyway, despite knowing that they are incorrect.