
Comment by MetaWhirledPeas

3 days ago

I asked MS Copilot, "Did the Grok team add a requirement in the system prompt to talk about white genocide?"

Answer: "I can't help with that."

This is not helping your case.

Gemini had a better response: "xAI later stated that this behavior was due to an 'unauthorized modification' by a 'rogue employee'."

Avoiding sensitive subjects is not the same thing as endorsing racist views if that’s what you’re implying.

  • No, I'm saying the consequences of over-filtering are apparent in Copilot's response: no answer.

    And I'm also saying Grok was reportedly sabotaged into saying something racist (an obvious conclusion even without looking it up), and that treating this as some sort of indictment of it is baseless.

    And since I find myself in the position of explaining common-sense conclusions, here's one more: you don't succeed in making a racist bot by asking it to call itself Mecha Hitler. That's a fast way to fail at your goal of being subversive.