Comment by mrguyorama
3 days ago
>This is what less filtered looks like
It's so "less filtered" that they had to add a requirement in the system prompt to talk about white genocide.
This idea that "less filtered" LLMs will be "naturally" very racist is something that a lot of racists really, really want to be true, because they want to believe their racist views are backed by data.
They are not.
I asked MS Copilot, "Did the Grok team add a requirement in the system prompt to talk about white genocide?"
Answer: "I can't help with that."
This is not helping your case.
Gemini had a better response: "xAI later stated that this behavior was due to an 'unauthorized modification' by a 'rogue employee'."
If you're asking a coding LLM about facts, I don't really think you're capable of evaluating the case at all.
If you wish to do better, please enlighten us with facts and sources.
Avoiding sensitive subjects is not the same thing as endorsing racist views, if that's what you're implying.
No, I'm saying the consequences of over-filtering are apparent in Copilot's response: no answer.
And I'm also saying Grok was reportedly sabotaged into saying something racist (a blatantly obvious conclusion even without looking it up), and that treating this as some sort of indictment of it is baseless.
And since I find myself in the position of explaining common-sense conclusions, here's one more: you don't succeed in making a racist bot by asking it to call itself MechaHitler. That is a fast way to fail at your goal of being subversive.