Comment by rockemsockem
3 days ago
I will fondly remind folks that Grok isn't even the first LLM to become a Nazi on Twitter.
Remember Tay Tweets?
https://en.m.wikipedia.org/wiki/Tay_(chatbot)
Honestly, I don't think a bad release of an LLM that was rolled back is the condemnation you think it is.
I don’t think the third+ flavor of “bad release” this year, of the sort nobody else in this crowded space suffers from, is as innocuous as you think it is.
And Tay was a non-LLM user account released a full 6 years before ChatGPT; you might as well bring up random users’ Markov chains.
I posted the Wikipedia page; do you really think I don't know how long ago Tay was? I don't think the capabilities matter if we're just talking about chatbots being racist online.
Also, IDK what you mean by "third+ flavor"? I'm not familiar with other bad Grok releases, but I don't really use it; I just see its responses on Twitter. And do you not remember the Google image model that made the founding fathers different races by default?
To catch you up, this happened 2 months ago -
https://www.theguardian.com/technology/2025/may/14/elon-musk...
There’s a difference between a third-party Twitter bot and Grok. And it’s not a “bad release”; it’s been like this ever since it launched.
Funny how ChatGPT is vanilla and grok somehow has a new racist thing to say every other week.
This ChatGPT? https://futurism.com/chatgpt-encouraged-murder-sam-altman
Not to say there aren’t problems with ChatGPT, but it generally steers clear of controversial subjects unless coaxed into it.
Grok actively leans into racism and nazism.
> Funny how ChatGPT is vanilla and grok somehow has a new racist thing to say every other week
To be fair, 'exposing' ChatGPT, Claude, and Gemini as racist will get you a lot fewer clicks.
Musk claims Grok to be less filtered in general than other LLMs. This is what less filtered looks like. LLMs are not human; if you get one to say racist things it's probably because you were trying to make it say racist things. If you want this so-called problem solved by putting bowling bumpers on the bot, by all means go use ChatGPT.
> if you get one to say racist things it's probably because you were trying to make it say racist things.
When it started ranting about the Jews and "MechaHitler", it was unprompted, on unrelated matters. When it started ranting about "white genocide" in South Africa a while ago, it was also unprompted, on unrelated matters.
So no.
> This is what less filtered looks like
It's so "less filtered" that they had to add a requirement to the system prompt to talk about white genocide.
This idea that "less filtered" LLMs will be "naturally" very racist is something a lot of racists really, really want to be true, because they want to believe their racist views are backed by data.
They are not.
Nobody’s trying to get Grok to talk about MechaHitler. At that point you just know Musk said that out loud in a meeting and someone had to add it to Grok’s base prompt.
It absolutely has not been claiming that it's "MechaHitler" since it was released.
Try.
Right, it’s just been talking about white genocide and generating Nazi images instead.