
Comment by jcranmer

4 days ago

There have been a few recent instances where Grok has been tuned to spew out white supremacist dreck that should be political anathema--most notably the "but let's talk about white genocide" phase a few months ago and, more recently, the output of Nazi antisemitism. Now granted, those were probably caused more by the specific prompts being used than by the underlying model, but if the owner is willing to twist its output to evince a particular political bias, what trust do you have that he isn't doing the same to the actual training data?

xAI has over 1,000 employees. If he were polluting the model, we would know about it.

Why should these topics be outright banned?