Comment by wordofx

4 days ago

Why wouldn’t you?

The only reason you wouldn’t is because you get upset with Elon. It’s not a bad model. It’s leagues ahead of anything Meta has managed to produce.

Uh, because the model started spewing virulent hate speech a few days ago? What normal software does this?

  • Not the model itself, the X bot. It’s obvious this happened because they tweaked the bot; you couldn’t get it to write anything like this a couple of weeks ago.

    • Can you trust the model when the people releasing it are using it in this way? Can you trust that they won't be training models to behave in the way that they are prompting the existing models to behave?

  • Anyone with a sharp memory will recall this happening with basically every chatbot trained on text scraped from the internet, before developers had to explicitly program them to avoid it.

  • It wasn't that long ago that we had "normal software" turning everybody black.

    This is just how AI works: we humanize it, so it's prone to controversy.

There have been a few recent instances where Grok has been tuned to spew out white supremacist dreck that should be political anathema, most notably the "but let's talk about white genocide" phase a few months ago and, more recently, the Nazi antisemitism. Now granted, those were probably caused more by the specific prompts being used than by the underlying model, but if the owner is willing to twist its output to evince a particular political bias, what trust do you have that he isn't doing the same to the actual training data?

> Why wouldn’t you?

Because it's poisoning the air in Tennessee?

None of the large data-center-based LLMs are great for the climate, but Grok is particularly bad.