Comment by jm4
3 days ago
I don't know why anyone would bother with Grok when there are other good models from companies that don't have the same baggage as xAI. So what if they release a model that beats older models in a benchmark? It will only be the top model until someone else releases another one next week. Personally, I like the Anthropic models for daily use. Even Google, with their baggage and lack of privacy, is a far cry from xAI and offers similar performance.
I like Grok because I don't hit the obvious ML-fairness / political-correctness safeguards that other models do.
So I understand the intent in implementing those, but they also reduce perceived trust and utility. It's a tradeoff.
Let's say I'm using Gemini. I can tell by the latency or the redraw that I asked an "inappropriate" query.
They do implement censorship and safeguards, just in the opposite direction. Musk previously bragged about going through the data and "fixing" the biases. Which... just introduces bias when companies like xAI do it. You can do that, and researchers sometimes do, but obviously partisan actors won't actually be removing any bias, but rather introducing their own.
Sort of. There are biases introduced during training/post-training, and there are the additional runtime/inference safeguards.
I’m referring more to the runtime safeguards, but also the post-training biases.
Yes, we are talking about degree, but the degree matters.
Some people think it’s a feature that when you prompt a computer system to do something, it does that thing, rather than censoring the result or giving you a lecture.
Perhaps you feel that other people shouldn’t be trusted with that much freedom, but as a user, why would you want to shackle yourself to a censored language model?
That’s what the Anthropic models do for me. I suppose I could be biased because I’ve never had a need for a model that spews racist, bigoted or sexist responses. The stuff @grok recently posted about Linda Yaccarino is a good example of why I don’t use it. But you do you.
You probably know better, and I probably should know better than to bother engaging, but...
Why would you conflate giving a computer an objective command with what is essentially someone else giving you access to query a very large database of "information" that was already curated by human beings?
Look. I don't know Elon Musk, but his rhetoric and his behavior over the last several years have made it very clear to me that he has opinions about things and is willing to use his resources to push those opinions. At the end of the day, I simply don't trust him NOT to intentionally bias *any* tool or platform he has influence over.
Would you still see it as "censoring" an LLM if, instead of front-loading some context/prompt info, they just chose to exclude certain information they didn't like from the training data? Because Mr. Musk has said, publicly, that he thinks Grok has been trained on too much "mainstream media" and that's why it sometimes provides answers on Twitter that he doesn't like, and that he was "working on it." If Mr. Musk goes in and messes around with the default prompts and/or training data to get answers that align with his opinions, is that not censorship? Or is it only censorship when the prompt is changed to not repeat racist and antisemitic rhetoric?