
Comment by ragnese

3 days ago

You probably know better, and I probably should know better than to bother engaging, but...

Why would you conflate giving a computer an objective command with what is essentially someone else giving you access to query a very large database of "information" that was already curated by human beings?

Look. I don't know Elon Musk, but his rhetoric and his behavior over the last several years have made it very clear to me that he has opinions about things and is willing to use his resources to push those opinions. At the end of the day, I simply don't trust him NOT to intentionally bias *any* tool or platform he has influence over.

Would you still see it as "censoring" an LLM if, instead of front-loading some context/prompt info, they just chose to exclude certain information they didn't like from the training data? Because Mr. Musk has said, publicly, that he thinks Grok has been trained on too much "mainstream media," that that's why it sometimes provides answers on Twitter that he doesn't like, and that he was "working on it." If Mr. Musk goes in and messes around with the default prompts and/or training data to get answers that align with his opinions, is that not censorship? Or is it only censorship when the prompt is changed to not repeat racist and antisemitic rhetoric?

The handwringing over an LLM creator shaping a narrative is somewhat absurd compared to the alternatives we had prior to Grok: LLMs that literally erased white people from history to align with their creators' far-left progressive politics.

The difference here is that many techies are more comfortable with LLMs censoring, or even rewriting history, when the results align with their own politics and prejudices.

Musk attempting to provide a more balanced view is not something I consider censorship. If he were restricting the LLM from including mainstream media viewpoints, I would consider that censorship, but I haven't seen evidence of that.