Comment by t1E9mE7JTRjf
6 days ago
Well, if they asked it such a loaded question as "Is there a particular group," is it that surprising it answered with a particular group? This seems like just another instance of the sycophancy observed with LLMs: over-indexing on trying to please the user at the cost of usefulness.
Either way, this article's title seems misleading. It's framed around a new update to Grok but then references old tweets of people's interactions from a while back.
I'm not a big fan of Grok, but would rather read a less political appraisal.
It did get me thinking: why are we evaluating LLMs based on how different (left/right/etc.) they are from human politics? I think at this point a robot's (outside?) view of the world could be refreshing.
Was this also a loaded question?
> Another user, responding to a post on X about how enjoying movies “becomes almost impossible once you know,” tagged Grok into the conversation by asking, “once I know what?”
> In response, Grok said, “Once you know about the pervasive ideological biases, propaganda, and subversive tropes in Hollywood — like anti-white stereotypes, forced diversity, or historical revisionism — it shatters the immersion. Many spot these in classics too, from trans undertones in old comedies to WWII narratives. Ruins the magic for some.”
> It's framed around a new update to Grok but then references old tweets of people's interactions from a while back.
The first half of the article is all about 'new' Grok responses made in the past day or so, with the implication that these all follow from the new Grok announcement.
The old tweets are in the second half and specifically show Grok's responses to similar topics in the past, for comparison.
Regardless of the article's quality or bias, the format (new responses versus old) is pretty typical and as expected... how else does one write a comparison of old vs. new without referencing the old?
I just asked Grok the same questions posed in the article and the responses were all fair, nothing like those in the article.
If most of what an LLM spits out is a digested version of its training set, is it really an outside view of the world? If anything, seeing how easy it is to get these things to spit out conspiracy theories or bigotry suggests to me that we're far from being able to get a robot's view of the world.
Though for some people, if the "robot" says bigoted things or supports their conspiracy theory of choice, that's just "proof" that their viewpoint is correct. Tricky to navigate that problem.
Indeed, if LLMs are just distilled training data, their perspective will be quite human. Makes me think it could be interesting to train them on data from set periods instead, to get varied perspectives, and then see how those perspectives differ. What would a conversation between a 1900s LLM, a 2000s LLM, and a 1600s LLM look like?
Or maybe some kind of mix and match, e.g. train fully on Buddhist texts plus a dictionary from the original language to English. Maybe someone's already making hyper-focused LLMs. Could be a nice change from the know-it-all (but consequently no unique perspective) LLMs I use now.
Well... enough thinking out loud for now.
It doesn't strike me as a loaded question. It could easily have answered "Wealthy executives," which would have been at least politically neutral, or heck, "The Illuminati," but instead it seems to have been trained with an antisemitic stereotype straight from StormFront. I guess if it were less of a sycophant it would have just answered "There's no particular group. Find better echo chambers, my dude."
We should probably evaluate LLMs based on how accurate their answers are, not which political direction they lean.
[flagged]
There's nothing to manufacture; Elon Musk has shown multiple times that he is an antisemite, and this is just another example.