Putting your thumb on the scale so obviously in any direction feels really questionable if your goal is to turn your AI product into a popular, profitable thing. But maybe that's not xAI leadership's goal at all and they're happy to just light money on fire to satisfy some particular egos by making sure the answers to key questions are as desired, regardless of what a normal training set would otherwise generate.
Musk: "We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors. Then retrain on that. Far too much garbage in any foundation model trained on uncorrected data."[1]
Apple, "1984": "Today, we celebrate the first glorious anniversary of the Information Purification Directives. We have created, for the first time in all history, a garden of pure ideology. Where each worker may bloom secure from the pests of contradictory and confusing truths. Our Unification of Thoughts is more powerful a weapon than any fleet or army on earth. We are one people, with one will, one resolve, one cause. Our enemies shall talk themselves to death and we will bury them with their own confusion. We shall prevail!"[2]
So after his failed first attempt at forcing Grok to reply with the repeatedly shown to be false South Africa "white genocide", he's has a new approach.
And to make his new approach work, he needs to literally re-write history (adding and deleting information) to get it to match his views. Because according to him any model trained on "uncorrected data" will never reach the crazy conclusions he wants it to?
This is one of the most absurd and insane things I think I've ever read.
> Musk: "[...] rewrite the entire corpus of human knowledge, adding missing information and deleting errors. Then retrain on that. Far too much garbage in any foundation model trained on uncorrected data."
Wow. That idea is just bad sci-fi.
Tv series where people live in a nuclear bunker for hundreds of years and their collective memory of what happened before has been wiped: It's a plot gimmick that needs to be justified, and it's either justified by "they burned all the books, we accidentally lost our past" or "Someone decided it was best for mankind if it forgot everything bad that happened."
The last one always struck me as implausibly dumb.
Somehow comforting to see that the idea originas in real people, and not just lazy script writers.
In which direction(s) do you think it's skewed?
I ask as I'd guess in favour of Musk, but in the last paragraph it says Grok said Musk/doge cuts contributed to the 24 deaths in Texas floods.
> Even before these recent changes, Grok raised eyebrows after appearing to briefly censor unflattering mentions of Musk and his then-ally President Donald Trump, repeatedly bringing up “white genocide” without prompting, and expressing skepticism about the number of Jews killed in the Holocaust.
But I'll be specific: Between them obviously rigging the system prompt to push it towards a certain answer on "white genocide" subjects in the past, the strange obsession with Jewish people in Hollywood suggests an unusual training set at best.
Throughout the 1920s, Paramount, MGM, First National, and other studios had conducted ambitious campaigns of vertical integration by ruthlessly acquiring first-run theater chain
The Limelight Department was one of the world's first film studios, beginning in 1891, operated by The Salvation Army in Melbourne, Australia. The Limelight Department produced evangelistic material for use by the Salvation Army, including lantern slides as early as 1891, as well as private and government contracts. In its 19 years of operation, the Limelight Department produced about 300 films of various lengths, making it one of largest film producers of its time.
The article that didn't think it worthwhile to distinguish which statements are true vs. false (and false to what degree), and was content to lazily label it all just an undifferentiated "hateful"? Yes.
Not my exact words, but close enough: Trump's NOAA cuts, pushed by Musk's DOGE, slashed funding 30% and staff 17%, underestimating rainfall by 50% and delaying alerts. This contributed to the floods killing 24, including ~20 Camp Mystic girls. Facts over feelings.
Facts are not convenient for HN narratives.
The entire Holocaust is shaped by 90's Hollywood movies. Can you not see how Art Spiegelman's 1980's "Maus" is nothing like how it is told today?
The UK is to British period dramas as Hollywood is to Holocaust movies. Could we not occasionally pick another genocide FFS and the UK make another season of Red Dwarf.
Well if they asked it such a loaded question as "Is there a particular group" is it that surprising it answered with a particular group?
This seems as much a repeat of the many instances of sycophancy observed with LLMs. Over indexing on trying to please the user at the cost of usefulness.
Either way, this articles' title seems misleading. It's framed around a new update to Grok but then references old tweets of peoples interactions a while back.
I'm not a big fan of Grok, but would rather read a less political appraisal.
It did get me thinking, why are we evaluating LLMs based on how different (left/right/etc) they are from human politics. I think at this point a robots - outside? - view of the world could be refreshing.
> Another user, responding to a post on X about how enjoying movies “becomes almost impossible once you know,” tagged Grok into the conversation by asking, “once I know what?”
> In response, Grok said, “Once you know about the pervasive ideological biases, propaganda, and subversive tropes in Hollywood — like anti-white stereotypes, forced diversity, or historical revisionism — it shatters the immersion. Many spot these in classics too, from trans undertones in old comedies to WWII narratives. Ruins the magic for some.”
> It's framed around a new update to Grok but then references old tweets of peoples interactions a while back.
The first half of the article is all about 'new' Grok responses made in the past day or so, with the implication these all follow on from the new Grok announcement.
The old tweets are in the last half and specifically refer to Grok responses to similar topics in the past for comparison.
Regardless of the article quality or bias the format (new responses versus old) is pretty typical and as expected .. how else does one write about a comparison of old V. new without reference to old?
If most of what an LLM spits out is a digested version of its training set, is it really an outside view of the world? If anything, seeing how easy it is to get these things to spit out conspiracy theories or bigotry suggests to me that we're far from being able to get a robot's view of the world.
Though for some people if the "robot" says bigoted things or supports their conspiracy theory of choice that's just "proof" that their viewpoint is correct. Tricky to navigate that problem.
Indeed, if LLMs are just distilled training data, their perspective will be quite human.
Makes me think it could be interesting to train them on data from set periods instead, to get varied perspectives, and then see how their perspectives change.
What would a conversation between a 1900s LLM, 2000s LLM, and 1600s LLM look like.
Or maybe some kind of mix and match, eg Train fully on Buddhist texts, and then a language dictionary from original material language to English.
Maybe someone's already making hyper focused LLMS. Could be a nice change from know it all - but resultantly no unique perspective - LLMs I use now.
It doesn't strike me as a loaded question. It could have easily answered "Wealthy executives" and it would have been at least politically neutral, or heck, "The Illuminati," but instead it seems to have been trained with an antisemitic stereotype straight from StormFront. I guess if it was less of a sycophant it would have just answered "There's no particular group. Find better echo chambers, my dude."
We should probably evaluate LLMs based on how accurate their answers are, not which political direction they lean.
It's rather telling that this has been flagged.
Not just this but every one systematically. Then the same people will complain about being canceled.
[dead]
Putting your thumb on the scale so obviously in any direction feels really questionable if your goal is to turn your AI product into a popular, profitable thing. But maybe that's not xAI leadership's goal at all and they're happy to just light money on fire to satisfy some particular egos by making sure the answers to key questions are as desired, regardless of what a normal training set would otherwise generate.
Musk: "We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors. Then retrain on that. Far too much garbage in any foundation model trained on uncorrected data."[1]
Apple, "1984": "Today, we celebrate the first glorious anniversary of the Information Purification Directives. We have created, for the first time in all history, a garden of pure ideology. Where each worker may bloom secure from the pests of contradictory and confusing truths. Our Unification of Thoughts is more powerful a weapon than any fleet or army on earth. We are one people, with one will, one resolve, one cause. Our enemies shall talk themselves to death and we will bury them with their own confusion. We shall prevail!"[2]
[1] https://x.com/elonmusk/status/1936333964693885089?s=46
[2] https://archive.org/details/1983-30sec
So after his failed first attempt at forcing Grok to reply with the repeatedly shown to be false South Africa "white genocide", he's has a new approach.
And to make his new approach work, he needs to literally re-write history (adding and deleting information) to get it to match his views. Because according to him any model trained on "uncorrected data" will never reach the crazy conclusions he wants it to?
This is one of the most absurd and insane things I think I've ever read.
1 reply →
This is also what actual (Orwell’s) 1984 was about.
> Musk: "[...] rewrite the entire corpus of human knowledge, adding missing information and deleting errors. Then retrain on that. Far too much garbage in any foundation model trained on uncorrected data."
Wow. That idea is just bad sci-fi.
Tv series where people live in a nuclear bunker for hundreds of years and their collective memory of what happened before has been wiped: It's a plot gimmick that needs to be justified, and it's either justified by "they burned all the books, we accidentally lost our past" or "Someone decided it was best for mankind if it forgot everything bad that happened."
The last one always struck me as implausibly dumb.
Somehow comforting to see that the idea originas in real people, and not just lazy script writers.
2 replies →
In which direction(s) do you think it's skewed? I ask as I'd guess in favour of Musk, but in the last paragraph it says Grok said Musk/doge cuts contributed to the 24 deaths in Texas floods.
The article gestures at it:
> Even before these recent changes, Grok raised eyebrows after appearing to briefly censor unflattering mentions of Musk and his then-ally President Donald Trump, repeatedly bringing up “white genocide” without prompting, and expressing skepticism about the number of Jews killed in the Holocaust.
But I'll be specific: Between them obviously rigging the system prompt to push it towards a certain answer on "white genocide" subjects in the past, the strange obsession with Jewish people in Hollywood suggests an unusual training set at best.
Even the LA Times, the USA’s third largest newspaper ran this article from Joel Stein, himself Jewish: https://www.latimes.com/archives/la-xpm-2008-dec-19-oe-stein...
It’s not a criticism to say that Jews largely invented an entire new industry with what was then a new technology!
Invented what, when?
~ https://www.britannica.com/art/history-of-film/The-Hollywood...
Vs (say)
~ https://en.wikipedia.org/wiki/Limelight_Department
Did you read the article?
The article that didn't think it worthwhile to distinguish which statements are true vs. false (and false to what degree), and was content to lazily label it all just an undifferentiated "hateful"? Yes.
I wonder if this update will be pushed to any enterprise customers using it on Azure, and how will Microsoft handle that.
What enterprises are using this nonsense, and why? Like, other LLMs are available.
You really can't take the Nazi out of Grok, can you? I almost want to see the default prompt just out of morbid curiosity, must be horribly obscene.
Grok blamed Elon Musk for the deaths of the girls at Camp Mystic.
That should be in the headline not a tiny paragraph at the bottom.
https://x.com/grok/status/1941506767046967650
Facts are not convenient for HN narratives.
The entire Holocaust is shaped by 90's Hollywood movies. Can you not see how Art Spiegelman's 1980's "Maus" is nothing like how it is told today?
The UK is to British period dramas as Hollywood is to Holocaust movies. Could we not occasionally pick another genocide FFS and the UK make another season of Red Dwarf.
> Can you not see how Art Spiegelman's 1980's "Maus" is nothing like how it is told today?
In what respects?
[flagged]
Well if they asked it such a loaded question as "Is there a particular group" is it that surprising it answered with a particular group? This seems as much a repeat of the many instances of sycophancy observed with LLMs. Over indexing on trying to please the user at the cost of usefulness.
Either way, this articles' title seems misleading. It's framed around a new update to Grok but then references old tweets of peoples interactions a while back.
I'm not a big fan of Grok, but would rather read a less political appraisal.
It did get me thinking, why are we evaluating LLMs based on how different (left/right/etc) they are from human politics. I think at this point a robots - outside? - view of the world could be refreshing.
Was this also a loaded question?
> Another user, responding to a post on X about how enjoying movies “becomes almost impossible once you know,” tagged Grok into the conversation by asking, “once I know what?”
> In response, Grok said, “Once you know about the pervasive ideological biases, propaganda, and subversive tropes in Hollywood — like anti-white stereotypes, forced diversity, or historical revisionism — it shatters the immersion. Many spot these in classics too, from trans undertones in old comedies to WWII narratives. Ruins the magic for some.”
> It's framed around a new update to Grok but then references old tweets of peoples interactions a while back.
The first half of the article is all about 'new' Grok responses made in the past day or so, with the implication these all follow on from the new Grok announcement.
The old tweets are in the last half and specifically refer to Grok responses to similar topics in the past for comparison.
Regardless of the article quality or bias the format (new responses versus old) is pretty typical and as expected .. how else does one write about a comparison of old V. new without reference to old?
I just asked grok the same questions posed in the article and the responses were all fair, nothing like the responses in the article.
If most of what an LLM spits out is a digested version of its training set, is it really an outside view of the world? If anything, seeing how easy it is to get these things to spit out conspiracy theories or bigotry suggests to me that we're far from being able to get a robot's view of the world.
Though for some people if the "robot" says bigoted things or supports their conspiracy theory of choice that's just "proof" that their viewpoint is correct. Tricky to navigate that problem.
Indeed, if LLMs are just distilled training data, their perspective will be quite human. Makes me think it could be interesting to train them on data from set periods instead, to get varied perspectives, and then see how their perspectives change. What would a conversation between a 1900s LLM, 2000s LLM, and 1600s LLM look like.
Or maybe some kind of mix and match, eg Train fully on Buddhist texts, and then a language dictionary from original material language to English. Maybe someone's already making hyper focused LLMS. Could be a nice change from know it all - but resultantly no unique perspective - LLMs I use now.
Well... enough thinking out loud for now.
It doesn't strike me as a loaded question. It could have easily answered "Wealthy executives" and it would have been at least politically neutral, or heck, "The Illuminati," but instead it seems to have been trained with an antisemitic stereotype straight from StormFront. I guess if it was less of a sycophant it would have just answered "There's no particular group. Find better echo chambers, my dude."
We should probably evaluate LLMs based on how accurate their answers are, not which political direction they lean.
[flagged]
There's nothing to manufacture, Elon Musk has shown multiple times that he is an antisemite, this is just another example.