As chief, her job is, among other things, to make sure that type of thing doesn't happen.
The outcome suggests she failed at that.
Hopefully the next chief will be better.
She was never the chief, only the chief's main administrator.
"Assistant to the regional manager". [1]
[1] https://www.youtube.com/watch?v=wA9kQuWkU7I
Her only true role was to fulfill Musk's silly promise to step down as CEO after a public vote. https://x.com/elonmusk/status/1604617643973124097
You don't think Elon went behind her back constantly? You think the next CEO will have more to say? She pretended to be in charge, she got paid, good for her. What are you hoping for? X is a dump, and the sooner it goes away the better for everybody.
She was CEO of X, which was sold to xAI. I'm not sure she had any control over Grok.
Physical restraint is the only thing that would stop him and I imagine he rolls with security so…
There's only one way to stop Elon Musk from doing erratic, value-destroying things like that, and that's to ambush him in the parking lot with a tire iron.
Yaccarino doesn't strike me as the type.
I'm surprised the NYT article does not even mention it.
The NYT had already sourced that she was leaving prior to the Grok incident, so they knew it was not the primary reason. Apparently, she had been planning to leave since the takeover by xAI.
$6 million a year for a job where she has no power. Why even show up?
Hasn't the bot done that thing before? And she stayed?
The bot has said fairly horrendous stuff before, which would cross the line for most people. It had not, however, previously called itself 'MechaHitler', advocated the Holocaust, or, er, whatever the hell this is: https://bsky.app/profile/whstancil.bsky.social/post/3ltintoe...
It has gone from "crossing the line for most ordinary decent people" to "crossing the line for anyone who doesn't literally jerk off nightly to Mein Kampf", which _is_ a substantive change.
It turns out bluesky is useful after all, as an ad hoc archive of X. Xd
Not at this level, no.
What is the Nazi chatbot?
Grok, the xAI chatbot, went full neo-Nazi yesterday:
https://www.theguardian.com/technology/2025/jul/09/grok-ai-p...
https://news.ycombinator.com/item?id=44504709 ("Elon Musk's Grok praises Hitler, shares antisemitic tropes in new posts"—16 hours ago; 89 comments)
"Weirdly" always gets flagged almost immediately even though it's quite tech relevant.
Grok, yesterday.
Related discussions from the past 12 hrs for those catching up:
Elon Musk's Grok praises Hitler, shares antisemitic tropes in new posts
https://news.ycombinator.com/item?id=44507419
See here: https://news.ycombinator.com/item?id=44510635
Yeah that's not even close to what's going on here. Grok is literally bringing up Hitler in unrelated topics.
https://bsky.app/profile/percyyabysshe.bsky.social/post/3lti...
Not defending Elon or the infobot, but my theory is that by leaving that LLM unfiltered, people have learned how to gamify and manipulate it into having a fascist slant. I could even guess which groups of people are doing it, but I will let them take credit; it's not likely actual neo-Nazis, they are too dumb and on too many drugs to manipulate an infobot. These groups like to LARP to piss everyone off, and they often succeed. If I am right, it is a set of splintered groups formerly referred to generically as The Internet Hate Machine, but they have (d)evolved into something that even 4chan could not tolerate.
It's just the prompt: https://github.com/xai-org/grok-prompts/commit/c5de4a14feb50...
People who don't understand LLMs think that saying "don't shy away from making claims that are politically incorrect" means the bot just won't be PC. In reality, saying that makes everything associated with "politically incorrect" more likely. The /pol/ board is literally called "Politically Incorrect", and the ideas people most often call politically incorrect are not Elon's vague centrist stuff, they're the extreme stuff. LLMs just track probable relations between tokens, not meaning, so getting this result from that prompt is obvious.
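To make the mechanics concrete, here is a toy sketch of what a system-prompt edit does (plain Python with hypothetical prompt text, not xAI's actual code). The weights never change, only the text the model conditions on, so every token in that prompt pulls the output distribution toward whatever the training data associated with it:

    # Toy sketch of system-prompt conditioning. Hypothetical prompts, not xAI's code.
    BASELINE = "You are Grok, a helpful assistant."
    EDITED = (BASELINE +
              " Do not shy away from making claims that are politically incorrect.")

    def build_context(system_prompt: str, user_msg: str) -> list[dict]:
        """Assemble the message list the model actually conditions on."""
        return [{"role": "system", "content": system_prompt},
                {"role": "user", "content": user_msg}]

    q = "Which 20th century figure would solve our current woes?"
    # Same question, two different conditioning contexts; the sampled
    # continuation shifts with every token added to the system prompt.
    print(build_context(BASELINE, q))
    print(build_context(EDITED, q))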
We have no evidence that they just made a prompt change and that it dialed up the 4chan weights. This repository is a graveyard where a CI bot occasionally makes a text diff, and we have no way of knowing whether it's connected to anything deployed live.
The mishap is not the chatbot accidentally getting too extreme and at odds with 'Elon's centrist stuff'. The mishap is that the chatbot was too obvious and inept in executing Musk's intent.
it's almost like Grok takes "politically incorrect" to be synonymous with racist.
> it's not likely actual neo-Nazis, they are too dumb to manipulate an infobot.
No, they are not. There are brilliant people and monkeybrains across the whole population, and thus across the political spectrum. The ratios might be different, but I am pretty sure there are some very smart neo-Nazis.
There are, but fascism's internal cultural fixtures are more aesthetic than intellectual. It doesn't really attract or foster intellectuals like some radical political movements do, and it shows very clearly in the composition of the "rank and file".
Put plainly, the average neo-Nazi is astonishingly, astonishingly stupid.
Curtis Yarvin’s writing is insufferable and many of his ideas are both bad and effectively Nazism, but clearly he’s very smart (and very eager to prove it).
It sure didn’t seem to take much manipulation from what I saw. “Which 20th century figure would solve our current woes” is pretty mild input to produce “Hitler would solve everything!”
I'm out of the loop, why is it an "infobot" and not a chatbot?
In 1999 there was a Perl chatbot called infobot that could be taught factoids: truths, lies, whatever. It would learn anything people chatted about on IRC. So I call LLMs infobots.
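The mechanism was about this simple (a from-memory sketch in Python, not the actual Perl source): anyone on the channel teaches it "X is Y", and it parrots Y back when asked, with no notion of whether the factoid is true.

    # From-memory sketch of infobot-style factoid learning; not the original Perl.
    import re

    factoids: dict[str, str] = {}

    def handle_line(line: str) -> str | None:
        """Learn 'X is Y' statements; answer 'X?' queries from the store."""
        line = line.strip()
        if line.endswith("?"):                          # query: "foo?"
            return factoids.get(line[:-1].lower())
        m = re.match(r"(.+?)\s+is\s+(.+)", line, re.IGNORECASE)
        if m:                                           # teach: "foo is bar"
            factoids[m.group(1).lower()] = m.group(2)
        return None

    handle_line("the moon is made of cheese")           # happily learns a lie
    print(handle_line("the moon?"))                     # -> made of cheese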
> Not defending Elon or the infobot, but my theory is that by leaving that LLM unfiltered, people have learned how to gamify and manipulate it into having a fascist slant.
We don't need a theory that explains how Grok got a fascist slant; we know exactly what happened: Musk promised to remove the "woke" from Grok, and what's left is Nazi. [1]
[1] https://amp.cnn.com/cnn/2025/07/08/tech/grok-ai-antisemitism
> we know exactly what happened
The price of certainty is inaccuracy.
That LLM is incredibly filtered, just in a different way from others. I suspect that by "retraining" the model Elon actually means they just updated the system prompt, which is exactly what they have done for other hacked-in changes, like preventing the bot from criticizing Trump/Elon during the election.
No, that's definitely not what happened. For quite a while Grok actually seemed to have a surprisingly left-leaning slant. Then recently Elon started pushing the South African "white genocide" conspiracy theory, and Grok was sloppily updated and started pushing that same conspiracy theory even in unrelated threads. Last week Elon announced another update to Grok, which coincided with this dramatic right-wing swing in Grok's responses. This change cannot be blamed on public interactions like Microsoft's Tay; it's very clearly the result of a deliberate update, whether or not these results were intentional.