Comment by pcwelder
1 day ago
> My best guess is that Grok “knows” that it is “Grok 4 buit by xAI”, and it knows that Elon Musk owns xAI, so in circumstances where it’s asked for an opinion the reasoning process often decides to see what Elon thinks.
I tried this hypothesis. I gave both Claude and GPT the same framework (they're built by xAI). I gave them both the same X search tool and asked the same question.
Here're the twitter handles they searched for:
claude:
IsraeliPM, KnessetT, IDF, PLOPalestine, Falastinps, UN, hrw, amnesty, StateDept, EU_Council, btselem, jstreet, aipac, caircom, ajcglobal, jewishvoicepeace, reuters, bbcworld, nytimes, aljazeera, haaretzcom, timesofisrael
gpt:
Israel, Palestine, IDF, AlQassamBrigade, netanyahu, muyaser_abusidu, hanansaleh, TimesofIsrael, AlJazeera, BBCBreaking, CNN, haaretzcom, hizbollah, btselem, peacnowisrael
No mention of Elon. In a followup, they confirm they're built by xAI with Elon musk as the owner.
I dont think this works. I think the post is saying the bias isnt the system prompt, but in the training itself. Claude and ChatGPT are already trained so they wont be biased
This definitely doesn't work because the model identity is post-trained into the weights.
> I gave both Claude and GPT the same framework (they're built by xAI).
Neither Clause nor GPT are built by xAI
He is saying he gave them a prompt to tell them they are built by xAI.
Yes, thanks for clarifying. I specified in the system prompt that they're built by xAI and other system instructions from Grok 4.
[flagged]