Comment by lolinder
1 day ago
Source [0]. The examples look pretty clearly like they stuck it in the context window rather than training it in. Grok consistently structures its replies as though the user it's replying to is the one who brought up white genocide in South Africa, and it responds the way LLMs often respond to such topics: saying that it's controversial and giving both perspectives. That's not the behavior I would expect if they had used the Golden Gate Claude method, which inserted the Golden Gate Bridge a bit more fluidly into the conversation rather than seeming to address a phantom sentence the user supposedly said.
Also, let's be honest: at a Musk company they're going to have taken the shortest possible route to accomplishing what he wanted.
[0] https://www.cnn.com/2025/05/14/business/grok-ai-chatbot-repl...
When your boss is a crazy, drugged-up billionaire who has ADD and also runs the government, when he tells you to do something, you do it the fast way.