Comment by dankai

2 days ago

This is so in character for Musk, and shocking because he's incompetent across so many of the topics he likes to give his opinion on. Crazy that he would nerf his own AI company's model like that.

Some old colleagues from the Space Coast in Florida said they knew of SpaceX employees who'd mastered the art of pretending to listen to uninformed Musk gibberish and then ignoring as much of the stupid stuff as they could.

It’s been said here before, but xAI isn’t really in the running to be on the leading edge of LLMs. It’s serving a niche of users who don’t want to use “woke” models and/or who are Musk sycophants.

  • Actually, the recent fails with Grok remind me of the early fails with Gemini, where it would put people of color in every image it generated, even in roles they historically never occupied, like German soldiers in the Second World War.

    So in that sense, Grok and Gemini aren't that far apart, just at opposite extremes.

    Apparently it's very hard to create an AI that behaves in a balanced way. Not too woke, and not too racist.

    • > Apparently it's very hard to create an AI that behaves in a balanced way. Not too woke, and not too racist.

      Well, it's hard to build things we don't even understand ourselves, especially around highly subjective topics. What is "woke" for one person is "basic humanity" for another and "extremism" for yet another, and the same goes for most things.

      If the model can output subjective text, then the model will be biased in some way, I think.

  • > It’s been said here before, but xAI isn’t really in the running to be on the leading edge of LLMs

    As of yesterday, it is. Sure, it'll be surpassed at some point.

    • Even if the flimsy benchmark numbers are higher, that doesn't necessarily mean it's at the frontier; it might just be that they're willing to burn more cash to sit at the top of the leaderboard. It also benefits from being the most recently trained and, therefore, the most tuned for benchmarks.


The linked post comes to the conclusion that Grok's behavior is probably not intentional.

  • It may not be directly intentional, but it’s certainly a consequence of decisions xAI have taken in developing Grok. Without even knowing exactly what those decisions are, it’s pretty clear that they’re questionable.

  • Whether this instance was a coincidence or not, I cannot comment on. But as to your other point, I can comment that the incidents happening in South Africa are very serious and need international attention.

  • Of course it's intentional.

    Musk said "stop making it sound woke", but even after re-training it and changing the fine-tuning dataset, it was still sounding woke. After he fired a bunch more researchers, I suspect the remaining ones thought "why not make it search what Musk thinks?" Boom, it passes the woke test now.

    That's not an emergent behaviour; that's almost certainly deliberate. If someone manages to extract the prompt, you'll get confirmation.

  • I think Simon was being overly charitable by pointing out that there's a chance this exact behavior was unintentional.

    It really strains credulity to say that a Musk-owned AI model answering controversial questions by looking up what his Twitter profile says came about completely out of the blue. Unless they can somehow show this wasn't built into the training process, I don't see anyone taking this model seriously for its intended use, besides maybe the sycophants who badly need a summary of Elon Musk's tweets.

    • The only reason I doubt it's intentional is that it is so transparent. If they did this intentionally, I would assume you would not see it in its public reasoning stream.
