Comment by freedomben

2 days ago

> “We have removed that deprecated code and refactored the entire system to prevent further abuse. The new system prompt for the @grok bot will be published to our public github repo,” the statement said.

Love them or hate them, or somewhere in between, I do appreciate this transparency.

It’s a kinda meaningless statement, tbh.

Pull requests to delete dead code or refactor are super common. It’s maintenance. Bravo.

What was actually changed, I wonder?

And the system prompt is important, and it's good they're publishing it, but clearly the issue is the training data and the compliance with user prompts that made it a troll bot.

So should we expect anything different moving forward? I'm not expecting it. Musk's character has not changed, and he remains the driving force behind both companies.

If they don’t publish it, the prompt will just get leaked by someone manipulating Grok itself within hours of release, and then picked apart and criticized. It’s not about transparency but about claiming to be transparent to save face.

Is there any legal obligation for them not to lie about the prompt?

  • If they lie and any harm comes from it, yes, that increases liability.

  • Every LLM seems to have a prominent disclaimer that results can be wrong, that hallucinations exist, that you should verify the output, etc.

  I’d wager it’s pretty much impossible to prove in court that whatever harm occurred was due to intent by xAI, or even that xAI is liable at all, given all the disclaimers.