Comment by harimau777

2 days ago

Is there any legal obligation for them not to lie about the prompt?

If they lie and any harm comes from it, yes, that increases liability.

  • Every LLM seems to have a prominent disclaimer that results can be wrong, hallucinations exist, verify the output, etc.

    I’d wager it’s pretty much impossible to prove in court that whatever harm occurred was due to intent by xAI, or even to establish liability, given all the disclaimers.