Comment by JumpCrisscross

2 days ago

> If they lie and any harm comes from it, yes, that increases liability.

Every LLM seems to carry a prominent disclaimer that results can be wrong, hallucinations happen, verify the output, etc.

I’d wager it’s pretty much impossible to prove in court that whatever harm occurred was due to intent on xAI’s part, or even that xAI is liable at all, given those disclaimers.