Comment by mingus88
1 day ago
Every LLM seems to have a prominent disclaimer that results can be wrong, hallucinations exist, verify the output, etc.
I’d wager it’s pretty much impossible to prove in court that whatever harm occurred was due to intent by xAI, or even to establish liability, given all the disclaimers.