Comment by og_kalu
2 years ago
>We do not normally hallucinate.
Oh yes we do, lol. Many experiments show that our perception of reality, and of our own cognition, is often entirely divorced from what's actually going on.
Your brain is making stuff up all the time. The sense data you perceive is partly fabricated. Your memories are partly fabricated. Your decision rationales are post hoc rationalizations more often than not; that is, you don't genuinely know why you make certain decisions or what preferences actually inform them. You just think you do. You can't recreate previous mental states. You're usually not aware of any of this, but it is happening.
LLMs are just undoubtedly worse right now.
We don't hallucinate in a way, or to an extent, that compromises our ability to do our jobs.
Currently no one will trust an LLM to even run a helpline; that's just a lawsuit waiting to happen should the AI hallucinate a “solution” that results in loss of property, injury, or death.
>Currently no one will trust an LLM to even run a helpline; that's just a lawsuit waiting to happen should the AI hallucinate a “solution” that results in loss of property, injury, or death.
I'm not sure exactly what you mean by helpline here (general customer service, or something more specific?), but assuming the former...
How much power do you think most helplines actually have? Most are running off pre-written scripts and guidelines, with very little in the way of decision-making power. There's a reason for that.
Injury or death? LLM hallucinations are bounded by the context of the conversation. Unless you're speaking to Dr GPT or something to that effect, a response resulting in injury or death isn't happening.
Having worked in the helpline business, I can tell you that many corporations would and do use LLMs for their helplines, and have used worse options before.