Comment by worrycue
2 years ago
We don’t hallucinate in such a way, or to the extent, that it compromises our ability to do our jobs.
Currently no one will trust an LLM to even run a helpline - that's just a lawsuit waiting to happen should the AI hallucinate a “solution” that results in loss of property, injury or death.
>Currently no one will trust an LLM to even run a helpline - that's just a lawsuit waiting to happen should the AI hallucinate a “solution” that results in loss of property, injury or death.
I'm not quite sure what you mean by "helpline" here (general customer service, or something more specific?), but assuming the former…
How much power do you think most helplines actually have? Most are running off pre-written scripts and guidelines with very little in the way of decision-making power. There's a reason for that.
Injury or death? The harm an LLM hallucination can cause depends on the context it's used in. Unless you're speaking to Dr GPT or something to that effect, a response resulting in injury or death isn't happening.
Having worked in the helpline business, I can tell you that many corporations would and do use LLMs for their helplines, and have used worse options before.