Comment by causal
9 days ago
This is my greatest cause for alarm regarding LLM adoption. I am not yet sure AI will ever be good enough to use without experts watching them carefully; but they are certainly good enough that non-experts cannot tell the difference.
My dad is retired and enamored with ChatGPT. He’s been teaching classes to seniors and evangelizing its use to all his friends. Every time he calls he gives me an update on who he’s converted into a ChatGPT user. He seems disappointed with anyone who doesn’t use it for everything after he tells them about it.
A couple of days ago he was telling me about one lady he was trying to sell on it who wouldn’t use it. She took the position that if she can’t trust the answers all the time, she isn’t going to trust or use it for anything. My dad almost seemed offended by this idea; he couldn’t understand why someone wouldn’t want the benefits it could offer, even if it wasn’t perfect.
I think her position was very sound. We see how much misinformation spreads online and how vulnerable people are to it. Wanting a trusted source of information is not a bad thing. Getting information more quickly is of little value if it isn’t reliable data.
If I prod my dad enough about it, he will admit that ChatGPT has made some mistakes that he caught. He knew enough to question it more when it was wrong. The problem is, if he already knew the answer, why was he asking in the first place… and if it was something he wasn’t well-versed in, how does he know it’s giving him good data?
People are defaulting to trust, unless they catch the LLM in a lie. How many times does someone have to lie to a person before they are labeled a liar and no longer trusted at face value? For me, these LLMs have been labeled a liar and I don’t trust them. Trust takes a long time to rebuild once it’s broken.
I mostly use LLMs to augment search, not replace it. If it gives me an answer, I’ll click through to the sourced reference and see what it says there, and evaluate if it’s a source worth trusting. In many cases the LLM will get me to the right page, but it will jumble up the details and get them wrong, like a bad game of telephone.
How do you know that it’s a source worth trusting?
I think the expectation of AI being perfect all the time is probably driven by the hype and marketing of “1 million PhDs in your pocket”.
If you compare AI to an average person, or to a random website you’d come across on Google, I would wager that AI is more likely to be accurate in almost every scenario.
For hyper-specific areas, niche domains, and rapidly evolving data that isn’t being published — a lot less so.
Thanks for sharing that anecdote. I think everyone is susceptible to misinformation, and seniors might be especially unprepared to catch an LLM’s tricks.