Comment by justusthane

6 hours ago

I listen to a podcast. The hosts are not tech people. They don't know much about AI, but they play around with it to the extent that most people do. They're both media professionals with long careers in radio news. They closely follow the news, and are very aware of how LLMs hallucinate (and have experienced it themselves).

Recently one of them asked Gemini a very detailed question about some specific baseball stats and was exclaiming over the quality of the information he got back and how it would have been impossible or at least extremely difficult to find the information via a traditional search.

It wasn't until his cohost asked whether he had verified the information that he realized he hadn't -- he had just taken it at face value.

I recognize this is a single anecdote, but I think it illustrates a real tendency to trust what an LLM gives you when it's stated so confidently and in so much detail -- even if you should know better.