Comment by gwbas1c

14 hours ago

> I wish that each such generative AI service came with a brief but conspicuous warning explaining that these systems can sometimes produce output that is factually incorrect, misleading or incomplete.

Guess what?

Books in the library can be wrong, even peer-reviewed encyclopedias.

Pages on the internet can be wrong, even Wikipedia.

When accuracy is important, you must look at multiple sources. I think AI will get better at providing accurate information, but only a fool relies on a single information source for critical decisions.

Yes, LLM text prediction and peer-reviewed encyclopedias are the same. Good on you for throwing internet pages in there too; that brings balance or something.

  • My understanding of the parent is more charitable: if your thinking process relies on being told only the truth, you are going to fare poorly in this world.

    LLMs are an example, but so are random pages on the internet, a bunch of stuff we get served by the media (mainstream or otherwise), "expert opinions" by biased or sponsored experts or by experts in a different field, etc.

    As the popular quip goes: It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.

    With LLMs, we actually do get the warnings. Here's the ChatGPT footer: "ChatGPT can make mistakes. Check important info." And Claude's: "Claude is AI and can make mistakes. Please double-check responses."

    For a random website, such disclaimers, if they exist at all, are usually buried deep in the terms of use, not stated up front.

> I think AI will get better at providing accurate information

I think AI will get better at providing multiple sources.