Comment by leptons

3 days ago

Good. I typed in a search for some medication I was taking and Google's "AI" summary was bordering on criminal. The WebMD site had the correct info, as did the manufacturer's website. Google hallucinated a bunch of stuff about it, and I knew then that they needed to put a stop to LLMs spewing slop about anything to do with health or medical info.

What medication is this? It's always given me good info.

  • I do not rely on Google's AI summaries for health-related inquiries. They have proven wrong too often.

    - Med student

    • I do not rely on the conclusions of doctors because they have proven wrong too many times.

      - guy with narcolepsy who was incorrectly diagnosed by a dozen or so doctors

  • I'm not sure why you would think I would blurt that out on the internet.

    And are you sure it's giving you good info? "AI" is famously subject to hallucinations, so you may not be getting the "good info" you think you're getting. Be careful with "AI"; it's not an all-seeing, all-knowing, infallible oracle.

s/hallucinated/fabricated/, please.

  • arguably: incorrectly guessed*

    in a way, overconfident guessing is a better match for what's actually happening than hallucination or fabrication would be

    "confabulation", though, seems perfect:

    “Confabulation is distinguished from lying as there is no intent to deceive and the person is unaware the information is false. Although individuals can present blatantly false information, confabulation can also seem to be coherent, internally consistent, and relatively normal.”

    https://en.wikipedia.org/wiki/Confabulation

    * insofar as "guess" conveys an attempt to land somewhere in the right zone