Comment by kristjansson
11 hours ago
> if you just prompt the AI and believe what it tells you then you have AI psychosis
This is the right definition. LLM outputs have undefined truth value. They're mechanized Frankfurtian bullshitters. Which can be valuable! If you have the tools or taste to filter the things that happen to be true from the rest of the dross.
However! We need a nicer word for it. Telling someone they have "AI psychosis" feels a bit impolitic.
Maybe we reclaim “toked out” from our misspent youths?
e.g. “This piece feels a little toked out. Let’s verify a few of Claude’s claims”
“Toked out” is really, really good, thank you for this
I wouldn't say they have an undefined truth value. Their source of truth is their training data. The problem is that human text is not tightly coupled to capital-T Truth.
Nor is the LLM output tightly coupled to the training data. They'll "eagerly"[1] fill in the blanks wherever it sounds good.
[1] here I don't mean to imply agency, just vigor.