Comment by derektank

9 hours ago

I wouldn’t say they have an undefined truth value. Their source of truth is their training data. The problem is that human text is not tightly coupled to the capital T truth.

Nor is LLM output tightly coupled to the training data. Models will "eagerly"[1] fill in the blanks wherever it sounds good.

[1] here I don't mean to imply agency, just vigor.