Comment by TimTheTinker

7 hours ago

LLMs have zero metacognition. Don't be fooled - their output is stochastic inference and they have no self-awareness. The best you'll see is an improvised post-hoc rationalization story.

You can turn all of these arguments around and argue the same is true of humans. Don't be fooled by dogmatic people who spread the idea that the human mind is the pinnacle of cognition in the universe; best to leave that to religion.

  • Humans may not always be that smart, but we do at least have an internal state and an awareness of that internal state - a "self-awareness".

    AI most certainly has nothing of the sort, and any appearance to the contrary is the direct result of training data.

> The best you'll see is an improvised post-hoc rationalization story.

Funny, because "post-hoc rationalization" is how many neuroscientists think humans operate.

That LLMs are stochastic inference engines is obvious by construction, but you skipped the step where you prove that human thought, self-awareness, and metacognition are not reducible to stochastic inference.

  • I'm not saying we don't engage in post-hoc rationalization. But self-awareness is a trait we possess to varying degrees, and reporting on a remembered internal state is at least sometimes possible, even if we don't always choose to do so.