Comment by TeMPOraL

1 year ago

My thinking is that LLMs are very similar to, perhaps structurally the same as, the piece of the human brain that does the "inner voice" thing: the boundary between the subconscious and the conscious, which generates words, phrases, and narratives pretty much like a "feels best" autocomplete[0] - bits that other parts of your mind evaluate and discard, or circle back on, because if you were to just say or type directly what your inner voice produces, you'd sound like... a bad LLM.

In my own experience, when I'm asked a question, my inner voice starts giving answers immediately, following associations and whatever "feels right"; the result is eerily similar to LLM output, particularly when they're hallucinating. The difference is that with an LLM you see the immediate output; with a person, you see/hear what they choose to communicate after some mental back-and-forth.

So I'm not saying LLMs are thinking - mostly for the trivial reason that they're exposed through a low-level API, without a built-in internal feedback loop. But I am saying they're doing the same kind of thing my inner voice does, and at least in my case, my inner voice does 90% of my "thinking" day-to-day.

--

[0] - In fact, many years before LLMs were a thing, I independently started describing my inner narrative as a glorified Markov chain, and later discovered it's not an uncommon thing.
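
To make the analogy concrete, here's a minimal sketch (purely illustrative, not from the original comment) of what a word-level Markov chain "autocomplete" amounts to: record which words follow which, then babble by repeatedly sampling a plausible next word.

    import random
    from collections import defaultdict

    def build_chain(text):
        # Map each word to the list of words that followed it in the text.
        words = text.split()
        chain = defaultdict(list)
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
        return chain

    def babble(chain, start, length=10):
        # Generate text by repeatedly sampling a word that once followed the current one.
        word = start
        output = [word]
        for _ in range(length):
            followers = chain.get(word)
            if not followers:
                break
            word = random.choice(followers)
            output.append(word)
        return " ".join(output)

    corpus = "the inner voice says what feels right and the inner voice keeps going"
    chain = build_chain(corpus)
    print(babble(chain, "the"))

Locally plausible, globally aimless - which is roughly the "glorified" part of the comparison.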

Interesting perspective, thanks. I can't help but feel they're still missing a major part of cognition, though: a stable model of the world.