Comment by SAI_Peregrinus
2 days ago
I wonder if what happens when we dream is similar to what happens in AIs. We start with some model of reality, generate a scenario, and extrapolate from it. It pretty much always goes "off the rails" at some point; dreams don't stay realistic for long.
When we're awake we have continual inputs from the outside world, and those inputs keep our mental model accurate, since we're constantly observing and correcting against reality.
Could it be that LLMs are essentially just dreaming? Could we feed them real-world inputs continually to let them "wake up"? I suspect more is needed; the separate training & inference phases of LLMs are quite unlike how humans work.
This is the thing that stands out to me. Nearly all of the criticisms levelled at LLMs describe mistakes I myself would make if you locked me in a sensory isolation tank and told me I was being paid a million bucks an hour to think really hard. Humans already have terms for this: overthinking, rumination, mania, paranoia, dreaming.
Similarly, a lot of cognitive tasks become much more difficult without the ability to cross-check against sensory data. Blindfold chess. Mental mathematics.
Whatever it is that sleep does to us, agents are not yet capable of it.