Comment by coppsilgold
3 days ago
Is a brain not a token prediction machine?
Tokens in the form of neural impulses go in, tokens in the form of neural impulses go out.
We would like to believe that there is something profound happening inside, and we call that consciousness. Unfortunately, when reading about split-brain patient experiments or cases of agenesis of the corpus callosum, I feel like we are all deceived, every moment of every day. I came to the realization that the confabulation observed in those patients is just a more pronounced form of what happens normally.
Could an LLM trained on nothing and looped back on itself eventually develop language, more complex concepts, and everything else, entirely from scratch? If you loop LLMs on each other, training them so they "learn" over time, will they eventually form new concepts, cultures, and languages organically? I don't have an answer to that question, but I strongly doubt it.
There's clearly more going on in the human mind than just token prediction.
If you come up with a genetic algorithm scaffolding that affects both the architecture and the training algorithm, instantiate it in an artificial selection environment, and give it trillions of generations to evolve evolvability just right (as life had billions of years to do), then the answer is yes: I'm certain it will, and probably much sooner than we did.
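To be concrete about what I mean by "scaffolding", here is a minimal sketch in Python. The genome carries one architecture knob and one training knob, and selection acts on both at once. The fitness function, mutation scheme, and population sizes are all made-up placeholders; a real selection environment would actually train and evaluate a model in that slot.

    # Toy sketch of a GA whose genome encodes both an architecture choice
    # and a training hyperparameter. Everything below is a placeholder
    # stand-in, not a real training setup.
    import random

    def random_genome():
        # genome = (hidden_width, learning_rate): hypothetical stand-ins
        # for "architecture" and "training algorithm"
        return (random.randint(1, 128), 10 ** random.uniform(-4, -1))

    def fitness(genome):
        # Placeholder selection environment: pretend genomes near
        # width 64 and lr 0.01 "survive" best. A real environment would
        # train and evaluate a model here instead.
        width, lr = genome
        return -abs(width - 64) - abs(lr - 0.01) * 1000

    def mutate(genome):
        width, lr = genome
        if random.random() < 0.5:
            width = max(1, width + random.randint(-8, 8))
        else:
            lr *= 10 ** random.uniform(-0.2, 0.2)
        return (width, lr)

    population = [random_genome() for _ in range(50)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]                      # selection
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(40)]    # reproduction + mutation

    print("best genome after selection:", max(population, key=fitness))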
Also, I think there is a very high chance that given an existing LLM architecture there exists a set of weights that would manifest a true intelligence immediately upon instantiation (with anterograde amnesia). Finding this set of weights is the problem.
I'm certain it wouldn't, and you're certain it would, and we have the same amount of evidence (and probably roughly the same means for running such an expensive experiment). I think they're more likely to go slowly mad, degrading their reasoning to nothing useful rather than building something real, but that could be different if they weren't detached from sensory input. Human minds looping for generations without senses, a world, or bodies might also go the same way.
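To make the "slowly going mad" intuition concrete, here's a toy version of the loop, with two bigram character models standing in for LLMs (a deliberately crude stand-in, not a claim about real models). Each one trains only on the other's samples, and since any character or transition that drops out of a sample never comes back, the diversity can only shrink.

    # Two bigram "models" train only on each other's output, with no
    # fresh input. Support that is lost is lost for good.
    import random
    from collections import defaultdict

    def train(text):
        model = defaultdict(list)
        for a, b in zip(text, text[1:]):
            model[a].append(b)
        return model

    def sample(model, length=200):
        out = [random.choice(list(model))]
        for _ in range(length):
            successors = model.get(out[-1]) or list(model)
            out.append(random.choice(successors))
        return "".join(out)

    corpus = "the quick brown fox jumps over the lazy dog " * 20
    model_a, model_b = train(corpus), train(corpus)
    for step in range(20):
        model_a = train(sample(model_b))   # A now learns only from B's output
        model_b = train(sample(model_a))   # B now learns only from A's output
        print(step, "distinct characters A still knows:", len(model_a))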
> Also, I think there is a very high chance that given an existing LLM architecture there exists a set of weights that would manifest a true intelligence immediately upon instantiation (with anterograde amnesia).
I don't see why that would be the case at all, and I regularly use the latest and most expensive LLMs and understand how they work well enough to implement them at the simplest level myself, so it's not just me being uninformed or ignorant.
> Is a brain not a token prediction machine?
I would say that token prediction is one of the things a brain does, and in a lot of people, most of what it does. But I don't think it's the whole story. Possibly it has been the whole story since the development of language.
We know that consciousness exists because we constantly experience it. It’s really the only thing we can ever know with certainty.
That’s the point of “I think therefore I am.”
You know that your own consciousness exists, that's where certainty ends. The rest of us might just pretend. :)