Comment by Shorel
2 days ago
By analogy with human brains: because our own brains are far more than the Broca's area in them.
Evolution selects for efficiency.
If token prediction could handle everything, our brains would do nothing but token prediction. Even the brains of fish and insects would work that way.
The human brain has dedicated clusters of neurons for several different cognitive abilities, including face recognition, line detection, body-part self-perception, 3D spatial orientation, and so on.
> Evolution selects for efficiency.
I think this is a poor argument here. From an evolutionary point of view, our brains are optimized to:
- Provide fine motor control to craft weapons and tools (enhancing adaptability and enabling us to hunt way outside our weight class)
- Communicate/coordinate effectively in small groups
- Do sensor processing and the above with a low energy budget
Our brains are *not* selected to be minimum-complexity intelligences, and a lot of what our brain does is completely useless for AGI building (motor control, sensor processing, ...).
Furthermore, cost/complexity from an evolutionary PoV is a totally different beast from what complexity means to us.
Just consider flight as an example: a fruit fly is an insanely simple and straightforward beast, but to us, a biochemically fuelled, beating-wing design is still infeasibly complicated. If our approach to flight had been to ape how nature does it in detail, we likely still would not have planes.
I do agree that today's LLMs still have clear architectural flaws that we need to overcome (online learning being a very glaring one). But, to pick up the flight analogy, we might well have the main wing structure already down, and we won't necessarily have to make those wings beat to get into the air...
Just because there are some parts of our brains that are not needed for an AGI...
Doesn't mean that there aren't some parts of our brains that *are* needed for an AGI, and are not present in the current crop of LLMs.
What do our brains do that isn't token prediction?
They receive information about photons and air vibrations and control muscles, okay. If a human brain were hooked up the way ChatGPT is, only to text input and output, would that make it not intelligent?
> What do our brains do that isn't token prediction?
I am planning a master's and a PhD on that question, so give me a few years to answer.