Comment by famouswaffles
6 days ago
Last I checked, humans didn't pop into existence doing that. It happened after billions of years of brute-force, trial-and-error evolution. So well done for falling into the exact trap the OP cautions against. Intelligence from scratch requires a mind-boggling amount of resources, and humans were no different.
To be fair, it is still pretty remarkable what the human brain does, especially in the early years - there is no text embedded in the brain, just a crazily efficient mechanism for learning hierarchical systems. As far as I know, no AI can do anything similar: it generally relies on giga-scaling, or on finetuning for tasks similar to those it already knows. Regardless of how this arose, or whether it's relevant to AGI, it is still unique in its own way.
Human babies "train" their brain on literally gigabytes of multi-modal data dumped on them through all their sensory organs every second.
In a very real sense, our magic superpower is that we "giga-scale" with such low resource consumption, especially considering how large (in terms of parameters) the brain is compared to even the most advanced models we have running on those thousands of GPUs today. But that's where all those millions of years of evolution pay off. Don't diss the wetware!
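For a sense of the scale gap being described, here's a rough back-of-envelope sketch. All figures are commonly cited order-of-magnitude estimates, not measurements from this thread, and the power numbers are illustrative assumptions:

```python
# Back-of-envelope: brain vs. large LLM, order-of-magnitude only.
brain_synapses = 1e14        # ~100 trillion synapses (commonly cited estimate)
brain_power_watts = 20       # typical estimate for the brain's power draw

llm_params = 1e12            # a very large model, ~1 trillion parameters (assumed)
gpu_cluster_watts = 10e6     # thousands of GPUs, order of megawatts (assumed)

print(f"synapse-to-parameter ratio: {brain_synapses / llm_params:.0f}x")
print(f"power ratio (cluster/brain): {gpu_cluster_watts / brain_power_watts:.0f}x")
```

Even treating a synapse as loosely comparable to a parameter (a big simplification), the brain runs a far larger "model" on a tiny fraction of the power.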
And then an 18-to-20-something-year training run is required for each individual instance.
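The 18-year "training run" and the gigabytes-per-second claims above can be put side by side with a quick sketch. The sensory bandwidth figure is a hypothetical assumption (estimates vary wildly), and the corpus size is a rough stand-in for a modern pretraining run:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

# Assumed effective multi-modal sensory bandwidth: ~10 MB/s.
# Purely illustrative; real estimates range over several orders of magnitude.
sensory_bytes_per_sec = 10e6
years = 18
human_bytes = sensory_bytes_per_sec * years * SECONDS_PER_YEAR

# Assumed LLM pretraining corpus: ~15 trillion tokens at ~4 bytes/token.
llm_bytes = 15e12 * 4

print(f"human sensory intake over {years} years: {human_bytes / 1e15:.1f} PB")
print(f"assumed LLM pretraining corpus: {llm_bytes / 1e12:.0f} TB")
```

Under these assumptions the raw byte count of a childhood dwarfs a text corpus, though almost none of it is curated text - which is part of what makes the comparison contentious in the first place.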
I know, right? Such a waste. Plus it's so random how they'll turn out!
Any suggestions on how to reduce that waste?
Do you think evolutionary pressures are the best explanation for why humans were able to pose the Poincaré conjecture and solve it? While our mental architecture evolved over a very long time, we still learn from minuscule amounts of data compared to LLMs.
Yeah. What else would it be? A brain capable of doing that was clearly the result of evolutionary pressures.
But there was no evolutionary pressure for the Poincaré conjecture; we were never optimized for that in particular, unlike these LLMs.
How is that relevant? The fair baseline for the human brain is the point of birth (or some time before that). We compare that with an LLM doing inference. The LLM's training is irrelevant in the same way the human brain's evolution is.