Comment by observationist
2 days ago
It's fascinating to look at each technical component of cognition in human brains and contrast it against LLMs. In humans, we have all sorts of parallel, asynchronous processes running, with prediction of columnar activations seemingly the fundamental local function: tens of thousands of minicolumns and regions in the brain, corresponding to millions of networked neurons, use a "predict which column fires next" objective to increment or decrement the relative contribution of any functional unit.
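Purely as a toy illustration of that increment/decrement idea (made-up names and dynamics, nothing like real cortex), it looks roughly like this: each "column" keeps a running estimate of which of its neighbours tends to fire next, guesses the most likely one each step, and has its relative contribution nudged up on a hit and down on a miss.

```python
# Toy sketch, illustrative only: columns guess which neighbour fires next,
# and their contribution weights are incremented/decremented by accuracy.
import numpy as np

rng = np.random.default_rng(0)
n = 12
neighbours = [rng.choice(n, size=4, replace=False) for _ in range(n)]  # who each column can "see"
pred = [np.ones(4) / 4 for _ in range(n)]   # per-column estimate of which neighbour fires next
contribution = np.ones(n)                   # relative weight of each column
lr = 0.05

def step(fired_next: int) -> None:
    for i in range(n):
        guess = neighbours[i][pred[i].argmax()]
        # Increment contribution on a correct guess, decrement on a miss.
        contribution[i] = max(0.0, contribution[i] + (lr if guess == fired_next else -lr))
        # Update the column's estimate toward whichever neighbour actually fired.
        hits = np.where(neighbours[i] == fired_next)[0]
        if hits.size:
            pred[i][hits[0]] += lr
            pred[i] /= pred[i].sum()

# Drive the pool with a repeating firing sequence.
sequence = [0, 3, 7, 3, 0, 5]
for t in range(500):
    step(sequence[(t + 1) % len(sequence)])

print(contribution.round(2))  # columns wired to the recurring pattern end up weighted higher
```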
With LLMs you run into similarities, but they're much more monolithic networks, so the aggregate activations span billions of neurons on each pass. The sub-networks you can pick out each pass by thresholding activations resemble the diverse set of semantic clusters in biological brains - there's a convergent mechanism in how LLMs structure their model of the world and how brains do.
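As a minimal sketch of what "select a sub-network by thresholding activations" can mean (toy MLP layer, made-up threshold and inputs, not a real LLM - with a real model you'd look at activations inside a transformer block): nearby inputs light up heavily overlapping sets of units, unrelated inputs much less so.

```python
# Hypothetical, illustrative only: which hidden units clear a threshold per pass.
import numpy as np

rng = np.random.default_rng(1)
d_in, d_hidden = 64, 1024
W = rng.standard_normal((d_in, d_hidden)) / np.sqrt(d_in)

def active_units(x: np.ndarray, threshold: float = 1.0) -> set[int]:
    """Indices of hidden units whose ReLU activation exceeds the threshold."""
    h = np.maximum(0.0, x @ W)
    return set(np.flatnonzero(h > threshold))

x_a = rng.standard_normal(d_in)
x_b = x_a + 0.1 * rng.standard_normal(d_in)   # a nearby, "semantically similar" input
x_c = rng.standard_normal(d_in)               # an unrelated input

a, b, c = active_units(x_a), active_units(x_b), active_units(x_c)
jaccard = lambda s, t: len(s & t) / len(s | t)
print(f"overlap(similar)   = {jaccard(a, b):.2f}")
print(f"overlap(unrelated) = {jaccard(a, c):.2f}")
```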
This shouldn't be surprising - transformer networks are designed to learn complex representations of the underlying causes that bring about things like human-generated text, audio, and video.
If you modeled a star with a large transformer model, you would end up with semantic structures and representations that correlate to complex dynamic systems within the star. If you modeled slug cellular growth, you'd get structure and semantics corresponding to slug DNA. Transformers aren't the end-all solution - the paradigm is missing a level of abstraction that fully generalizes across all domains - but they're a really good way to elicit complex functions from sophisticated systems. By contrasting the ways those models fail with the ways natural systems operate, we'll find better, more general methods and architectures, until we cross the threshold of fully general algorithms.
Biological brains are a computational substrate - we exist as brains in bone vats, connected to a wonderfully complex and sophisticated sensor suite and mobility platform that feeds electrically activated sensory streams into our brains, where they get processed into a synthetic construct we experience as reality.
Part of the basic functioning of our brains is each individual column predicting which of the columns it's connected to will fire next. The better a column is at predicting, the better the brain gets at understanding the world, and biological brains are recursively granular across arbitrary degrees of abstraction.
LLMs aren't inherently incapable of fully emulating human cognition, but the differences they exhibit are expensive. It's going to be far more efficient to modify the architecture, and this may diverge enough that whatever the solution ends up being, it won't reasonably be called an LLM. Or it might not diverge that far, and some clever tweak will push LLMs over the threshold.