
Comment by HarHarVeryFunny

1 year ago

We really don't need time to tell.

Just making a transformer bigger and bigger, and feeding it more and more data, will not change it from being a language model into something else, any more than scaling up an expert system such as Cyc will transform it into something other than an expert system. "Scale it up and it'll become sentient" is one of the recurring myths of AI. It's a bit odd that people are falling for it again.

As an aside, it seems reasonable to consider an LLM a type of expert system: one with a broad area of expertise (like Cyc) that, unlike Cyc, also knows how to infer rules from language and generate language from rules.

If you want to create a brain-like AGI, then you need an entire cognitive architecture, not just the one piece of it we currently have with LLMs. Compared to a brain, an LLM is maybe just like the cortex (without all the other brain parts like the cerebellum, hippocampus, hypothalamus and interconnectivity such as the cortico-thalamic loop). It's as if we've cut the cortex out of a dead person's brain, put it in a mason jar to keep it alive, and hooked its inputs and outputs up to a computer. Feed words in, get words out. Cool, but it's not a whole brain, it's a cortex in a mason jar.

Well said. This has always been my fundamental problem with the claims about large language models' current or eventual capabilities: most of the things people claim they can or will be able to do require a neural architecture completely different from the one they have, and no amount of scaling up the number of neurons or the amount of training data will change that fundamental architecture. At a very basic level, the capabilities of any neural network are limited by its architecture. We would need to add some kind of advanced recursive structure to large language models, as well as some kind of short-term and working memory, and probably many other structures, to make them capable of the metacognition necessary to do a lot of the things people want them to do. Metacognition is the ability to analyze what one is currently thinking and think new things based on that analysis: to look at what one is thinking and error-correct it, consciously adjust or iterate on it, or consciously ensure that one is adhering to certain principles of reasoning or knowledge. Without it, we can't expect large language models to actually understand concepts and principles and how they apply, or to reliably perform reasoning or even obey instructions.

>will not change it from being a language model into something else,

This is a pretty empty claim when we don't know what the limits of language modelling are. Of course it will never not be a language model. But the question is: what are the limits of capability of this class of computing device?

  • Some limits are pretty obvious, even if easy to fix.

    For example, a pure LLM is just a single pass through a stack of transformer layers, so there is no variable depth/duration (incl. iteration/looping) of thought, and no corresponding or longer-duration working memory other than the embeddings as they pass through. This is going to severely limit their ability to plan and reason, since they only get a fixed N layers of reasoning regardless of what they are asked (a minimal sketch of this follows after the list).

    Lack of working memory (it really needs to last for the duration of the context, or longer, not just the depth of a single pass) has many predictable effects.

    No doubt we will see pure-transformer architectures extended to add more capabilities, so I guess the real question is how far these extensions (+scaling) will get us. I think one thing we can be sure of, though, is that it won't get us to AGI (defining AGI = human-level problem-solving capability) unless we add ALL of the missing pieces that the brain has, not just a couple of the easy ones.
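To make the fixed-depth point concrete, here is a minimal NumPy sketch. It is purely illustrative, not any real model's code: the sizes are made up and each "layer" is a simple matrix multiply standing in for a real attention + MLP block. The point it shows is that a pure decoder-only transformer spends exactly the same fixed number of layers of computation per token whether the prompt is trivial or hard, and that the only working memory is the activations flowing through the stack.

    import numpy as np

    # Illustrative toy "transformer": one fixed pass through N_LAYERS per token.
    # There is no data-dependent loop, so compute depth is constant regardless
    # of how hard the input is, and nothing persists after the pass.
    rng = np.random.default_rng(0)
    N_LAYERS, D_MODEL = 12, 64  # made-up sizes
    layers = [rng.standard_normal((D_MODEL, D_MODEL)) / np.sqrt(D_MODEL)
              for _ in range(N_LAYERS)]

    def forward(token_embeddings):
        """One pass: fixed depth; the embeddings flowing through are the only state."""
        x = token_embeddings
        for W in layers:                 # always exactly N_LAYERS steps
            x = np.tanh(x @ W) + x       # stand-in for attention + MLP block
        return x                         # next-token logits would be read off x

    easy_prompt = rng.standard_normal((3, D_MODEL))    # 3 tokens
    hard_prompt = rng.standard_normal((300, D_MODEL))  # 300 tokens
    forward(easy_prompt)   # both get the same N_LAYERS of computation per token;
    forward(hard_prompt)   # the only way to "think longer" is to emit more tokens

The only lever for more computation is generating more tokens (each generated token is another fixed-depth pass), which is why chain-of-thought style prompting helps but doesn't change the per-token compute budget.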

Thanks for that final paragraph! I'm going to quote you from now on, when trying to explain to someone (for the thousandth time) why ChatGPT isn't about to become super-intelligent and take over the world.