Comment by specialist
15 days ago
This is where I'm stuck.
For other commenters: as I understand it, Chomsky is talking about well-defined grammars, languages, and production systems. Think Hofstadter's Gödel, Escher, Bach. Not a "folk" understanding of language.
I have no understanding or intuition, not even a fingernail grasp, of how an LLM generates "sentences" that seem as though they were created with a generative grammar.
Is anyone comparing and contrasting these two different techniques? Being a noob, I wouldn't even know where to start looking.
I've gleaned that some people are using LLMs/GPT to emit abstract syntax trees (vs. a mere stream of tokens), to serve as input for formal grammars (e.g. programming source code). That sounds awesome, and like something I might someday sorta understand.
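From what I can tell, the trick works roughly like this (a toy sketch of my own, with an invented GRAMMAR table and a fake_llm_scores stand-in instead of a real model, not any actual library's API): at each step the model is only allowed to choose among the continuations the grammar permits, so the output is well-formed by construction.

```python
import random

# Toy grammar for tiny arithmetic expressions (invented for illustration).
GRAMMAR = {
    "EXPR": [["TERM"], ["TERM", "+", "EXPR"]],
    "TERM": [["NUM"], ["(", "EXPR", ")"]],
    "NUM":  [["1"], ["2"], ["3"]],
}

def fake_llm_scores(options):
    """Stand-in for a real model: just assigns arbitrary scores."""
    return [random.random() for _ in options]

def constrained_generate(start="EXPR", max_steps=200):
    """Leftmost derivation in which the 'model' may only choose among
    expansions the grammar allows, so the output parses by construction.
    (max_steps merely caps runaway recursion in this toy.)"""
    stack, output = [start], []
    for _ in range(max_steps):
        if not stack:
            break
        symbol = stack.pop(0)
        if symbol not in GRAMMAR:          # terminal: emit it
            output.append(symbol)
            continue
        options = GRAMMAR[symbol]          # grammar-legal expansions only
        scores = fake_llm_scores(options)
        chosen = options[scores.index(max(scores))]
        stack = chosen + stack             # expand the leftmost nonterminal
    return " ".join(output)

print(constrained_generate())  # e.g. "( 1 + 2 ) + 3"
```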
I've also gleaned that, given sufficient computing power, training data for future LLMs will have tokenized words (vs. just character sequences), which would bring the two strategies closer...? I have no idea.
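By "tokenized words" I mean something like the difference below (purely illustrative; I gather real GPT tokenizers actually sit in between, splitting text into subword pieces):

```python
text = "The parser accepted the tokens."

# Character-level "tokens": the model sees individual characters.
char_tokens = list(text)
# ['T', 'h', 'e', ' ', 'p', ...]

# Word-level tokens: the model sees whole words.
word_tokens = text.split()
# ['The', 'parser', 'accepted', 'the', 'tokens.']

# Subword (BPE-style) tokenizers land somewhere in between, e.g. roughly
# ['The', ' parser', ' accepted', ' the', ' token', 's', '.']
# (illustrative split, not the exact output of any particular tokenizer)

print(len(char_tokens), "characters vs", len(word_tokens), "words")
```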
(Am noob, so forgive my poor use of terminology. And poor understanding of the tech, too.)
I don't really understand your question, but if a deep neural network predicts the weather, we don't have any problem accepting that the deep neural network is not an explanatory model of the weather (the weather is not a neural net). The same is true of predicting language tokens.
Apologies, I don't know enough to articulate my question, which is probably nonsensical anyway.
LLMs (like GPT) and grammars (like Backus–Naur Form) are two different kinds of generative (production) systems, right?
You've been (heroically) explaining Chomsky's criticism of LLMs to other noobs: grammars (theoretically) explain how humans do language, which is very different from how ChatGPT (a stochastic parrot) does language. Right?
Since GPT mimics human language so convincingly, I've been wondering if there's any overlap of these two generative systems.
Especially once the (tokenized) training data for GPTs is word-based instead of just snippets of characters.
Because I notice grammars everywhere and GPT is still magic to me. Maybe I'd benefit if I could understand GPTs in terms of grammars.
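To make my question concrete, here's how I currently picture the two kinds of generation (toy code of my own; the "LLM" side is shrunk down to a bigram counter, which is of course nothing like a real transformer, and the RULES table and corpus are invented):

```python
import random
from collections import defaultdict

# 1) Grammar-style generation: rewrite symbols until only words remain.
RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "cat"], ["the", "dog"]],
    "VP": [["sleeps"], ["chases", "NP"]],
}

def generate_from_grammar(symbol="S"):
    if symbol not in RULES:
        return [symbol]                       # terminal word
    expansion = random.choice(RULES[symbol])  # pick a production
    return [w for part in expansion for w in generate_from_grammar(part)]

# 2) "LLM"-style generation, shrunk to a bigram model: no rules at all,
#    just "given the previous token, what tends to come next?"
corpus = "the cat sleeps . the dog chases the cat . the dog sleeps .".split()
next_counts = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev].append(nxt)

def generate_from_stats(start="the", length=6):
    out = [start]
    for _ in range(length):
        out.append(random.choice(next_counts[out[-1]]))  # sample next token
    return out

print(" ".join(generate_from_grammar()))  # always licensed by the rules
print(" ".join(generate_from_stats()))    # often fine, but only statistically
```

The grammar side can only ever produce sentences its rules license; the statistical side just produces whatever tends to follow whatever, which is why I find it so surprising that scaled-up versions of the latter sound grammatical.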
> Since GPT mimics human language so convincingly, I've been wondering if there's any overlap of these two generative systems.
It's not really relevant whether there is overlap; I'm sure you can list a bunch of ways they are similar. What's important is 1. whether they are different in fundamental ways and 2. whether LLMs explain anything about the human language faculty.
For 1, the most important difference is that human languages appear to obey certain constraints (roughly, that language has parse-tree/hierarchical structure), and (from Moro's experiments) humans seem unable to learn arguably simpler structures that are not hierarchical. LLMs, on the other hand, can be trained on those simpler structures just fine. That shows the acquisition process is not the same, which is not surprising, since neural networks work on arbitrary statistical data and don't have strong inductive biases.
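To make that concrete, here is a toy illustration of the kind of rule contrast involved (my own construction with an invented example sentence, not Moro's actual experimental materials): a structure-dependent rule versus a rule stated purely in terms of linear position.

```python
# Two toy "negation" rules over the same sentence, given as token lists.
sentence = ["the", "dog", "that", "barks", "chases", "the", "cat"]

def negate_hierarchical(tokens, main_verb_index):
    """'Possible'-language style rule: negation attaches to a structural
    position -- here, right before the main-clause verb, wherever that
    verb happens to sit in the string."""
    return tokens[:main_verb_index] + ["not"] + tokens[main_verb_index:]

def negate_linear(tokens):
    """'Impossible'-language style rule: negation goes after the 3rd word,
    counting linearly, with no regard to structure."""
    return tokens[:3] + ["not"] + tokens[3:]

# Structurally, the main verb is "chases" (index 4), not "barks".
print(" ".join(negate_hierarchical(sentence, main_verb_index=4)))
# -> the dog that barks not chases the cat
print(" ".join(negate_linear(sentence)))
# -> the dog that not barks chases the cat
```

The linear rule is, if anything, simpler to state, yet that is the kind humans reportedly fail to internalize, while a statistical sequence model will learn either from data.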
For 2, even if it turned out that LLMs couldn't learn those "impossible" languages either, it wouldn't explain anything. For example, you could hard-code the training to fail whenever it detects an "impossible language"; then what? You've managed to create an accurate predictor, but you don't have any understanding of how or why it works. This is easier to see with non-cognitive systems like the weather or gravity: if you create a deep neural network that accurately predicts gravity, that is not the same as coming up with the general theory of relativity (which could in fact be a worse predictor, for example at quantum scales). Everyone argues the ridiculous point that since LLMs are good predictors, gaining understanding of the human language faculty is useless, a stance that wouldn't be accepted in the study of gravity or in any other field.
> is not an explanatory model of the weather (the weather is not a neural net)
I don't follow. Aren't those entirely separate things? The most accurate models of anything necessarily account for the underlying mechanisms. Perhaps I don't understand what you mean by "explanatory"?
Specifically, in the case of deep neural networks, we would generally suppose that they have learned to model the underlying reality. In effect, they are learning the rules of a sufficiently accurate simulation.
> The most accurate models of anything necessarily account for the underlying mechanisms
But they don't necessarily convey understanding to humans. Prediction is not explanation.
There is a difference between Einstein's General Theory of Relativity and a deep neural network that predicts gravity. The latter is virtually useless for understanding gravity (and that's even if it makes better predictions).
> Specifically in the case of deep neural networks, we would generally suppose that it had learned to model the underlying reality. In effect it is learning the rules of a sufficiently accurate simulation.
No, they just fit surface statistics, not the underlying reality. Many physics phenomena were predicted by theories before they were observed; those phenomena would not have been in the training data even though they were part of the underlying reality.