
Comment by tkgally

2 years ago

It’s important to note that when Chomsky writes about “a fundamentally flawed conception of language” or “the science of linguistics,” he is talking about a particular understanding of what language is and a particular subset of linguistics. While some linguists agree with his focus on the mind, grammar, and linguistic competence, others dismiss it as too narrow. Many linguists are more interested in how language is actually used and in its complex roles in human society.

I personally am concerned not so much about whether large language models actually are intelligent as about whether people who interact with them perceive them as being intelligent. The latter, I think, is what will matter most in the months and years ahead.

In a sense, science is a zero-sum game. The theories and frameworks you spend a lifetime working on are ultimately either right or wrong.

What I read from Chomsky seems like a bit of a desperate attempt to ask people not to look over at the thing, because the thing offers a new way of looking at how and where language comes from, and, even more amazingly, it’s testable, empirical, and reproducible in a way that Chomsky’s theories of language can never be.

Dude’s whole career is getting relegated to the dustbin.

  • The same thing happened when CNNs started beating "traditional" computer vision algorithms. There was a lot of pushback from computer vision scientists because it basically obsoleted a good chunk of their field.

The concern seems to be precisely that we will unjustifiably perceive them to be intelligent.

  • This is the problem with the word “intelligence”: it implies a gradient, but one that humans don't seem to apply correctly.

    If you take your dog and watch its behavior, you would say it's an intelligent creature. Yet you wouldn't have it file your taxes (unless you were Sam Bankman-Fried, of course; the dog probably would have done better). GPT would likely give you far better information here.

    Yet when we see computer AI, people automatically assume it has human or superhuman intelligence, which LLMs do not, at least at this point. Conversely, they do not have 'no intelligence'. We have created some new kind of intelligence, outside of animal and human intelligence, that is not aligned with our expectations.