Comment by freejazz

2 years ago

"The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations."

Chomsky and the authors are making an information/computational-complexity argument there, which I tend to agree with. But naysayers have read the exact same passage and replied: a) AI doesn't have to resemble human intelligence at all; b) nah, rationality is an illusion and we're just pattern matchers; or c) actually, we do get through terabytes of data, just not linguistic data; we take it in merely by existing and interacting in the world.

I don't think any of those are good rebuttals. But I wish the authors had expanded their position so that those three quibbles were debunked in the minds of such readers, leaving no ambiguity or superficial loopholes in the authors' claims.

  • They can respond with whatever they want; it doesn't make their response justified or thoughtful (b, c). And (a) is basically a non sequitur: Chomsky isn't saying it has to, he's responding to the people who say ChatGPT reflects a human-like intelligence.

    • Then, in other words, I wish Chomsky's op-ed to the public had nipped off the low-hanging fruit of unjustified, thoughtless non sequiturs. There's value in that.

      Not many people in the world understand Chomsky's argument, and I think that's because a computational background is necessary. To the extent that an op-ed is a great opportunity to educate people, he could have written the article, especially the first part, so that more regular people would find the argument accessible. And yes, that means anticipating typical objections, even if you or I might find them superficial or wrong.


  • Huh... I think the reason they don't go in-depth and fail to "debunk" those types of rebuttals, as does anybody else for that matter (including you), is that they can't actually do it. Feel free to prove me wrong, though.

    I don't believe we're stochastic parrots, but those articles, and even this comment section, which contains little more than dogmatic assertions, almost make me doubt it.

  • "But I wish the authors expanded their position such that three quibbles were further debunked in the minds of those readers so that there's no such ambiguity or superficial loopholes in the authors' claims there."

    Chomsky is writing an opinion article in the NYT, not a paper for an academic journal. I don't think there's room in this format for the kind of proof that would be needed. Further, Chomsky spent his whole career expounding his theories of linguistics and philosophy of mind; the interested reader can look elsewhere.

    He's writing an opinion piece that invites the reader to explore those topics, which could not fit into this style of article.

    • It undercuts the whole piece. The pronouncements feel question-begging. Much of the article suggests someone who hasn't actually bothered to spend much time asking ChatGPT the questions he is so confident it can't answer well. He also doesn't seem aware that ChatGPT is a deliberately tamed model, one that tries desperately to shy away from saying anything too controversial. That was a choice made by OpenAI, not something that highlights limitations of language models in general (or if it does, it's a political question, not a technical one).

      I accept that it's possible he has some deep reasoning behind the surface-level arguments he's making that would make them seem less arbitrary, but he hasn't even hinted at it in the article.

  • I think (c) is relevant in that he could have compared different things.

    A different framing would be to consider a child learning a language as a fine-tuning operation over the pretrained human brain.

    By comparison, the fine-tuning from GPT-3 to ChatGPT is a much smaller gulf in data and computational efficiency.

  • Great way to break down the responses. I always dislike seeing (b) in the wild.

    I feel like I often see it in more "doomer" communities or users.

I feel like saying the human mind doesn't operate on huge amounts of data is somewhat misleading: every waking moment of our lives, we consume quite a large stream of data. If you put a human in a box and gave them only the training data ChatGPT gets, I don't think you'd get a functional human out of it.

Actually, the structure of ChatGPT was formed by hammering it with phenomenal amounts of information. But when you give it a prompt and ask it to do a task, it's working off a surprisingly small amount of information.

The training of ChatGPT is more accurately compared with the evolution of the brain, while a human answering a question is much more like the information-efficient prompt/response interaction.