Comment by calf

2 years ago

Chomsky and the authors are making an information/computational complexity argument there, which I tend to agree with. But naysayers have read the exact same passage and replied: a) AI doesn't have to resemble human intelligence at all, b) Nah, rationality is an illusion and we are just pattern matchers, or c) Actually we do get through terabytes of data, just not linguistic data, merely by existing and interacting in the world.

I don't think any of those are good rebuttals. But I wish the authors had expanded their position so that these three quibbles were debunked in readers' minds, leaving no ambiguity or superficial loopholes in the authors' claims there.

They can respond with whatever they want; that doesn't make their response justified or thoughtful (b, c). And a) is basically a non-sequitur: Chomsky isn't saying AI has to resemble human intelligence, he's responding to the people who say ChatGPT reflects a human-like intelligence.

  • Then in other words, I wish Chomsky's op-ed to the public had nipped off the low-hanging fruit of unjustified, thoughtless non-sequiturs. There's value in that.

    Not many people in the world understand Chomsky's argument, and I think that's because a computational background is necessary. To the extent that an op-ed is a great opportunity to educate people, he could have written the article, especially the first part, so that more regular people would find the argument accessible. And yes, that means anticipating typical objections, even if you or I might find them superficial or wrong.

    • I don't think this is intended to convince the people who are already confidently making the incorrect arguments you cite. I think it's intended to reassure the less-terminally-online people who are hearing those arguments, from a perspective of some understanding and authority, that they're not true.

    • It did, but laymen will respond as laymen do, regardless. They already don't understand it, so they just repeat their tautological end run, "well, who's to say we don't do the same thing... so therefore who's to say it isn't doing the same thing as us?", even when one of the first paragraphs says "Humans do not behave this way at all".

      > Not many people in the world understand Chomsky's argument.

      I don't think the argument is inaccessible. Some people just don't want to hear things, that's not really unusual in today's intellectual economy.

Huh... I think the reason they don't go in depth and fail to "debunk" those types of rebuttals, like anybody else for that matter (including you), is that they can't actually do it. Feel free to prove me wrong, though.

I don't believe we're stochastic parrots, but those articles, and even this comment section, which contains little more than dogmatic assertions, almost make me doubt it.

"But I wish the authors expanded their position such that three quibbles were further debunked in the minds of those readers so that there's no such ambiguity or superficial loopholes in the authors' claims there."

Chomsky is writing an opinion article in the NYT, not a paper for an academic journal. I don't think there's room in this format for the kind of proof that would be needed. Further, Chomsky spent his whole career expounding his theories of linguistics and philosophy of mind; the interested reader can look elsewhere.

He's writing an opinion piece which invites the reader to explore those topics, which could not fit into this style of article.

  • It undercuts the whole piece. The pronouncements feel question-begging. Much of the article suggests someone who hasn't actually bothered to spend much time asking ChatGPT the questions he is so confident it can't answer well. He also doesn't seem aware that ChatGPT is a deliberately tamed model, trying desperately to shy away from saying anything too controversial, and that this was a choice made by OpenAI, not something that highlights limitations of language models in general (or if it does, it's an entirely different question, a political one rather than a technical one).

    I accept that it's possible he has some deep reasoning behind the surface-level arguments he's making that would make them seem less arbitrary, but he hasn't even hinted at it in the article.

I think c) is relevant in that it invites comparing different things.

A different way would be to consider the child learning a language as a fine-tuning operation over the pretrained human brain.

By comparison, fine-tuning from GPT-3 to ChatGPT is a much smaller gulf in data and computational efficiency.

Great way to break down the responses. I always dislike seeing b) in the wild.

I feel like I often see it in more "doomer" communities or users.