
Comment by wilg

2 years ago

Bizarre article. Just a rant from someone incredibly out of touch who is missing the forest for the trees.

"The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response"

We don't know that! It very well could be. Think of all the data that has entered all your senses in your entire lifetime. More than goes into ChatGPT, I'll tell you that. Plus, you synthesize information by being corporeal so you have a tight feedback loop. LLMs could well be a foundational part of AI technology as well as an accurate analog for some of the brain's behavior.

A small part of the point, but bringing up this "hardcoded" response of it not offering political opinions as any kind of evidence of its theoretical capability is beyond silly.

This is arguably a bizarre and out-of-touch comment too, which merely adds fuel to a fire blazing in the comments section of HN, a venue not particularly reputable for its opinions on anything except software (and even then frequently rather questionable).

^ I hasten to add: some snark intended for effect

It’s a NYT Opinion piece, which means it doesn’t come with citations. Let’s not ignore the medium and its conventions here.

It is a bummer that such a weighty argument was conveyed in a citation-free medium, given the seriousness of the subject Chomsky is engaging with.

But that is an entirely separate matter.

And it would probably be far more productive to step back, recognize the limitations of the medium, and instead ask “what are the citations here?” (or seek them out for oneself, or ask for help finding them), and then evaluate those citations on their specific merits. That would beat choosing the least charitable interpretation and effectively resorting to an ad hominem (“this man is out of touch; I’m done here”), or merely saying “we don’t know that!” without any apparent reference to the thoughtful or careful literature on the subject at hand.

Unless you too are an established academic with decades of research in a field profoundly cognate with neuroscience?

  • ??? I wasn't talking about citations at all?

    • You are questioning Chomsky’s premise, which is almost certainly supported by implicit citations (ones that do not appear because of the medium it is presented in); your arguments, though not entirely unreasonable, presumably are not.

>> Think of all the data that has entered all your senses in your entire lifetime. More than goes into ChatGPT, I'll tell you that.

The question is how much of that was text data, or language at all. The answer is: not that much, really. Chomsky's famous point about "the poverty of the stimulus" was based on research showing that human children learn to speak their native languages from very few examples spoken by the adults around them. They certainly don't learn from many petabytes of text, as in the entire web.
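
A rough back-of-envelope comparison makes that gap concrete (both figures are assumptions: the words-per-day number is a commonly cited round estimate of child-directed speech, and the GPT-3 token count is approximate):

    # Back-of-envelope: language a child hears by age 5 vs. LLM training data.
    # Both figures are rough assumptions, good to an order of magnitude at best.
    words_per_day = 10_000                     # assumed child-directed speech
    child_words = words_per_day * 365 * 5      # ~18 million words by age 5
    gpt3_tokens = 300_000_000_000              # approx. tokens GPT-3 was trained on
    print(f"child by age 5: ~{child_words:,} words")
    print(f"GPT-3 training: ~{gpt3_tokens:,} tokens")
    print(f"ratio: ~{gpt3_tokens // child_words:,}x")  # roughly 16,000x

On those assumptions the model sees about four orders of magnitude more language than the child does, which is exactly the asymmetry the poverty-of-the-stimulus argument turns on.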

If you think about it, had humans relied on millions of examples to learn to speak a language, we would never have learned to speak in the first place. Like, back whenever we started speaking as a species. There was certainly nothing like human language back then, so there weren't any examples to learn from. Try that for "zero-shot learning".

Then again, there's the issue that there are many, many animals that receive the same, or even richer, "data" from their senses throughout their lives, and still never learn to speak a single word.

Humans don't just learn from examples, and the way we learn is nothing like the way in which statistical machine learning algorithms learn from examples.

  • Thinking about it as "text data" is both your and Chomsky's problem -- the petabytes of data aren't preprocessed into text. They're streams of sensory input. It's not zero-shot if it's years of data of observing human behavior through all your senses (a rough estimate of that scale follows below).

    Other animals receiving data and not speaking isn't a good line of argument, I think. They could have very different hardware or software in their brains, and completely different life experiences, and therefore receive very different data. Notably, animals and humans do share a good deal of potentially learned (or evolved) behavior, such as pathfinding, object detection, hearing, and high-level behaviors like seeking food.
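
    As a rough sketch of the scale being invoked (every figure is an assumption; published estimates of optic-nerve throughput hover around ~10 Mbit/s, and this counts vision alone):

        # Back-of-envelope sensory-data estimate; every figure is an assumption.
        bits_per_second = 10_000_000                  # assumed optic-nerve bandwidth
        waking_seconds_per_year = 16 * 3600 * 365     # ~16 waking hours per day
        bytes_by_age_5 = bits_per_second * waking_seconds_per_year * 5 / 8
        bytes_lifetime = bits_per_second * waking_seconds_per_year * 80 / 8
        print(f"~{bytes_by_age_5 / 1e12:.0f} TB by age 5 (vision only)")   # ~131 TB
        print(f"~{bytes_lifetime / 1e15:.1f} PB over 80 years")            # ~2.1 PB

    On those assumptions, vision alone reaches the petabyte range over a lifetime, which is roughly the scale this comment is gesturing at.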

    • >> Thinking about it as "text data" is both your and Chomsky's problem -- the petabytes of data aren't preprocessed into text. They're streams of sensory input. It's not zero-shot if it's years of data of observing human behavior through all your senses.

      I'm a little unsure what you mean. I think you mean that humans learn language not just from examples of language, but from examples of all kinds of concepts in our sensory input?

      Well, that may or may not be the case for humans, but it's certainly not the case for machine learning systems. Machine learning systems must be trained with examples of a particular concept, in order to learn that concept and not another. For instance, language models must be trained with examples of language, otherwise they can't learn language.

      There are multi-modal systems trained on multiple "modalities", but they still cannot learn concepts for which they are given no specific examples. For instance, if a system is trained on examples of images, text and time series, it will learn a model of images, text and time series, but it won't be able to recognise, say, speech.
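
      As a toy sketch of that point (hypothetical texts and labels, using scikit-learn, assuming it is installed): a classifier's output space is fixed by the examples it was fitted on, so it can never answer with a concept it was not shown.

          # Toy sketch: a model can only answer in terms of concepts it was
          # trained on. The texts and labels here are hypothetical examples.
          from sklearn.feature_extraction.text import TfidfVectorizer
          from sklearn.linear_model import LogisticRegression
          from sklearn.pipeline import make_pipeline

          texts = ["the cat sat", "le chat noir", "a dog barked", "le chien dort"]
          labels = ["english", "french", "english", "french"]

          model = make_pipeline(TfidfVectorizer(), LogisticRegression())
          model.fit(texts, labels)

          print(model.classes_)                     # ['english' 'french'], nothing else
          print(model.predict(["ein Hund bellt"]))  # German in, but the answer can
                                                    # only ever be english or french

      Whatever input it gets, "german", "speech", or any concept absent from training simply is not in its hypothesis space.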

      As to whether humans learn that way: who says we do? Is that just a conjecture proposed to support your other points, or something you really think is the case, based on some observation or evidence?


    • Not OP, but I'm not convinced by the talking point that a baby takes in an equivalent or greater number of petabytes of data simply by being immersed in a sensory world. I can't quite put my finger on it, but my feeling is that that line of reasoning contains a kind of category error. Maybe I'll wake up tomorrow with a clearer idea of my objection, but I've seen this talking point echoed by many others as well, and it interests me.


The article was full of cherry-picked examples and straw-man argumentation. Here are a few ways I have used ChatGPT (via the MS Edge browser add-on) recently:

- Generate some Dockerfile code snippets (which had errors, but which I still found useful for pointing me in the right direction).

- Help me with a cooking recipe where it advised that I should ensure the fish is dry before I cook it in olive oil (otherwise the oil will splash).

- Give me some ideas for how to assist a child with a homework assignment.

- Travel ideas for a region I know well; even so, I had not heard of the places it suggested.

- Movie recommendations

Yes, there are a lot of caveats when using ChatGPT, but the technology remains remarkable and will presumably improve quickly. On the downside, these technologies give even more power to tech companies that already have too much of it.

Yeah, this is actually really ridiculous... the human mind is nothing *but* a pattern matcher. It's like this writer has no knowledge of neuroscience at all, but wants to opine anyway.

  • >> "the human mind is nothing but a pattern matcher"

    wow, tell me you know only a tiny bit of neuroscience without telling me you know only a tiny bit of neuroscience ...

    For starters, the myriad information-filtering functions from the sub-neuron level up to the structural level are entirely different from pattern matching (and are absent from these LLMs).