Comment by woeirua

4 hours ago

> I don't think AI is intelligent; nor do I think that the current (admittedly impressive) statistical techniques will lead to intelligence.

It’s increasingly difficult to rationalize away the capabilities of AI as not requiring “intelligence”. This point of view continues to require some belief in human exceptionalism.

There is clearly something exceptional (in the true neutral sense of the word) about humans, or more broadly the Homo genus.

If you believe that humans have in fact created artificial intelligence, then that alone makes us currently exceptional.

I agree. It has become increasingly irrelevant whether AI meets a given definition of intelligence when I can talk with it and it understands what I am saying, including a shocking level of nuance.

I think the exceptionalism is the other way around. What makes anyone think they understand what makes for intelligence when we barely understand our own neurology?

  • I'm reminded of a book on my bookshelf (which I still haven't read, story of my life...), by the recently deceased ethologist Frans de Waal, titled 'Are We Smart Enough to Know How Smart Animals Are?'. Of course, Betteridge's law applies to its title.

    In my opinion, the vast multitude of different animal intelligences is a clear hint that language does not an intelligence make. We're animals, and our intelligences did not come from language; language allowed us to supercharge it. We can and do think and make decisions without using language, and the idea that a statistical model based solely on our language can be intelligent does not follow.

    • Hey, I also read that book, and came to basically the opposite conclusion!

      The point of the book is that we've been very bad at testing animal intelligence because of a vast stack of human biases, including things like language and the geometry of our hands.

      Animals with different geometries and no language are still intelligent, but we need to test them in ways which recognize their capabilities. Intelligence is general: it's adaptivity within one's set of constraints.

      De Waal also points out that there was a massive shifting of the definitions of language and intelligence as we became more aware of what animals are capable of.

      From this angle, I would say that LLMs are intelligent: they do adapt to their inputs extremely readily, though they have a particular set of constraints (no physical body (usually), for starters). They are, like chimpanzees, smarter and more capable than humans in some ways, and much dumber in others.

      Finally, the 'statistical learners can't be intelligent' line of argument is extremely short-sighted. Our brains are bags of electrified meat. Evolution somehow figured out a way to make meat think. No individual neuron is intelligent, yet the collection of cells is. We learn by processing experiences with hormonal signals because those hormonal signals are what the meat is capable of working with. LLMs, by contrast, learn by processing examples with backprop. If anything, the intelligence of meat is more surprising.

    • The meaning of tokens loses touch with language in the deeper layers of a large language model's neural net.

      Language is just the input/output modality.