
Comment by ben_w

5 days ago

> I dont think memorizing stuff is the same as being smart. https://en.wikipedia.org/wiki/Chinese_room

I agree. The problem I have with the Chinese Room thought experiment is this: just as the human mechanically reading books to answer questions they don't understand does not themselves know Chinese, likewise no neuron in the human brain knows how the brain works.

The intelligence, such as it is, is found in the process that generated the structure — of the translation books in the Chinese room, of the connectome in our brains, and of the weights in an LLM.

What comes out of that process is an artefact of intelligence, and that artefact can translate Chinese or whatever.

Because all current AIs take a huge number of examples to learn anything, I think it's fair to say they're not particularly intelligent; but likewise, they can to an extent make up for being stupid by being stupid very, very quickly.

But: this definition of intelligence doesn't really fit "can solve novel puzzles", as there's a lot of room for getting good at that by memorising a lot of things that puzzle-creators tend to do.

And any mind (biological or synthetic) must learn patterns before getting started: the problem of induction* is that no finite number of examples is ever guaranteed to be sufficient to predict the next item in a sequence. In general there is always an infinite set of other possible rules consistent with the examples (though in reality bounded by 2^n, where n is the number of bits required to express the universe in any given state).
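
To make the induction point concrete, here's a minimal Python sketch (an illustration under my own assumptions; the helper lagrange_next is made up for this example, not a library function). Lagrange interpolation can always produce a polynomial rule that reproduces a finite prefix exactly and then continues with any value you like, so the examples alone never determine the next item:

    # Toy illustration of the problem of induction: a finite prefix never
    # pins down the rule. Lagrange interpolation (kept exact with Fraction)
    # yields a polynomial matching 1, 2, 4, 8 that continues with ANY value.
    from fractions import Fraction

    def lagrange_next(prefix, desired_next):
        # Illustrative helper: polynomial through (0, prefix[0]), ...,
        # (len(prefix)-1, prefix[-1]), and (len(prefix), desired_next).
        pts = [(Fraction(i), Fraction(v)) for i, v in enumerate(prefix + [desired_next])]
        def p(x):
            x = Fraction(x)
            total = Fraction(0)
            for i, (xi, yi) in enumerate(pts):
                term = yi
                for j, (xj, _) in enumerate(pts):
                    if j != i:
                        term *= (x - xj) / (xi - xj)
                total += term
            return total
        return p

    prefix = [1, 2, 4, 8]        # "obviously" powers of two...
    for nxt in (16, 15, -1000):  # ...yet every continuation has a matching rule
        p = lagrange_next(prefix, nxt)
        print([int(p(i)) for i in range(5)])
    # prints [1, 2, 4, 8, 16], then [1, 2, 4, 8, 15], then [1, 2, 4, 8, -1000]

(A classic real example of the same trap: in Moser's circle problem the sequence starts 1, 2, 4, 8, 16 and then continues with 31, not 32.)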

I suspect, but cannot prove, that biological intelligence learns from fewer examples for a related reason: evolution has biased our brains towards certain priors from which "common sense" answers tend to follow. And "common sense" is often wrong, cf. Aristotelian physics (never mind Newtonian) instead of QM/GR.

* https://en.wikipedia.org/wiki/Problem_of_induction