Comment by obblekk

2 years ago

"Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time."

Chomsky has a great point here. Humans have such a strong prior over how the world works that they polarize their beliefs quickly. For most humans, on most questions, saying "there's an 80% chance X is true," "I believe X is true," and "I'm 100% certain X is true" are identical statements.

This tendency is so strong that much of the Enlightenment was the radical idea that beliefs can be partially updated based on reason and evidence, with less appeal to polarizing emotion. It shows up in day-to-day learning as well: we predict our way around the world assuming almost everything will behave as it did last time.

In this way, AI learning and human learning are in fact different.
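To make the contrast concrete, here's a toy sketch (entirely my own illustration; the likelihoods, the 0.8 threshold, and the function names are made-up assumptions): a plain Bayesian updater nudges its probability a little with each observation, while a "polarized" updater snaps to 0 or 1 once it crosses a confidence threshold and then stops moving.

```python
def bayes_update(p, lik_true, lik_false):
    """Standard Bayesian update: the belief shifts gradually with each observation."""
    num = p * lik_true
    return num / (num + (1 - p) * lik_false)

def polarized_update(p, lik_true, lik_false, threshold=0.8):
    """Toy model of the human tendency described above: once the belief
    crosses a confidence threshold, it snaps to certainty and stays there."""
    p = bayes_update(p, lik_true, lik_false)
    if p >= threshold:
        return 1.0
    if p <= 1 - threshold:
        return 0.0
    return p

# A stream of observations, each 70% likely if the hypothesis is true
# and 40% likely if it is false.
ml_belief = human_belief = 0.5
for step in range(10):
    ml_belief = bayes_update(ml_belief, 0.7, 0.4)
    human_belief = polarized_update(human_belief, 0.7, 0.4)
    print(f"step {step}: gradual={ml_belief:.3f}  polarized={human_belief:.3f}")
```

Run it and the gradual belief keeps inching toward 1 without ever committing, while the polarized one locks in at 1.0 after a few observations and never updates again.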

But Chomsky is wrong about some key points. First, an AI that doesn't polarize its beliefs the way humans do could still achieve human-level cognition. It may not reach the same conclusions in the same way, but I don't think that proves it cannot reach conclusions at all.

Chomsky is also wrong that GPT3.x is not a step in that direction. Most of his observations and screenshots are heavily constrained by the trust & safety layer, which was programmed by humans, not learned. Sydney clearly demonstrated the underlying model's true capabilities.

Finally, I have to say I'm super impressed that Chomsky, 94 years old with many lifetimes' worth of contributions to humanity, is still reading dense technical papers, like those on LLMs' ability to learn non-human grammars. I hope he's able to keep experimenting, reading, and learning.