Comment by keybored

1 month ago

> “The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin”.

> He points out that throughout AI’s history, researchers have repeatedly tried to improve systems by building in human domain knowledge. The “bitter” part of the title comes from what happens next: systems that simply use more computing power end up outperforming these carefully crafted solutions. We’ve seen this pattern in speech recognition, computer chess, and computer vision. If Sutton wrote his essay today, he’d likely add generative AI to that list. And he warns us: this pattern isn’t finished playing out.

According to Chomsky (recalling this in relatively recent years), this is why he didn't want to work at the AI/linguistics intersection when someone asked him to in the mid-'50s: he thought the successful approaches would just use machine learning and have nothing to do with the linguistics he cares about.

Seems intellectually dishonest (given how his theory of universal grammar is all about indispensable biological factors in human languages).