Comment by mordymoop · 9 days ago

What LeCun is doing doesn’t really look like “changing your mind.”

If your model of reality makes good predictions and mine makes bad ones, and I want a more accurate model of reality, then I really shouldn’t just make small provisional and incremental concessions gerrymandered around whatever the latest piece of evidence is. After a few repeated instances, I should probably just say “oops, looks like my model is wrong” and adopt yours.

This seems to be a chronic problem with AI skeptics of various sorts. They clearly tell us that their grand model indicates that such-and-such a quality is absolutely required for AI to achieve some particular thing. Then LLMs achieve that thing without having that quality. Then they say something vague about how maybe LLMs have that quality after all, somehow. (They are always shockingly incurious about explaining this part. You would think understanding it would matter to them, given that they tend to call themselves “scientists”.)

They never take the step of admitting that maybe they’re completely wrong about intelligence, or that they’re completely wrong about LLMs.

Here’s one way of looking at it: if they had really changed their minds, they would stop being consistently wrong.