Comment by sshh12
14 days ago
Not that I agree with all the linked points, but it is weird to me that LeCun consistently states LLMs are not the right path, yet LLMs are still the flagship models they are shipping.
Although maybe he's using an odd definition of what counts as an LLM.
> LeCun consistently states LLMs are not the right path, yet LLMs are still the flagship models they are shipping.
I really don't see what's controversial about this. If that's taken to mean that LLMs are inherently flawed/limited and just represent a local maximum on the overall journey towards developing better AI techniques, I thought that was a pretty universal understanding by now.
A local maximum that keeps rising, with no bar/boundary in sight.
Even a narrow AI can get better with no bar in sight, but it will never get to AGI. That is the argument here.
That is how I read it. Transformer-based LLMs have limitations that are fundamental to the technology. It does not seem crazy to me that a guy involved in research at his level would say they are a stepping stone to something better.
What I find most interesting is his estimate of five years, which is soon enough that I would guess he sees one or more potential successors.
In our field (AI), nobody can see even 5 months ahead, including people who are training a model today to be released 5 months from now. Predicting something 5 years out is about as accurate as predicting something 100 years out.
Which would be a fair point if LeCun hadn't predicted the success of neural networks more broadly about 30 years before most others did.