Comment by sshh12

14 days ago

Not that I agree with all the linked points, but it is weird to me that LeCun consistently states LLMs are not the right path, yet LLMs are still the main flagship model they are shipping.

Although maybe he's using an unusual definition of what counts as an LLM.

https://www.threads.net/@yannlecun/post/DD0ac1_v7Ij?hl=en

> LeCun consistently states LLMs are not the right path yet LLMs are still the main flagship model they are shipping.

I really don't see what's controversial about this. If it means that LLMs are inherently flawed/limited and just represent a local maximum in the overall journey toward developing better AI techniques, I thought that was a pretty universal understanding by now.

That is how I read it. Transformer-based LLMs have limitations that are fundamental to the technology. It does not seem crazy to me that someone involved in research at his level would say they are a stepping stone to something better.

What I find most interesting is his estimate of five years, which is soon enough that I would guess he sees one or more potential successors.

  • In our field (AI), nobody can see even 5 months ahead, including people who are training a model today to be released 5 months from now. Predicting something 5 years out is about as accurate as predicting something 100 years out.