Comment by martythemaniak

14 days ago

LeCun fundamentally doesn't think bigger and better LLMs will lead to anything resembling "AGI", although he thinks they may be a component of AGI. Also, he leads the research division; increasing context length from 2M to 10M is not interesting to him.

He thinks LLMs are a local maximum, not the ultimate one.

That doesn't mean a local maximum can't be useful!

  • If that's what he said, I'd be happy, but I was more concerned about this:

    > His belief is so strong that, at a conference last year, he advised young developers, "Don't work on LLMs. [These models are] in the hands of large companies, there's nothing you can bring to the table. You should work on next-gen AI systems that lift the limitations of LLMs."

    It's ok to say that we'll need to scale other mountains, but I'm concerned that the "Don't" there would push people away from the engineering that would give them the relevant inspiration.

    • > but I'm concerned that the "Don't" there would push people away from the engineering that would give them the relevant inspiration.

      You have way more yea-sayers than nay-sayers. There is never a risk that we don't go hard enough into the current trend; there is, however, a risk that we go too hard into it and ignore other paths.

But ... that's not how science works. There are myriad examples of engineering advances pushing basic science forward. I just can't understand why he'd have such a "fixed mindset" about a field where the engineering is advancing by an order of magnitude every year.

  • > But ... that's not how science works

    Not sure where this is coming from.

    Also, it's important to keep in mind the quote "The electric light did not come from the continuous improvement of candles"

    • Well, having candles and kerosene lamps to work late definitely didn't hurt.

      But in any case, while these things don't work in a predictable way, the engineering work on lightbulbs in your example led to theoretical advances in our understanding of materials science, vacuum technology, and of course electrical systems.

      I'm not arguing that LLMs on their own will certainly lead directly to AGI without any additional insights, but I do think there's a significant chance that advances in LLMs will lead engineers and researchers to the inspiration that helps them make those further insights. It seems silly that he's telling people there's "nothing to see here" and no benefit in being close to the action.


  • Listening to Science Friday today on NPR, the two guests did not think AGI was a useful term. They said it would be better to focus on how useful actual technical advances are than on some sort of generalized human-level AI, which they saw as an ill-defined marketing tool, except insofar as it makes the company many billions of dollars.