Comment by xuancanh
14 hours ago
In industry research, someone in a chief position like LeCun should know how to balance long-term research with short-term projects. However, for whatever reason, he consistently shows hostility toward LLMs and engineering projects, even though Llama and PyTorch are two of the most influential projects from Meta AI. His attitude doesn’t really match what is expected from a Chief position at a product company like Facebook. When Llama 4 got criticized, he distanced himself from the project, stating that he only leads FAIR and that the project falls under a different organization. That kind of attitude doesn’t seem suitable for the face of AI at the company. It's not a surprise that Zuck tried to demote him.
These are the types that want academic freedom in a cut-throat industry setup and conversely never fit into academia because their profiles and growth ambitions far exceed what an academic research lab can afford (barring some marquee names). It's an unfortunate paradox.
Maybe it's time for Bell Labs 2?
I guess everyone is racing towards AGI in a few years or whatever so it's kind of impossible to cultivate that environment.
The Bell Labs we look back on was only the result of government intervention in the telecom monopoly. The 1956 consent decree forced Bell to license thousands of its patents, royalty free, to anyone who wanted to use them. Any patent not listed in the consent decree was to be licensed at "reasonable and nondiscriminatory rates."
The US government basically forced AT&T to use revenue from its monopoly to do fundamental research for the public good. Could the government do the same thing to our modern megacorps? Absolutely! Will it? I doubt it.
https://www.nytimes.com/1956/01/25/archives/att-settles-anti...
Google DeepMind is the closest lab to that idea, because Google is the only entity big enough to approach the scale of AT&T. I was skeptical that the DeepMind and Google Brain merger would be successful, but it seems to have worked surprisingly well. They are killing it with LLMs and image-editing models. They are also backing the fastest-growing cloud business in the world and collecting Nobel prizes along the way.
It seems DeepMind is the closest thing to a well-funded blue-sky AI research group, even after the merger with Google Brain and the shift toward more of a product focus.
I thought that was Google. Regulators pretend not to notice their monopoly; they probably get large government contracts for social engineering and surveillance, laundered through advertising; and the "don't be evil" part is that they make some open source contributions.
https://www.startuphub.ai/ai-news/ai-research/2025/sam-altma...
Like the new spin-out Episteme from OpenAI?
I'd argue SSI and Thinking Machines Lab are close to the environment you're thinking of: industry labs that focus on research without immediate product requirements.
I am of the opinion that splitting AT&T and hence Bell Labs was a net negative for America and rest of the world.
We have yet to create a lab as foundational as Bell Labs.
The fact that people invest in the architecture that keeps getting increasingly better results is a feature, not a bug.
If LLMs actually hit a plateau, then investment will flow towards other architectures.
Why would Bell Labs be a good fit? It was famous for embedding engineers with the scientists to direct research in a more results-oriented fashion.
We call it “legacy DeepMind”
This sounds crazy. We don't even know, or can't define, what human intelligence is or how it works, but we're trying to replicate it with AGI?
> I guess everyone is racing towards AGI in a few years
A pipe dream sustaining the biggest stock market bubble in history. Smart investors are jumping to the next bubble already...Quantum...
Meta has the financial oomph to run multiple Bell Labs within its organization.
Why they decided not to do that is kind of a puzzle.
Because the business hierarchy clearly couldn't support it. Take that for what you will.
More importantly, even if you do want it, and there are business situations that support your ambitions, you still have to get into the managerial power play, which quite honestly takes a separate kind of skill set, time, and effort, which I'm guessing the academia-oriented people aren't willing to invest.
It's pretty much dog eat dog at top management positions.
It's not exactly a space for free-thinking timelines.
It is not a free-thinking paradise in academia either. Different groups fighting for hiring, promotions, and influence exist there too, and it tends to be more pronounced: it is much easier in industry to find a comparable job and escape a toxic environment, so a lot of problems in academic settings fester forever.
But the skill sets needed to avoid and survive personnel issues in academia are different from industry's. My 2c.
> It's not exactly a space for free-thinking timelines.
Same goes for academia. People's visions compete for other people's financial budgets, time and other resources. Some dogs get to eat, study, train at the frontier and with top tools in top environments while the others hope to find a good enough shelter.
I would pose the question differently: under his leadership, did Meta achieve good outcomes?
If the answer is yes, then it's better to keep him, because he has already proved himself and you can win in the long term. With Meta's pockets, you can always create a new department specifically for short-term projects.
If the answer is no, then there's nothing to discuss.
Meta did exactly that, kept him but reduced his scope. Did the broader research community benefit from his research? Absolutely. But did Meta achieve a good outcome? Probably not.
If you follow LeCun on social media, you can see that the way FAIR’s results are assessed is very narrow-minded and still follows the academic mindset. He mentioned that his research is evaluated by: "Research evaluation is a difficult task because the product impact may occur years (sometimes decades) after the work. For that reason, evaluation must often rely on the collective opinion of the research community through proxies such as publications, citations, invited talks, awards, etc."
But as an industry researcher, he should know how his research fits with the company vision and be able to assess that easily. If the company's vision is to be the leader in AI, then as of now, he seems to have failed that objective, even though he has been at Meta for more than 10 years.
Also, he always sounds like "I know this will not work." Dude, are you a researcher? You're supposed to experiment and follow the results. That's what separates you from oracles and freaking philosophers or whatever.
I believe the fact that Chinese models are beating the crap out of Llama means it's a huge no.
Why? The Chinese are very capable. Most DL papers have at least one Chinese name on them. That doesn't mean the authors are in China, but it's telling.
LeCun was always part of FAIR, doing research, not part of the LLM/product group, who reported to someone else.
Wasn't the original LLaMA developed by FAIR Paris?
Then we should ask: will Meta come close enough to fulfilling the promises made, or will it keep achieving merely good-enough outcomes?
The LLM hostility was warranted. The overhyped, downright charlatan nature of AI hype and marketing threatens another AI winter. It happened to cybernetics; it'll happen to us too. The finance folks will be fine, they'll move on to the next big thing to overhype; it is the researchers who suffer the fallout. I am considered anti-LLM (anti-transformer, anyway) for this reason. I like the architecture, it is cool and rather capable at its problem set, which is a unique set, but it isn't going to deliver any of what has been promised, any more than a plain DNN or a CNN will.
Meta is in last place among the big tech companies making an AI push because of LeCun's LLM hostility. Refusing to properly invest in the biggest product breakthrough of this century was not even a little bit warranted. He had more than enough resources available to do the research he wanted and create a fantastic open-source LLM.
Meta has made some fantastic LLMs publicly available, many of which continue to outperform all but the Qwen series in real-world applications.
LLMs cannot do any of the major claims made for them, so competing at the current frontier is a massive resource waste.
Right now a locally running 8B model with a large context window (10k+ tokens) beats Google/OpenAI models easily on any task you like.
Why would anyone then pay for something that can be run on consumer hardware with higher tokens/second throughput and better performance? What exactly have the billions invested given Google/OpenAI in return? Nothing more than an existential crisis, I'd say.
Companies aren't trying to force AI costs into their subscription models in dishonest ways because they've got a winning product.
Meta had a two-pronged AI approach: a product-focused group working on LLMs, and a blue-sky research group (FAIR) working on alternate approaches, such as LeCun's JEPA.
It seems they've given up on the research and are now doubling down on LLMs.
LeCun truly believes the future is in world models. He’s not alone. Good for him to now be in the position he’s always wanted and hopefully prove out what he constantly talks about.
He seems stuck in the GOFAI development philosophy where they just decide humans have something called a "world model" because they said so, and then decide that if they just develop some random thing and call it a "world model" it'll create intelligence because it has the same name as the thing they made up.
And of course it doesn't work. Humans don't have world models. There's no such thing as a world model!
I don't think the focus is really on world models so much as on animal intelligence built around predicting the real world; but to predict the world you need to model it in some sense.
Product companies with deprioritized R&D wings are the first ones to die.
Apple doesn't have an "R&D wing". It's a bad idea to split your company into the cool part and the boring part.
Isn't that why Siri is worse today than it was thirteen years ago?
Hasn't happened to Google yet
Has Google depriortized R&D?
None of Meta's revenue has anything to do with AI at all. (Other than GenAI slop in old people's feeds.) Meta is in the strange position of investing very heavily in multiple fields where they have no successful product: VR, hardware devices, and now AI. Ad revenue funds it all.
LLMs help ad efficiency a lot: policy labels, targeting, adaptive creatives, landing-page evals, etc.
Underrated comment
It's very hard (almost impossible) to lead both Applied Research, which optimizes for product/business outcomes, and Fundamental Research, which optimizes for novel ideas, especially at the scale of Meta.
LeCun chose to focus on the latter. He can't be blamed for not wearing the second hat as well.
Yes he can. If he wanted to focus on fundamental research he shouldn’t have accepted a leadership position at a product company. He knew going in that releasing products was part of his job and largely blew it.
This is the right take. He is obviously a pioneer and far more knowledgeable in the field than Wang, but if you no longer have the product mindset to serve the company's business interests in both the short and long term, you may as well stay in academia and be your own research director, rather than be a chief executive at one of the largest public companies.
Yann was never a good fit for Meta.
Agreed, I am surprised he is happy to stay this long. He would have been on paper a far better match at a place like pre-Gemini-era Google
Yann was in charge of FAIR, which has nothing to do with Llama 4 or the product-focused AI orgs. In general your comment is filled with misrepresentations. Sad.
FAIR having shit for products is the whole reason he is being demoted/fired. Yes, he had nothing to do with applied research, that was the problem.
Lecun has also consistently tried to redefine open source away from the open source definition.
I totally agree. He appeared to act against his employer and actively undermined Meta's efforts to attract talent through his behavior on X.
And I stopped reading him, since he, in my opinion, reflexively trashed everything the 99% did, and those 99% were already beyond two standard deviations of greatness.
It is even more problematic if you have absolutely no results, e.g. products, to back your claims.
To be fair, transformers are hugely wasteful from a developmental perspective. They're long-range stable, sure, but the whole training process requires so much power and data compared to even slightly simpler model designs that I can see why people are drawn to alternative architectures that play down the reliance on pure attention.
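The "wasteful" point has a concrete arithmetic core: self-attention's cost grows quadratically with sequence length, while per-token blocks like the MLP grow linearly. A back-of-envelope sketch, using the standard rough rule that a matmul of shapes (a×b)·(b×c) costs about 2·a·b·c FLOPs (my assumption, not a figure from this thread):

```python
# Rough per-layer FLOP counts for one self-attention layer vs. one MLP block.
# n = sequence length, d = model dimension. Formulas are standard estimates.

def attention_flops(n: int, d: int) -> int:
    proj = 4 * 2 * n * d * d   # Q, K, V, and output projections (linear in n)
    scores = 2 * n * n * d     # Q @ K^T (quadratic in n)
    mix = 2 * n * n * d        # softmax(scores) @ V (quadratic in n)
    return proj + scores + mix

def mlp_flops(n: int, d: int, expansion: int = 4) -> int:
    # Two matmuls per token: d -> expansion*d -> d (linear in n).
    return 2 * n * d * expansion * d + 2 * n * expansion * d * d

if __name__ == "__main__":
    d = 128
    for n in (512, 2048, 8192):
        ratio = attention_flops(n, d) / mlp_flops(n, d)
        print(f"n={n:5d}: attention/MLP FLOP ratio ~ {ratio:.1f}")
```

Once n grows well past d, the quadratic score/mix terms dominate, which is why long-context training is where the power and data bills pile up.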