Comment by throwaw12
14 hours ago
I would pose the question differently: under his leadership, did Meta achieve a good outcome?
If the answer is yes, then it's better to keep him, because he has already proved himself and you can win in the long term. With Meta's pockets, you can always create a new department specifically for short-term projects.
If the answer is no, then there's nothing to discuss here.
Meta did exactly that: it kept him but reduced his scope. Did the broader research community benefit from his research? Absolutely. But did Meta achieve a good outcome? Probably not.
If you follow LeCun on social media, you can see that the way FAIR's results are assessed is very narrow and still follows the academic mindset. He mentioned that his research is evaluated like this: "Research evaluation is a difficult task because the product impact may occur years (sometimes decades) after the work. For that reason, evaluation must often rely on the collective opinion of the research community through proxies such as publications, citations, invited talks, awards, etc."
But as an industry researcher, he should know how his research fits the company's vision and be able to assess that easily. If the company's vision is to be the leader in AI, then as of now he seems to have failed that objective, even though he has been at Meta for more than 10 years.
Also, he always sounds like "I know this will not work." Dude, are you a researcher? You're supposed to experiment and follow the results. That's what separates you from oracles and freaking philosophers or whatever.
Philosophers are usually more aware of their own not-knowing than you give them credit for. (And oracles are famously vague, too.)
Do you know that all formally trained researchers have a Doctor of Philosophy, or PhD, to their name? [1]
[1] Doctor of Philosophy:
https://en.wikipedia.org/wiki/Doctor_of_Philosophy
He probably predicted the asymptote everyone is approaching right now.
He's speaking to the entire feedforward, Transformer-based paradigm. He sees little point in continuing to squeeze more blood out of that stone, and would rather move on to more appropriate ways of modeling ontologies per se than the embedding-based methods popular today, which are crude for what we use them for.
I really resonate with his view, given my background in physics and information theory. I for one welcome his new experimentation in other realms while so many still hack away at their LLMs in pursuit of SOTA benchmarks.
I believe the fact that Chinese models are beating the crap out of Llama means it's a huge no.
Why? The Chinese are very capable. Most DL papers have at least one Chinese name on them. That doesn't mean the authors are Chinese, but it's telling.
Most papers are also written in the same language; what's your point?
Is an American model Chinese because Chinese people were on the team?
LeCun was always part of FAIR, doing research; he was not part of the LLM/product group, which reported to someone else.
Wasn't the original LLaMA developed by FAIR Paris?
I hadn't heard that, but he was heavily involved in a cancelled project called Galactica that was an LLM for scientific knowledge.
Then we should ask: will Meta come close enough to fulfilling the promises it has made, or will it keep settling for good-enough outcomes?