Comment by chaoz_
12 hours ago
I agree. I never understood LeCun's statement that we need to pivot toward the visual aspects of things because the bitrate of text is low while visual input through the eye is high.
Text and language contain structured information and encode a lot of real-world complexity (or at least model it).
Not saying we won't pivot to visual data or world simulations, but he was clearly not the type of person to compete with other LLM research labs, nor did he propose any alternative that could be used to create something interesting for end-users.
Text and language contain only approximate information, filtered through human eyes and brains. Also, animals don't have language and yet show quite advanced capabilities compared to what we can currently do in robotics. And if you do enough mindfulness you can dissociate cognition/consciousness from language. I think we are lured in by how important language is to us humans, but intuitively it's obvious to me that language (and LLMs) is only a subcomponent, or even irrelevant, for, say, self-driving or robotics.
Seems like that "approximation" is perfectly sufficient for just about any task.
That whole take about language being basically useless without a human mind to back it lost its legs in 2022.
Meanwhile, what do those "world model" AIs actually do? Video generation? Meta didn't release anything like that. Robotics, self-driving? Also basically nothing from Meta there.
Other companies, meanwhile, are perfectly content with bolting multimodal transformers together for robotics tasks: Gemini Robotics is a research example, while the modern Tesla FSD stack is a production-grade one. Gemini even uses a language transformer as a key part of its stack.
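To make that concrete, here's a rough sketch of what "bolting multimodal transformers together" looks like in a vision-language-action setup: a vision encoder turns a camera frame into tokens, and a language-style transformer conditions on those tokens plus a text instruction to emit discrete action tokens. None of the names below are Gemini's or Tesla's actual APIs; both model classes are hypothetical toy stand-ins so the data flow is visible and the example runs.

```python
# Toy sketch of a vision-language-action pipeline (all names hypothetical).
import numpy as np

class ToyVisionEncoder:
    """Stand-in for a ViT-style encoder: camera frame -> patch embeddings."""
    def encode(self, frame: np.ndarray) -> np.ndarray:
        n_patches, dim = 64, 512
        return np.random.randn(n_patches, dim)

class ToyPolicyTransformer:
    """Stand-in for a language-transformer backbone that emits action tokens."""
    def generate(self, image_tokens: np.ndarray, instruction: str, n_tokens: int = 7) -> list[int]:
        # A real system would autoregressively decode; here we just sample.
        return list(np.random.randint(0, 256, size=n_tokens))

def act(frame: np.ndarray, instruction: str) -> np.ndarray:
    encoder, policy = ToyVisionEncoder(), ToyPolicyTransformer()
    image_tokens = encoder.encode(frame)                        # vision transformer
    action_tokens = policy.generate(image_tokens, instruction)  # language transformer
    # De-tokenize the discrete bins into continuous joint deltas (toy scheme).
    return (np.array(action_tokens, dtype=np.float32) - 128.0) / 128.0

if __name__ == "__main__":
    fake_frame = np.zeros((224, 224, 3), dtype=np.uint8)
    print(act(fake_frame, "pick up the red block"))
```

The point isn't the specifics, just that the "world model" ends up being whatever the stacked transformers learn implicitly, rather than a separately engineered component.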
That's where the research is leading.
The issue is context. Trying to make an AI assistant with text-only inputs is doable but limiting. You need to know the _context_ of all the data, and without visual input most of it is useless.
For example "Where is the other half of this" is almost impossible to solve unless you have an idea of what "this" is.
But to do that you need cameras, and to use cameras you need position, object, and people tracking. And that is a hard problem that's not solved.
The hypothesis is that "world models" solve that with an implicit understanding of the world and the objects in context. (Rough sketch of the grounding problem below.)
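For what it's worth, here's a toy Python sketch of that grounding problem: without some tracked visual state, "this" in "Where is the other half of this?" can't be resolved at all, while with even a crude tracker it becomes a lookup. The tracker and the pairing rule here are hypothetical simplifications, not how any real system works.

```python
# Toy illustration: resolving a deictic query ("this") from tracked camera state.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    object_id: str
    label: str          # e.g. "left shoe"
    pair_key: str       # objects sharing a pair_key belong together
    location: str       # where the camera last saw it
    last_seen_s: float  # timestamp of the most recent observation

def resolve_deictic_query(tracked: list[TrackedObject]) -> str:
    """Answer "Where is the other half of this?" from tracked visual state."""
    if not tracked:
        return "No visual context, so I can't tell what 'this' refers to."
    # Assume "this" is the object the user interacted with most recently.
    this = max(tracked, key=lambda o: o.last_seen_s)
    # Look for its counterpart among everything else the cameras have seen.
    for other in tracked:
        if other.pair_key == this.pair_key and other.object_id != this.object_id:
            return f"The other {other.label} is on the {other.location}."
    return f"I saw the {this.label} you're holding, but not its counterpart."

# Example: the assistant has been tracking two shoes via room cameras.
state = [
    TrackedObject("obj1", "left shoe", "blue_sneakers", "hallway floor", 10.0),
    TrackedObject("obj2", "right shoe", "blue_sneakers", "bedroom shelf", 42.0),
]
print(resolve_deictic_query(state))  # -> "The other left shoe is on the hallway floor."
```

A text-only assistant simply has no `tracked` state to query; the world-model bet is that this state is learned implicitly instead of hand-built like this.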
If LeCun's research had made Meta a powerhouse of video generation or general-purpose robotics (the two promising directions that benefit from working with visual I/O and world modeling as LeCun sees it), it could have been a justified detour.
But that sure didn't happen.