
Comment by Jackson__

12 hours ago

From the outside, it always looked like they gave LeCun just barely enough compute for small-scale experiments. They'd publish a promising new paper, show it works at small scale, then never use it in any of their large training runs.

I would have loved to see a VLM utilizing JEPA, for example, but it simply never happened.

The obvious explanation is that they did scale it up, but it turned out to be total shite, like most new architectures.