LeJEPA: Provable and Scalable Self-Supervised Learning Without the Heuristics

13 hours ago (arxiv.org)

This Yann LeCun lecture is a nice summary of the conceptual model behind JEPA (+ why he isn't a fan of autoregressive LLMs): https://www.youtube.com/watch?v=yUmDRxV0krg

  • Is there a summary? Every time I try to understand more of what LeCun is saying, all I see are strawmen of LLMs (like claims that LLMs cannot learn a world model, or that next-token prediction is insufficient for long-range planning). There are lots of tweaks you can make to LLMs without fundamentally changing the architecture, e.g. looped latents, or adding additional models as preprocessors for input embeddings (the way image tokens are formed).

    I can buy that a pure next-token-prediction inductive bias for training might turn out to be inefficient (e.g. there's clearly lots of information in the residual stream that's being thrown away), but it's not at all obvious a priori, to me as a layman at least, that the transformer architecture is a "dead end".

    • That's the issue I have with criticism of LLMs.

      A lot of people say "LLMs are fundamentally flawed, a dead end, and can never become AGI", but on deeper examination the arguments are weak at best and completely bogus at worst. And then the suggested alternatives fail to outperform the baseline.

      I think by now, it's clear that pure next token prediction as a training objective is insufficient in practice (might be sufficient in the limit?) - which is why we see things like RLHF, RLAIF and RLVR in post-training instead of just SFT. But that says little about the limitations of next token prediction as an architecture.

      Next token prediction as a training objective still allows an LLM to learn an awful lot of useful features and representations in an unsupervised fashion, so it's not going away any time soon. But I do expect to see modified pre-training, with other objectives alongside it, to start steering the models towards features that are useful for inference early on.

    • The criticisms are not strawmen; they are actually well grounded in math. For instance, consider his promotion of energy-based models.

      In a probability-distribution model, the model is always forced to output a probability for a set of tokens, even if all the states are nonsense. In an energy-based model, the model can infer that a state makes no sense at all and can backtrack by itself.

      Notice that diffusion models, DINO, and other successful models are energy-based models, or end up being good proxies for the data density (density is a proxy for entropy ~ information).

      Finally, all probability models can be thought of as energy-based, but not all EBMs output probability distributions.

      So, his argument is not against transformers or the architectures themselves, but more about the learned geometry.
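      To make the probability-vs-energy contrast concrete, here's a minimal numpy sketch (a toy convention for illustration, not any particular EBM): a softmax head must spread probability mass over its candidates even when every candidate is implausible, while unnormalized energies can all be high at once.

```python
import numpy as np

# Toy contrast between a probabilistic head and an energy head.
# Convention here (an assumption for illustration): energy = -score,
# so low energy means plausible, high energy means implausible.

def softmax_probs(scores):
    # A probabilistic model must normalize: outputs always sum to 1,
    # so some candidate gets real probability mass even if all are nonsense.
    e = np.exp(scores - scores.max())
    return e / e.sum()

def energies(scores):
    # An energy-based model skips normalization: every candidate can be
    # assigned high energy at once, i.e. "none of these make sense".
    return -scores

scores = np.array([0.10, 0.20, 0.15])  # three weak, roughly equal candidates
p = softmax_probs(scores)
E = energies(scores)

print(p.sum())  # ~1.0: the softmax is forced to commit its mass somewhere
print(E)        # all energies similarly high: free to reject everything
```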

I am a bit confused by the benchmark comparison they are doing. Comparing a domain-specific "LeJEPA" on astronomy images against general models that are not explicitly fine-tuned on astronomy images seems misleading to me.

Does anybody understand why that benchmark might still be reasonable?

  • The comparison is against general models which are explicitly fine-tuned. Specifically, they pre-train their models on unlabeled in-domain images, take DINO models pre-trained on internet-scale general images, and then fine-tune both on a small number of labeled in-domain images.

    The idea is to show that unsupervised pre-training on your target data, even if you don't have a lot of it, can beat transfer learning from a larger, but less focused dataset.
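    The protocol described above can be sketched like this (a hypothetical, stub-level illustration; none of these names or numbers come from the paper, they just show the structure of the comparison):

```python
# Two arms differ only in pre-training; both fine-tune on the same small
# labeled set, so the comparison isolates in-domain SSL vs. general transfer.

def pretrain_in_domain(unlabeled):      # LeJEPA-style SSL on target-domain data
    return {"init": "in-domain SSL", "pretrain_images": len(unlabeled)}

def load_general_pretrained():          # DINO-style internet-scale backbone
    return {"init": "general pre-training", "pretrain_images": 10**9}

def finetune(model, labeled):           # identical labeled data for both arms
    return {**model, "finetuned_on": len(labeled)}

unlabeled_astro = range(50_000)         # plentiful unlabeled in-domain images
labeled_astro = range(1_000)            # scarce labels

arms = [pretrain_in_domain(unlabeled_astro), load_general_pretrained()]
finetuned = [finetune(m, labeled_astro) for m in arms]

for m in finetuned:
    print(m["init"], "-> fine-tuned on", m["finetuned_on"], "labeled images")
```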

> using imagenet-1k for pretraining

LeCun still can't show JEPA competitive at scale with autoregressive LLMs.

JEPA shows little promise over traditional objectives in my own experiments.

  • What type of experiments did you run in less than a week to be so dismissive? (seriously curious)

    • JEPA has been around for quite a while now, so many labs have had time to assess its viability.

A more optimistic signal: it's very early innings on the architectural side of AI, with many more orders of magnitude of power-to-intelligence efficiency to come, and less certainty that today's giants' advantages will be durable.

  • I've seen too many "architectural breakthroughs" that failed to accomplish anything at all to be this bullish on architectural gains.