Comment by WarmWash

5 hours ago

Language really only exists at the input and output surfaces of the models. In the middle it's all numerical values. You might be quick to relate that to just being a numeric cipher of the words, which, while not totally false, misses that it's also a numeric cipher of anything. You can train a transformer on anything you can assign tokens to.
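A toy sketch of that last point (made-up vocabularies, not any real tokenizer): the model only ever sees integer IDs, and the same embedding + attention stack doesn't care where they came from.

    # A transformer only ever sees integer token IDs. The same layers work
    # whether the IDs came from English text, DNA bases, MIDI events, etc.
    # (Both vocabularies below are hypothetical.)
    text_vocab = {ch: i for i, ch in enumerate("abcdefghijklmnopqrstuvwxyz ")}
    dna_vocab  = {ch: i for i, ch in enumerate("ACGT")}

    def tokenize(s, vocab):
        return [vocab[ch] for ch in s]

    print(tokenize("hello world", text_vocab))  # [7, 4, 11, 11, 14, 26, 22, 14, 17, 11, 3]
    print(tokenize("GATTACA", dna_vocab))       # [2, 0, 3, 3, 0, 1, 0]
    # Either list of ints can be fed to the same embedding table and attention layers.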

That's not my point. I'm talking about something far more mundane - transformers do inference over raw tokens and perform an n^2 loop over them, but the tokens themselves are the context. So it's better to have more raw tokens in your input that all nudge it toward the right idea space, even if technically it doesn't need all of them. ICL and CoT have been studied extensively at this point; these are well-known phenomena.
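What I mean by the n^2 loop, as a bare-bones numpy sketch of single-head self-attention (made-up sizes, not any particular implementation):

    import numpy as np

    n, d = 6, 8                          # n tokens, d-dim embeddings (arbitrary sizes)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(n, d))          # stand-in for the token embeddings

    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv

    scores = Q @ K.T / np.sqrt(d)        # n x n: every token scores against every token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over all n tokens
    out = weights @ V                    # each output mixes in values from ALL tokens

    print(scores.shape)                  # (6, 6) -- the n^2 interaction

Every extra token adds a full row and column to that score matrix, so tokens that point at the right idea space get to pull every other position's representation in that direction, whether or not they were strictly necessary.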

This applies to any transformer-based architecture, including JEPA, which tries to make the tokens predict some kind of latent space (I've separately heard arguments as to why the two are equivalent, but that's a different discussion).
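For contrast, a purely illustrative caricature of the difference in training target (not JEPA's actual loss or architecture; every name, size, and encoder here is made up):

    import numpy as np

    rng = np.random.default_rng(1)
    vocab_size, dim = 50, 16

    def encode(tokens):                  # stand-in for some transformer encoder
        return rng.normal(size=(len(tokens), dim))

    context, target = [3, 7, 2, 9], [5, 1]

    # LLM-style objective: predict the target *token IDs* (cross-entropy over the vocab).
    logits = encode(context).mean(axis=0) @ rng.normal(size=(dim, vocab_size))
    log_probs = logits - np.log(np.exp(logits).sum())
    token_loss = -log_probs[target].mean()

    # JEPA-style objective: predict the target's *latent embedding* instead
    # (here an L2 loss against a separate target encoder, whose gradients
    # are normally stopped).
    pred_latent, tgt_latent = encode(context).mean(axis=0), encode(target).mean(axis=0)
    latent_loss = np.mean((pred_latent - tgt_latent) ** 2)

    print(token_loss, latent_loss)

Either way, the input side is still a sequence of tokens going through the same n^2 attention, which is why the point above carries over.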

Similarly, none of our comments actually exist as language on Hacker News—just numerical values from the ASCII table. We're deluding each other into thinking we're using language.
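Quite literally:

    print([ord(c) for c in "language"])  # [108, 97, 110, 103, 117, 97, 103, 101]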

  • I believe it's reasonably clear that our thought processes generally occur outside of language. We do use language during explicit reasoning, but most thinking occurs heuristically. It's on par with the thinking of animals that don't use language but exhibit complex behavior.

    It's not clear to me how well that maps onto LLMs. Our wetware predates language and isn't derived from it; language is built on top. LLMs are derived from language. I think that means the intermediate layers are very different from the brain's neurons, but I don't know. It's eerie how well the former emulates the latter.