Comment by deoxykev
3 months ago
I don't think autoregressive models have a fundamental difference in reasoning capability between latent space and token space. Latent space enables abstract reasoning and pattern recognition, while token space acts both as the discrete interface for communication and as an interaction medium to extend, refine, and synthesize higher-order reasoning over latent space.
Intuitively speaking, most people think of writing as a communication tool. But it's actually also a thinking tool that helps create deeper connections over discrete thoughts, which can only occupy a fixed slice of our attention at any given time. Attentional capacity is the primary limitation, for humans and LLMs alike. So use the token space as extended working memory. Besides, even the Coconut paper got mediocre results. I don't think this is the way.
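To make the distinction we're both gesturing at concrete, here's a toy numpy sketch (the weights and the "model" are random stand-ins, not any real architecture): token-space reasoning collapses the hidden state to a discrete token each step and feeds its embedding back in, while Coconut-style latent reasoning feeds the continuous state straight back.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, HIDDEN = 50, 16

# Toy stand-ins for a trained model's weights (random, illustration only).
W_step = rng.normal(size=(HIDDEN, HIDDEN)) / np.sqrt(HIDDEN)  # latent -> next latent
W_out = rng.normal(size=(HIDDEN, VOCAB)) / np.sqrt(HIDDEN)    # latent -> token logits
W_emb = rng.normal(size=(VOCAB, HIDDEN)) / np.sqrt(VOCAB)     # token  -> latent

def step(h):
    """One 'reasoning step' in latent space."""
    return np.tanh(h @ W_step)

def token_space_reasoning(h, n_steps):
    """CoT style: collapse to a discrete token each step, feed its embedding back."""
    for _ in range(n_steps):
        h = step(h)
        tok = int(np.argmax(h @ W_out))  # discretization: the distribution collapses here
        h = W_emb[tok]                   # but the token is now an inspectable, external trace
    return h

def latent_space_reasoning(h, n_steps):
    """Coconut style: feed the continuous hidden state straight back, nothing is discretized."""
    for _ in range(n_steps):
        h = step(h)
    return h

h0 = rng.normal(size=HIDDEN)
print(token_space_reasoning(h0, 4)[:4])
print(latent_space_reasoning(h0, 4)[:4])
```

The argmax line is where uncertainty gets thrown away each step, and it's also the line that produces the re-readable external trace the "extended working memory" view relies on.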
I appreciate your argument, but I'd add the following nuance:
Latent space reasoning can represent and manipulate UNCERTAINTY more concisely and elegantly than token space reasoning.
If uncertainty is an important signal, then a model RL-conditioned to produce good CoT should be expected to learn how to encode an uncertainty side-channel in its CoT.
If we're fortunate, it'll do so using word choices that would also convey uncertainty to humans. Before you complain that English expressions of uncertainty have poor precision, consider that nothing prevents the LLM from overloading them with a more precise meaning, the way "MAY" in an RFC means something much more concrete than it does in general English. Though unless it's somehow conditioned for it, the uncertainty signal could be something else entirely (including, perhaps, sounding more certain).
This also goes for pretty much any other side information you might hope could be conveyed.
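To make the "overloaded MAY" idea concrete, here's a toy decoder for such a side-channel. The hedge phrases, the probability bands attached to them, and the deliberately naive substring matching are all invented for illustration; a model conditioned this way could just as easily settle on a mapping that looks nothing like ordinary English usage.

```python
# Toy decoder for a hypothetical uncertainty side-channel in CoT text.
HEDGE_BANDS = {
    "almost certainly not": (0.00, 0.10),
    "unlikely": (0.10, 0.40),
    "may": (0.40, 0.70),
    "likely": (0.70, 0.90),
    "almost certainly": (0.90, 0.97),
    "certainly": (0.97, 1.00),
}

def read_sidechannel(cot_line: str):
    """Return (hedge phrase, implied probability band) for one line of CoT."""
    text = cot_line.lower()
    # Match longer phrases first so "almost certainly not" beats "certainly".
    for phrase in sorted(HEDGE_BANDS, key=len, reverse=True):
        if phrase in text:
            return phrase, HEDGE_BANDS[phrase]
    return None, (0.0, 1.0)  # no hedge found: this channel carries no information

print(read_sidechannel("The key MAY be reused across sessions."))
print(read_sidechannel("This is almost certainly not exploitable."))
```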