Comment by lostmsu

6 months ago

> Current LLMs can only introspect from output tokens

The only interpretation of this statement I can come up with is plainly wrong. There's no reason an LLM shouldn't be able to introspect without any output tokens. As the GP correctly says, most of the processing in LLMs happens over hidden states. Output tokens are just an artefact for our convenience, one that also happens to be the channel through which the hidden-state processing is trained.

There are no recurrent paths besides tokens. How can I introspect something that is never fed back to me as input? I can't.

  • The recurrence comes from replaying tokens during autoregression.

    It's as if you had a variable in a deterministic programming language, except that to get the next state of the machine (program counter + memory + registers) you have to replay the entire history of the program's computation and input.

    Producing a token for an LLM is analogous to a tick of the clock for a CPU. It's the crank handle that drives the process.
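
    Here is a minimal sketch of that crank-handle loop in plain Python. `decode` and `toy_model` are hypothetical stand-ins, not any real library's API; the point is that the only state carried from tick to tick is the growing token list (a KV cache merely memoizes the replay, it adds no extra recurrent state).

    ```python
    from typing import Callable, List

    def decode(model: Callable[[List[int]], List[float]],
               prompt: List[int], n_steps: int) -> List[int]:
        """Greedy autoregressive decoding: the whole 'machine state' is `tokens`."""
        tokens = list(prompt)
        for _ in range(n_steps):           # one token == one tick of the clock
            logits = model(tokens)         # replay the entire history through the net
            next_token = max(range(len(logits)), key=logits.__getitem__)
            tokens.append(next_token)      # the only recurrent path: back in as input
        return tokens

    # Hypothetical toy "model" over a 10-token vocabulary: it always
    # prefers the successor of the last token, mod 10.
    def toy_model(prefix: List[int]) -> List[float]:
        return [1.0 if t == (prefix[-1] + 1) % 10 else 0.0 for t in range(10)]

    print(decode(toy_model, [0], 5))  # -> [0, 1, 2, 3, 4, 5]
    ```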

> Output tokens are just an artefact for our convenience

That's nonsense. The hidden layers are constructed specifically to increase the probability that the model picks the right next word. Without the output/token-generation stage, the hidden layers are meaningless, just empty noise.

It is fundamentally an algorithm for generating text. If you take the text away, it's just a bunch of fmadds. A mute person can still think; an LLM without output tokens can do nothing.
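
To make that concrete, here is the standard next-token training objective, sketched with PyTorch (`model` and `next_token_loss` are illustrative names, not a specific library's API). The loss, and therefore every gradient that shapes the hidden layers, is defined purely in terms of output-token probabilities:

```python
import torch
import torch.nn.functional as F

def next_token_loss(model, token_ids: torch.Tensor) -> torch.Tensor:
    """Cross-entropy on next-token prediction; `model` maps token ids -> logits."""
    logits = model(token_ids[:, :-1])   # predict from each prefix
    targets = token_ids[:, 1:]          # the actual next tokens
    # Every hidden layer receives its training signal only through
    # this token-level objective.
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1))
```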

  • I think that's almost completely backwards. The input and output layers just convert between natural language and embeddings, i.e. they shift the format of the language. But operating on the embeddings is where meanings (locations in vector space) are actually transformed.
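
    As a sketch of that division of labour, assuming PyTorch (`ToyLM` and its dimensions are illustrative, and positional encodings and causal masking are omitted for brevity): the embedding and unembedding layers are single format-conversion matrices at the edges, while the stack of blocks in between, which only ever transforms vectors, holds the bulk of the parameters.

    ```python
    import torch
    import torch.nn as nn

    class ToyLM(nn.Module):
        """Schematic decoder-only LM: token ids -> vectors -> ... -> token logits."""
        def __init__(self, vocab=32_000, d_model=1024, n_layers=24, n_heads=16):
            super().__init__()
            self.embed = nn.Embedding(vocab, d_model)    # format shift: ids -> vectors
            self.blocks = nn.ModuleList([
                nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
                for _ in range(n_layers)
            ])                                           # vectors -> vectors
            self.unembed = nn.Linear(d_model, vocab)     # format shift: vectors -> logits

        def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
            h = self.embed(token_ids)
            for block in self.blocks:                    # meaning is transformed here
                h = block(h)
            return self.unembed(h)

    m = ToyLM()
    in_blocks = sum(p.numel() for p in m.blocks.parameters())
    at_edges = sum(p.numel() for p in m.embed.parameters()) + \
               sum(p.numel() for p in m.unembed.parameters())
    print(f"blocks: {in_blocks:,} params; embed/unembed: {at_edges:,}")
    # In this toy configuration roughly 80% of the parameters live in the
    # blocks; in production-scale LLMs the share is higher still.
    ```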