Comment by sailingparrot

3 months ago

I’m not arguing about efficiency, though? I’m simply saying that next-token predictors cannot be thought of as literally thinking only about the next token with no long-term plan.

They rebuild the "long term plan" anew for every token: there's no guarantee that the reconstructed plan will remain similar between tokens. That's not how planning normally works. (You can find something like this every time there's this kind of gross inefficiency, which is why I gave the general principle.)

  • > They rebuild the "long term plan" anew for every token

    Well no, there is attention in the LLM, which allows it to look back at its "internal thought" from the previous tokens.

    Token T at layer L can attend to a projection of the hidden states of all tokens < T at layer L. So it's definitely not starting anew at every token, and it can iterate on an existing plan.

    It's not a perfect mechanism, for sure, and there is work on letting LLMs carry more information forward (e.g. feedback transformers), but they can definitely do some of that today.
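
    A minimal single-head sketch of that mechanism (plain NumPy; this is a toy illustration, not any particular model's implementation): the causal mask is what lets position t see the hidden states at positions <= t and nothing later.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def causal_attention(h, Wq, Wk, Wv):
    """Single-head causal self-attention over hidden states h (T x d).

    Position t can attend to positions <= t only; future positions
    are masked out before the softmax.
    """
    q, k, v = h @ Wq, h @ Wk, h @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    T = h.shape[0]
    future = np.triu(np.ones((T, T), dtype=bool), k=1)  # strictly later positions
    scores[future] = -np.inf
    return softmax(scores) @ v

# Made-up sizes and random weights, purely for illustration.
rng = np.random.default_rng(0)
d = 8
h = rng.normal(size=(5, d))                              # 5 "token" hidden states
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = causal_attention(h, Wq, Wk, Wv)
# Position 0 can only attend to itself, so out[0] is exactly h[0] @ Wv.
```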

  • Actually, due to using causal (masked) attention, new tokens appended to the input don't have any effect on what's calculated internally (the "plan") at earlier positions in the input, and a modern LLM therefore uses a KV cache rather than recalculating at those earlier positions.

    In other words, the "recalculated" plan will be exactly the same as before, just extended with new planning at the position of each newly appended token.
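
    A toy way to see this (one layer of causal attention in NumPy; real models stack many such layers, but the same argument applies per layer): run the same weights on a prefix and on the extended sequence, and the activations at the prefix positions come out identical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def causal_attention(h, Wq, Wk, Wv):
    """Single-head causal self-attention: position t sees positions <= t."""
    q, k, v = h @ Wq, h @ Wk, h @ Wv
    T = h.shape[0]
    scores = q @ k.T / np.sqrt(k.shape[-1])
    scores[np.triu(np.ones((T, T), dtype=bool), k=1)] = -np.inf
    return softmax(scores) @ v

# Random weights and "tokens", purely for illustration.
rng = np.random.default_rng(1)
d = 8
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
h = rng.normal(size=(6, d))                       # a 6-token sequence

out_prefix = causal_attention(h[:4], Wq, Wk, Wv)  # run on the first 4 tokens
out_full   = causal_attention(h,     Wq, Wk, Wv)  # run on all 6 tokens

# The first 4 rows match: appending tokens never changes the activations
# at earlier positions, which is exactly why a KV cache can store the old
# keys/values instead of recomputing them.
prefix_unchanged = np.allclose(out_full[:4], out_prefix)
```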

    • You can violate the plan in the sampler by making an "unreasonable" choice of next token to sample (e.g. by raising the temperature). So if it does stick to the same plan, it's not going to be a very good one.
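
      A sketch of what temperature does in the sampler (the standard logits-over-temperature softmax; the logit values here are made up): higher temperature flattens the distribution, making "unreasonable" tokens much more likely to be drawn.

```python
import numpy as np

def sample_probs(logits, temperature):
    """Softmax over logits / temperature; higher temperature flattens it."""
    z = np.asarray(logits, dtype=float) / temperature
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([4.0, 2.0, 0.0])  # model strongly prefers token 0

low  = sample_probs(logits, 0.5)    # sharpened: nearly all mass on token 0
high = sample_probs(logits, 5.0)    # flattened: mass spreads over all tokens

# Drawing from the flattened distribution can easily pick token 1 or 2,
# i.e. a token the model's "plan" did not favor.
token = np.random.default_rng(0).choice(3, p=high)
```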

      4 replies →

  • Right, and this is what "reasoning LLMs" work around by having explicitly labeled "reasoning tokens".

    This lets them "save" the plan between tokens, so that when generating each new token the model is following a plan it already wrote down.