Comment by andblac

1 day ago

At first glance, this reminds me of how branch prediction is used in CPUs to speed up execution. As I understand it, this is like a form of soft branch prediction over language trajectories: a small model predicts what the main model will do, runs a few steps ahead, and the results are then verified (and that verification can be done in parallel). If it checks out, you just jump forward; if not, you take a miss, but misses are rare. I find it funny how simple but far-reaching ideas like this keep coming up in different contexts throughout the history of our technological development. Of course, ideas, as always, are cheap. The hard part is how to actually use them and cash in on them.
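
To make the analogy concrete, here is a minimal sketch of that draft-and-verify scheme (essentially speculative decoding) under greedy decoding. `draft_next` and `target_argmax` are hypothetical stand-ins for the small and large models' next-token functions, not any particular library's API:

```python
# Minimal sketch of speculative ("draft-and-verify") decoding with greedy
# verification. draft_next(seq) and target_argmax(seq) are hypothetical
# stand-ins for the small and large models; not a real library's API.

def speculative_step(prompt, draft_next, target_argmax, k=4):
    """Propose k tokens with the draft model, then keep the longest
    prefix the target model agrees with (plus one corrected token)."""
    # 1. The cheap draft model guesses k tokens autoregressively.
    drafted, ctx = [], list(prompt)
    for _ in range(k):
        t = draft_next(ctx)
        drafted.append(t)
        ctx.append(t)

    # 2. The target model checks every drafted position. In a real system
    #    these k checks are one batched forward pass (that parallelism is
    #    where the speedup comes from); here they are sequential calls.
    accepted = []
    for i, t in enumerate(drafted):
        expected = target_argmax(list(prompt) + drafted[:i])
        if expected == t:
            accepted.append(t)         # "prediction hit": jump forward
        else:
            accepted.append(expected)  # "miss": take the target's token, stop
            break
    return list(prompt) + accepted
```

In the worst case you still advance by one target-model token per step; in the best case you jump k tokens forward for roughly the cost of a single target pass.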

A lot of optimizations in LLM inference right now are low-hanging fruit inspired by techniques from classical computer science. Another one that comes to mind is paged KV caching, which is based on memory paging in operating systems.
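
For a rough picture of that analogy, here's a toy sketch of a paged KV cache: fixed-size blocks ("pages") allocated on demand, with a per-sequence block table playing the role of a page table. The block size, free-list scheme, and all names are illustrative, not any specific inference engine's layout:

```python
# Toy sketch of paged KV caching: the cache is split into fixed-size
# blocks ("pages"), and each sequence keeps a block table mapping its
# logical positions to physical blocks -- the same trick as virtual-memory
# page tables. Everything here is illustrative, not a real engine's layout.

BLOCK_SIZE = 16  # tokens per KV block

class PagedKVCache:
    def __init__(self, num_blocks):
        self.free = list(range(num_blocks))  # pool of free physical block ids
        self.tables = {}                     # seq_id -> list of block ids
        self.lengths = {}                    # seq_id -> tokens stored so far
        self.slots = {}                      # (block_id, offset) -> kv entry

    def append(self, seq_id, kv):
        """Store one token's KV entry, allocating a new block only when
        the sequence crosses a block boundary."""
        table = self.tables.setdefault(seq_id, [])
        n = self.lengths.get(seq_id, 0)
        if n % BLOCK_SIZE == 0:              # current page full (or first token)
            table.append(self.free.pop())    # grab a physical block on demand
        block = table[n // BLOCK_SIZE]
        self.slots[(block, n % BLOCK_SIZE)] = kv
        self.lengths[seq_id] = n + 1

    def release(self, seq_id):
        """Return a finished sequence's blocks to the pool, so memory is
        never reserved up front for the maximum possible sequence length."""
        for block in self.tables.pop(seq_id, []):
            self.free.append(block)
        self.lengths.pop(seq_id, None)
```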