
Comment by rstuart4133

6 days ago

> Context is the plateau. It's why RAM prices are spiking.

Yes, context is the plateau. But I don't think the bottleneck is RAM. The mechanism described in "Attention is all you need" is O(N^2), where N is the size of the context window. I can "feel" this in everyday usage: as the context window grows, the model's responses slow down, a lot. That's compute being serialised because there aren't enough resources to run it in parallel. The limiting resources are more likely compute and memory bandwidth than RAM capacity.
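
For anyone who hasn't stared at the maths, here's a minimal NumPy sketch of that mechanism (names and shapes are mine, purely illustrative). The N x N score matrix is exactly where the O(N^2) comes from:

```python
# Toy single-head attention, per "Attention Is All You Need".
# The scores matrix is N x N, so compute and memory grow as
# O(N^2) in the context length N. Illustrative sketch only.
import numpy as np

def attention(Q, K, V):
    # Q, K, V: (N, d) arrays of queries, keys, values.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # (N, N) -- the O(N^2) term
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # (N, d)

N, d = 4096, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))
out = attention(Q, K, V)  # doubling N quadruples the scores matrix
```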

If there is a breakthrough, I suspect it will come from models turning the O(N^2) into O(N * ln(N)), which is generally how we speed things up in computer science (a toy sketch of where the log factor comes from is below). That implies abstracting the knowledge in the context window into a hierarchical tree, so the attention mechanism only has to look across a single level of the tree at a time. That in turn requires the model to learn and memorise all those abstract concepts.
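
To make that concrete, here's a toy sketch of my own (not any published model's method), assuming a binary tree of mean-pooled summaries over the sequence: each token attends to one summary per tree level, so it's O(log N) keys per query and O(N * log N) work overall.

```python
import numpy as np

def tree_attention(X):
    """Toy hierarchical attention: each of the N tokens attends to
    one mean-pooled summary per tree level, i.e. O(log N) keys per
    query and O(N log N) work overall. Assumes N is a power of 2.
    Illustrative only."""
    N, d = X.shape
    levels = int(np.log2(N))
    summaries, pooled = [], X
    for _ in range(levels):
        # Halve the sequence by mean-pooling adjacent pairs.
        pooled = pooled.reshape(-1, 2, d).mean(axis=1)
        summaries.append(pooled)
    out = np.zeros_like(X)
    for i in range(N):
        # Gather this token's ancestor summary at every tree level.
        ks = np.stack([summaries[lvl][i >> (lvl + 1)] for lvl in range(levels)])
        scores = ks @ X[i] / np.sqrt(d)   # (levels,) -- log N scores, not N
        w = np.exp(scores - scores.max())
        w /= w.sum()
        out[i] = w @ ks                   # attend over the log N summaries
    return out

X = np.random.default_rng(1).standard_normal((1024, 32))
Y = tree_attention(X)  # ~N * log2(N) score computations, not N^2
```

Mean pooling is a stand-in, of course; a real scheme would need learned summaries, which is exactly the "learn and memorise abstract concepts" part.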

When models are trained they learn abstract concepts, which they then retrieve almost effortlessly, but they don't do that same type of learning while in use. I presume that's because it requires a huge amount of compute, repetition, and time. If only they could do what I do: go to sleep for 8 hours a day, dream about the same events using local compute, and learn them. :D Maybe, one day, that will happen, but not any time soon.