Comment by KaiserPro
6 days ago
> Quite the opposite.
Unless something has dramatically changed, the model is stateless. The context cache needs to be injected before the new prompt, but from what I understand (and please do correct me if I'm wrong) the context cache isn't that big, on the order of a few tens of kilobytes. Plus the cache saves seconds of GPU time, so an extra 100ms of latency is nothing compared to a cache miss. So a broad cache is much, much better than a narrow local cache.
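A rough sketch of that tradeoff, with made-up numbers (the prefill throughput, prefix length and fetch latency are all assumptions, not measurements):

    # Cache miss = recompute the prefill on GPU; cache hit = fetch remote cache.
    # All figures below are illustrative assumptions.
    prefill_tokens = 8_000           # assumed cached prefix length (tokens)
    prefill_tok_per_s = 4_000        # assumed GPU prefill throughput (tokens/s)
    remote_fetch_latency_s = 0.100   # assumed extra latency to reach a remote cache

    recompute_s = prefill_tokens / prefill_tok_per_s
    print(f"cache miss (recompute prefill): {recompute_s:.2f}s")
    print(f"cache hit via remote fetch:     {remote_fetch_latency_s:.2f}s")

With those numbers the remote hit is still ~20x cheaper than recomputing, which is the point about breadth beating locality.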
But! Even if it's larger, your bottleneck isn't the network, it's waiting on the GPUs to be free [1]. So while having the cache really close, i.e. in the same rack or on the same machine, will give the best performance, it will limit your scale (because the cache is only effective for a small number of users).
[1] 100 MB of data shared over the same datacentre network every 2-3 seconds per node isn't that much, especially if you have a partitioned network (i.e. like AWS, where you have a block-storage network and a separate "network" network)
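To put a rough number on [1] (the 100 Gbps NIC is an assumption about a typical DC fabric):

    # Is 100 MB every 2-3 seconds per node a lot for a datacentre link?
    transfer_mb = 100
    interval_s = 2.5
    nic_gbps = 100   # assumed per-node link speed

    needed_gbps = transfer_mb * 8 / 1000 / interval_s
    print(f"needed: {needed_gbps:.2f} Gbps per node "
          f"({needed_gbps / nic_gbps:.1%} of a {nic_gbps} Gbps link)")

That works out to well under 1% of the link, so the network really isn't the constraint.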
KV cache for dense models is on the order of 50% of the parameters. For sparse MoE models it can be significantly smaller, I believe, but I don't think it is measured in kilobytes.
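For a sense of scale, the standard per-token estimate is 2 (K and V) x layers x kv_heads x head_dim x bytes per element; the model dimensions below are illustrative assumptions for a mid-size dense model, not any particular provider's numbers:

    # Per-token KV cache = 2 * layers * kv_heads * head_dim * bytes_per_element
    layers, kv_heads, head_dim = 32, 32, 128   # assumed model shape
    bytes_per_elem = 2                          # fp16/bf16

    per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem
    ctx = 32_000                                # assumed cached context length
    print(f"per token: {per_token/1024:.0f} KiB, "
          f"for {ctx} tokens: {per_token*ctx/2**30:.1f} GiB")

That gives roughly half a MiB per token and ~16 GiB for a 32k-token context, i.e. gigabytes rather than kilobytes (GQA and MLA-style tricks shrink this, but not down to KB).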