Comment by zozbot234
6 days ago
> Inference is (mostly) stateless. ... you just need to route mostly small amounts of data to a bunch of big machines.
I think this might just be the key insight. The key advantage of doing batched inference at huge scale is that once you maximize parallelism and sharding, your model parameters and the memory bandwidth associated with them are essentially free (since at any given moment they're being shared among a huge number of requests!); you "only" pay for the request-specific raw compute and for the memory storage and bandwidth of the activations. And the proprietary models are now huge, highly quantized, extreme-MoE models, where the former factor (model size) is enormous and the latter (request-specific compute) has been minimized accordingly - and where it hasn't, you're definitely paying "pro" pricing for it. I think this goes a long way towards explaining how inference at scale can work better than it does locally.
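A rough back-of-the-envelope in Python makes the amortization concrete. All the constants below are made-up, illustrative values, not measurements of any real deployment:

```python
# Sketch: per-request decode cost when weight streaming is shared by a batch.
# Every number here is an assumption, for illustration only.
WEIGHT_BYTES = 200e9        # assumed bytes of *active* (quantized) parameters
ACT_BYTES_PER_REQ = 2e9     # assumed per-request activation/KV-cache traffic
BANDWIDTH = 3e12            # assumed aggregate memory bandwidth (bytes/s)

def step_time(batch_size: int) -> float:
    """Memory-bound time for one decode step over the whole batch."""
    return (WEIGHT_BYTES + batch_size * ACT_BYTES_PER_REQ) / BANDWIDTH

for b in (1, 8, 64, 512):
    per_req_ms = step_time(b) / b * 1e3  # weight cost amortized over the batch
    print(f"batch={b:4d}  per-request step time ~ {per_req_ms:.2f} ms")
```

At batch size 1 the weight traffic dominates; at large batch sizes it nearly vanishes per request, which is exactly the "essentially free" effect described above.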
(There are "tricks" you could do locally to try to compete with this setup, such as storing model parameters on disk and accessing them via mmap, at least when doing token generation on the CPU. But of course you pay for that with increased latency, which you may or may not be okay with in that context.)
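For a concrete picture of that mmap trick, here is a minimal sketch with numpy; the file name, dtype, and shape are hypothetical placeholders, and real runtimes like llama.cpp do something similar internally:

```python
import numpy as np

# Minimal sketch: map the weight file instead of loading it, so the OS
# page cache decides which parts actually occupy RAM. "weights.bin",
# the dtype, and the shape are hypothetical placeholders.
W = np.memmap("weights.bin", dtype=np.float16, mode="r", shape=(4096, 4096))

x = np.ones(4096, dtype=np.float16)
y = x @ W   # touching W faults in only the pages this matmul actually reads
```

The first touch of a cold page costs a disk read, which is where the added latency comes from.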
> The key advantage of doing batched inference at huge scale is that once you maximize parallelism and sharding, your model parameters and the memory bandwidth associated with them are essentially free (since at any given moment they're being shared among a huge number of requests!)
Kind of unrelated, but this comment made me wonder when we'll start seeing side-channel attacks that cause one query's data to leak into another's.
I asked a colleague about this recently and he explained it away with a wave of the hand, saying, "different streams of tokens and their context are on different ranks of the matrices." And I kinda believed him, based on the diagrams I've seen on the Welch Labs YouTube channel.
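For what it's worth, the arithmetic part of that claim is easy to check: in a batched matmul each request occupies its own row of the batch dimension, so the math itself can't mix requests. A tiny numpy demonstration (which of course says nothing about timing or cache side channels, where real leaks would have to live):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))      # shared model weights
batch = rng.standard_normal((8, 64))   # 8 independent requests, one per row

full = batch @ W                       # batched forward pass
solo = batch[3:4] @ W                  # request 3 run entirely on its own
assert np.allclose(full[3], solo[0])   # identical: batching didn't mix rows
```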
On the other hand, I've learned that when I ask security questions of experts in a field (who are not experts in security), I almost always get convincing hand-waves, and they are almost always proven completely wrong.
Sigh.
mmap is not free. It just moves the bandwidth cost around - from RAM to disk I/O.
Using mmap for model parameters lets you run vastly larger models on any given amount of system RAM. It's especially worthwhile when you're running MoE models, where parameters for unused "experts" can simply be evicted from RAM, leaving room for more relevant data. The same applies more generally, e.g. to individual model layers.
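A hedged sketch of what that eviction could look like on Linux, using madvise(MADV_DONTNEED) to tell the kernel an idle expert's pages are reclaimable. The file name and expert size are hypothetical, and MADV_DONTNEED is Linux-specific:

```python
import mmap
import os

fd = os.open("weights.bin", os.O_RDONLY)  # hypothetical weight file
mm = mmap.mmap(fd, os.fstat(fd).st_size, prot=mmap.PROT_READ)

EXPERT_BYTES = 512 * 1024 * 1024  # assumed size of one expert (page-aligned)

def evict_expert(idx: int) -> None:
    """Advise the kernel that this expert's pages can be dropped from RAM."""
    mm.madvise(mmap.MADV_DONTNEED, idx * EXPERT_BYTES, EXPERT_BYTES)

evict_expert(7)  # e.g. the router hasn't picked expert 7 in a while
```

If the expert is routed to again later, its pages simply fault back in from the page cache or disk, trading latency for RAM exactly as described above.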