Comment by vlovich123

7 days ago

1. They have many machines to split the load over.
2. The MoE architecture lets them shard experts across different machines: one machine handles generating one token before the activations are shipped off to the machine holding the next expert for the next token. This cuts both the bandwidth and the VRAM needed on any single machine to roughly 1/N of the full model's.
3. They batch tokens from multiple users, so a single load of a given set of weights from memory serves many users' tokens at once (see the sketch below). This further reduces memory-bandwidth requirements significantly.
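
A minimal numpy sketch of how points 2 and 3 interact. All names and shapes (`d_model`, `n_experts`, `route`) are made up for illustration, not from any real system; in a real deployment each expert's weights would live on a different machine rather than in separate arrays.

```python
import numpy as np

d_model, n_experts = 64, 4
# Stand-in for sharded experts: in production each of these weight
# matrices would sit on its own machine.
experts = [np.random.randn(d_model, d_model) for _ in range(n_experts)]

def route(tokens):
    # Toy router: assign each token an expert at random (real MoE
    # routers use a learned gating network).
    return np.random.randint(n_experts, size=len(tokens))

def moe_forward(tokens):
    # tokens: (batch, d_model) -- the batch mixes many users' requests.
    out = np.empty_like(tokens)
    assignment = route(tokens)
    for e in range(n_experts):
        idx = np.where(assignment == e)[0]
        if len(idx) == 0:
            continue
        # Expert e's weights are loaded from memory once and reused
        # for every token routed to it, so the memory traffic is paid
        # once per batch rather than once per user -- that is the
        # bandwidth amortization batching buys you.
        out[idx] = tokens[idx] @ experts[e]
    return out

batch = np.random.randn(8, d_model)  # 8 tokens from different users
print(moe_forward(batch).shape)      # (8, 64)
```

With a batch of one, each weight load serves a single token and the whole trick disappears, which is why batching only pays off with concurrent queries.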

So basically the main tricks are batching (only relevant when you have > 1 query to process) and MoE sharding.