Comment by regularfry

1 day ago

This isn't quite right: it'll run with the full model loaded into RAM, swapping the experts in as it needs them. It has turned out in the past that experts can be stable across more than one token, so you're not swapping as much as you'd think. I don't know if that's been confirmed to still be true on recent MoEs, but I wouldn't be surprised.
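
To make that concrete, here's a toy sketch of the kind of swap-on-demand expert cache I'm describing: a handful of VRAM slots, an expert copied up from host RAM only on a miss, then reused for as long as the router keeps selecting it. Names and sizes are illustrative only, not any engine's actual code.

    // Toy expert cache: keep N experts resident in VRAM, copy one up from host RAM
    // only on a miss, and reuse it while the router keeps picking it.
    #include <cuda_runtime.h>
    #include <unordered_map>
    #include <vector>

    struct ExpertCache {
        size_t expert_bytes;
        std::vector<void*> slots;                // device buffers, one per resident expert
        std::vector<int> owner;                  // expert id in each slot, -1 if empty
        std::unordered_map<int, int> resident;   // expert id -> slot index
        int victim = 0;                          // trivial round-robin eviction

        ExpertCache(int n_slots, size_t bytes)
            : expert_bytes(bytes), slots(n_slots, nullptr), owner(n_slots, -1) {
            for (auto &p : slots) cudaMalloc(&p, expert_bytes);
        }

        // Device pointer to expert_id's weights; host_weights should ideally be pinned
        // so the copy really is asynchronous.
        void* get(int expert_id, const void* host_weights, cudaStream_t stream) {
            auto hit = resident.find(expert_id);
            if (hit != resident.end()) return slots[hit->second];  // stable expert: no transfer
            int s = victim;                                        // miss: evict and copy
            victim = (victim + 1) % (int)slots.size();
            if (owner[s] >= 0) resident.erase(owner[s]);
            cudaMemcpyAsync(slots[s], host_weights, expert_bytes,
                            cudaMemcpyHostToDevice, stream);
            owner[s] = expert_id;
            resident[expert_id] = s;
            return slots[s];
        }
    };

The point of the "stable across tokens" observation is that the hit path above costs nothing, so the effective transfer traffic per token is much lower than "all selected experts, every token".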

Also, though nobody has put the work in yet, the GH200 and GB200 (the NVIDIA "superchips") support exposing their full LPDDR5X and HBM3 as UVM (unified virtual memory), with much more memory bandwidth between LPDDR5X and HBM3 than a typical "instance" gets over PCIe. UVM can handle "movement" in the background and would be absolutely killer for these MoE architectures, but none of the popular inference engines actually allocate memory correctly for these architectures (cudaMallocManaged()), or let UVM (CUDA) actually handle the movement of data for them (automatic page migration and dynamic data movement), or are architected to avoid the pitfalls of this environment (e.g. being aware of the implications of CUDA graphs when using UVM).
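
For the curious, the allocation pattern I mean looks roughly like this (a sketch under my own assumptions; the sizes, names, and prefetch policy are placeholders, not anything a real engine does today):

    // Allocate the expert weights once with cudaMallocManaged so the CUDA driver
    // migrates pages between CPU LPDDR5X and GPU HBM3 on demand, rather than the
    // engine doing explicit cudaMemcpy swaps. All sizes below are placeholders.
    #include <cuda_runtime.h>

    int main() {
        int device = 0;
        cudaStream_t stream;
        cudaStreamCreate(&stream);

        size_t total_bytes = 1ull << 30;            // stand-in for "all experts"; may exceed HBM3
        float *experts = nullptr;
        cudaMallocManaged(&experts, total_bytes);   // one allocation, no explicit copies

        // Hints: weights are read-mostly, and cold experts should stay in CPU memory.
        cudaMemAdvise(experts, total_bytes, cudaMemAdviseSetReadMostly, device);
        cudaMemAdvise(experts, total_bytes, cudaMemAdviseSetPreferredLocation, cudaCpuDeviceId);

        // When the router picks experts for the next token, prefetch just that slice;
        // anything not prefetched page-faults over NVLink-C2C as the kernels touch it.
        size_t active_offset = 0, active_bytes = 64u << 20;    // placeholder slice
        cudaMemPrefetchAsync(reinterpret_cast<char *>(experts) + active_offset,
                             active_bytes, device, stream);

        cudaStreamSynchronize(stream);
        cudaFree(experts);
        cudaStreamDestroy(stream);
        return 0;
    }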

It's really not that much code, though, and all the actual capabilities have been there since about the middle of this year. I think someone will make this work and it will be a huge efficiency win for the right model/workflow combinations (effectively, being able to run 1T-parameter MoE models on a GB200 NVL4 at "full speed" if your workload has the right characteristics).

What you are describing would be uselessly slow and nobody does that.

  • I don't load all the MoE layers onto my GPU, and I see only about a 15% reduction in token generation speed while running a model 2-3 times larger than my VRAM alone could hold.

    • The slowdown is far more than 15% for token generation. Token generation is mostly bottlenecked by memory bandwidth: dual-channel DDR5-6000 has 96 GB/s, while an RTX 5090 has 1.8 TB/s. See my other comment, where I show a 5x slowdown in token generation from moving just the experts to the CPU, along with the back-of-envelope arithmetic behind these bandwidth numbers.


  • llama.cpp has built-in support for doing this, and it works quite well. Lots of people running LLMs on limited local hardware use it.

    • llama.cpp has support for running some or all of the layers on the CPU. It does not swap them into the GPU as needed.

  • I run the 30B Qwen3 on my 8GB Nvidia GPU and get a shockingly high tok/s.

    • For contrast, I get the following for an RTX 5090 and Qwen3 Coder 30B quantized to ~4 bits:

      - Prompt processing 65k tokens: 4818 tokens/s

      - Token generation 8k tokens: 221 tokens/s

      If I offload just the experts to run on the CPU I get:

      - Prompt processing 65k tokens: 3039 tokens/s

      - Token generation 8k tokens: 42.85 tokens/s

      As you can see, token generation is over 5x slower. This is only using ~5.5GB VRAM, so the token generation could be sped up a small amount by moving a few of the experts onto the GPU.
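
      A rough back-of-envelope check on those numbers (assuming ~3B active parameters per token for the 30B MoE, ~4 bits per weight, and that all of those bytes stream from one memory pool; real throughput lands below these ceilings because of work beyond streaming weights):

          // Bandwidth ceilings only, not measurements.
          #include <cstdio>

          int main() {
              double active_params   = 3e9;    // assumed active params per token (30B MoE)
              double bytes_per_param = 0.5;    // ~4-bit quantization
              double bytes_per_token = active_params * bytes_per_param;   // ~1.5 GB

              double ddr5_bw  = 96e9;          // dual-channel DDR5-6000
              double gddr7_bw = 1.8e12;        // RTX 5090, as quoted above

              printf("CPU-side ceiling: ~%.0f tok/s\n", ddr5_bw  / bytes_per_token);  // ~64
              printf("GPU-side ceiling: ~%.0f tok/s\n", gddr7_bw / bytes_per_token);  // ~1200
              return 0;
          }

      The 42.85 tok/s above sits just under that ~64 tok/s CPU-side ceiling, which is why the gap to the GPU-only run is so large.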