
Comment by EnPissant

1 day ago

What you are describing would be uselessly slow and nobody does that.

I don't load all the MoE layers onto my GPU, and I see only about a 15% reduction in token generation speed while running a model 2-3 times larger than my VRAM alone could hold.

  • The slowdown is far more than 15% for token generation. Token generation is mostly bottlenecked by memory bandwidth: dual-channel DDR5-6000 has 96 GB/s, while an RTX 5090 has 1.8 TB/s (a quick back-of-the-envelope on that gap is sketched just after this sub-thread). See my other comment where I show a 5x slowdown in token generation from moving just the experts to the CPU.

    • I suggest figuring out what your configuration problem is.

      Which llama.cpp flags are you using? I am absolutely not having the same bug you are.

      1 reply →
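
    To make that bandwidth gap concrete, here is a quick back-of-the-envelope. The 96 GB/s and 1.8 TB/s figures are the ones quoted above; the DDR5 channel math is the standard calculation, but treat this as an illustration rather than a measurement.

    ```python
    # Peak memory bandwidth behind the "far more than 15%" claim.
    # Dual-channel DDR5-6000: 2 channels x 8 bytes per transfer x 6.0 GT/s.
    ddr5_6000_dual = 2 * 8 * 6.0   # = 96 GB/s of system RAM bandwidth
    rtx_5090 = 1800.0              # ~1.8 TB/s of GDDR7 bandwidth (spec figure)

    print(f"System RAM : {ddr5_6000_dual:.0f} GB/s")
    print(f"RTX 5090   : {rtx_5090:.0f} GB/s")
    print(f"Gap        : {rtx_5090 / ddr5_6000_dual:.0f}x")  # ~19x
    ```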

I do it with gpt-oss-120B on 24 GB VRAM.

  • You don't. You run some of the layers on the CPU.

    • You're right that I was confused about that.

      LM Studio defaults to 12/36 layers on the GPU for that model on my machine, but you can crank it to all 36 on the GPU. That does slow it down, but I'm not finding it unusable, and it seems to have some advantages - though I doubt I'm going to run it this way.

      7 replies →

llama.cpp has built-in support for doing this, and it works quite well. Lots of people running LLMs on limited local hardware use it.

  • llama.cpp has support for running some or all of the layers on the CPU. It does not swap them into the GPU as needed; a sketch of the kind of invocation people use is below.
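
    For reference, the usual pattern (not the exact command from this thread; the model file, layer count, and regex below are placeholders) is `-ngl` to fix how many layers sit on the GPU and `--override-tensor` / `-ot` to pin the MoE expert tensors to system RAM:

    ```python
    # Sketch of a llama.cpp launch with a static CPU/GPU split (illustrative values).
    # -ngl decides once, at load time, how many layers live on the GPU; -ot pins
    # tensors matching a regex to a backend, which is how people keep the MoE
    # expert weights in system RAM. Nothing is paged between devices afterwards.
    import subprocess

    cmd = [
        "llama-server",
        "-m", "qwen3-coder-30b-a3b-q4_k_m.gguf",  # placeholder model path
        "-ngl", "99",                             # put all layers on the GPU...
        "-ot", r"\.ffn_.*_exps\.=CPU",            # ...except the expert FFN tensors
        "-c", "65536",                            # 65k context, as in the numbers below
    ]
    subprocess.run(cmd, check=True)
    ```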

I run the 30B Qwen3 on my 8GB Nvidia GPU and get a shockingly high tok/s.

  • For contrast, I get the following for an RTX 5090 and Qwen3 Coder 30B quantized to ~4 bits:

    - Prompt processing 65k tokens: 4818 tokens/s

    - Token generation 8k tokens: 221 tokens/s

    If I offload just the experts to run on the CPU I get:

    - Prompt processing 65k tokens: 3039 tokens/s

    - Token generation 8k tokens: 42.85 tokens/s

    As you can see, token generation is over 5x slower. This is only using ~5.5GB VRAM, so the token generation could be sped up a small amount by moving a few of the experts onto the GPU.
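
    As a rough sanity check on those numbers, assume generation is bound by how fast the active weights stream from memory, roughly 3B active parameters per token for the A3B MoE, and about 4.5 bits per weight for a ~4-bit quant (the last two figures are assumptions, not from the comments above):

    ```python
    # Back-of-the-envelope token-generation ceilings from memory bandwidth alone.
    ram_gbps = 96.0      # dual-channel DDR5-6000, quoted upthread
    vram_gbps = 1800.0   # RTX 5090, quoted upthread

    active_params = 3.0e9        # "A3B" ~ 3B parameters active per token (assumption)
    bytes_per_weight = 4.5 / 8   # ~4-bit quant plus overhead (assumption)
    gb_per_token = active_params * bytes_per_weight / 1e9  # ~1.7 GB read per token

    print(f"RAM-bound ceiling : {ram_gbps / gb_per_token:.0f} tok/s")   # ~57
    print(f"VRAM-bound ceiling: {vram_gbps / gb_per_token:.0f} tok/s")  # ~1067
    ```

    The measured 42.85 tok/s with the experts on the CPU sits just under that RAM-bandwidth ceiling, which fits the bandwidth-bound explanation; the 221 tok/s all-GPU figure is limited by other factors long before VRAM bandwidth is.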