
Comment by EnPissant

17 hours ago

The slowdown is far more than 15% for token generation. Token generation is mostly bottlenecked by memory bandwidth: dual-channel DDR5-6000 has 96 GB/s, while an RTX 5090 has 1.8 TB/s. See my other comment, where I show a 5x slowdown in token generation from moving just the experts to the CPU.
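The bandwidth argument can be sanity-checked with a back-of-envelope calculation. This sketch uses a hypothetical MoE model size (the 20 GB of active weights per token is an illustrative assumption, not a figure from this thread); only the two bandwidth numbers come from the comment above:

```python
GB = 1e9

def max_tokens_per_sec(active_bytes_per_token, bandwidth_bytes_per_sec):
    """Upper bound on decode speed when streaming the active weights
    once per generated token is the bottleneck."""
    return bandwidth_bytes_per_sec / active_bytes_per_token

# Hypothetical model: 20 GB of weights touched per generated token.
active = 20 * GB

cpu = max_tokens_per_sec(active, 96 * GB)     # dual-channel DDR5-6000
gpu = max_tokens_per_sec(active, 1800 * GB)   # RTX 5090 VRAM

print(f"CPU bound: {cpu:.1f} tok/s, GPU bound: {gpu:.1f} tok/s, "
      f"ratio: {gpu / cpu:.1f}x")
```

The raw bandwidth ratio is ~18.75x; the observed slowdown is smaller than that (5x in the benchmark referenced above) because only the expert weights move to system RAM while the rest stays in VRAM.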

I suggest figuring out what your configuration problem is.

Which llama.cpp flags are you using? I am absolutely not having the same bug you are.

  • It's not a bug. It's the reality of token generation, which is bottlenecked by memory bandwidth.

    Please publish your own benchmarks proving me wrong.