Comment by DiabloD3
1 day ago
I suggest figuring out what your configuration problem is.
Which llama.cpp flags are you using, because I am absolutely not having the same bug you are.
Reply, 1 day ago
> I suggest figuring out what your configuration problem is.
> Which llama.cpp flags are you using, because I am absolutely not having the same bug you are.
It's not a bug; it's the reality of token generation, which is bottlenecked by memory bandwidth.
Please publish your own benchmarks proving me wrong.
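A rough back-of-envelope sketch of the bandwidth argument: during decode, every generated token has to stream the full set of active weights from RAM/VRAM once, so the ceiling on tokens per second is roughly bandwidth divided by model size in bytes. The numbers below are illustrative assumptions (a typical dual-channel DDR5 desktop and a ~4-bit-quantized 7B model), not measurements from either side of this thread:

```python
# Upper bound on decode speed when memory bandwidth is the bottleneck.
# Each token requires one full pass over the weights, so:
#   tokens/sec <= bandwidth / bytes_per_token_read
# Numbers are assumed for illustration, not benchmarked.

bandwidth_gb_s = 50.0   # assumed: dual-channel DDR5 desktop memory
model_size_gb = 4.0     # assumed: ~7B parameters at ~4-bit quantization

tokens_per_sec = bandwidth_gb_s / model_size_gb
print(f"~{tokens_per_sec:.1f} tok/s theoretical ceiling")  # ~12.5 tok/s
```

If two setups report very different tok/s on the same model, plugging their actual bandwidth and quantization into this estimate is a quick way to tell a configuration problem apart from a hardware limit.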