Comment by DiabloD3
2 days ago
I suggest figuring out what your configuration problem is.
Which llama.cpp flags are you using, because I am absolutely not having the same bug you are.
2 days ago

> I suggest figuring out what your configuration problem is.
> Which llama.cpp flags are you using, because I am absolutely not having the same bug you are.
It's not a bug; it's the reality of token generation, which is bottlenecked by memory bandwidth.
Please publish your own benchmarks proving me wrong.
I cannot reproduce your bug on AMD. I'm going to have to conclude this is a vendor issue.
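The memory-bandwidth claim above can be sanity-checked with a back-of-the-envelope calculation: during single-stream decoding, every generated token has to stream the full set of model weights from memory, so bandwidth divided by model size gives a rough ceiling on tokens per second. The numbers in this sketch (bandwidth, parameter count, quantization width) are illustrative assumptions, not measurements from either commenter's system:

```python
# Rough upper bound on decode speed:
#   tokens/s ≈ memory_bandwidth / bytes_read_per_token
# where bytes_read_per_token is the full weight footprint,
# since the weights are streamed once per generated token.

def max_tokens_per_sec(bandwidth_gb_s: float,
                       params_billions: float,
                       bytes_per_param: float) -> float:
    """Theoretical ceiling on single-stream token generation."""
    bytes_per_token = params_billions * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / bytes_per_token

# Hypothetical example: a 7B model at ~4-bit quantization (~0.5 bytes/param)
# on dual-channel DDR5 with ~80 GB/s of usable bandwidth.
print(round(max_tokens_per_sec(80, 7, 0.5), 1))  # ~22.9 tokens/s ceiling
```

Measured throughput well below this ceiling points at a configuration problem rather than the bandwidth limit itself, which is what the flags question is probing.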