
Comment by 0xbadcafebee

10 hours ago

Couple thoughts:

- The t/s estimation per machine is off. Some of these models generate at twice the listed speed (I just checked on a couple of Macs and an AMD laptop). Maybe there's no way around that, but some sort of sliding scale would be better than a single number.

- Ollama vs llama.cpp vs others produce different results. I can run gpt-oss 20b with Ollama on a 16GB Mac, but it fails with "out of memory" on the latest llama.cpp (regardless of param tuning, using their mxfp4). OTOH, when llama.cpp does work, you can usually tweak it to be faster, if you learn the secret arts (like offloading only specific MoE tensors). So the t/s rating is even more subjective than just the hardware.
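For anyone curious about the "secret arts" bit: llama.cpp has an `--override-tensor` (`-ot`) flag that maps tensors matching a regex to a specific backend, which people use to keep attention layers on the GPU while pushing the big MoE expert tensors to CPU RAM. A rough sketch (the model path, tensor-name pattern, and exact speedup are assumptions; tensor names vary by model, so check them with `gguf-dump` first):

```shell
# Sketch only: offload MoE expert FFN tensors to CPU, keep the rest on GPU.
# "-ngl 99" asks for all layers on GPU; the -ot override then carves out
# tensors whose names match the regex and pins them to CPU instead.
./llama-server \
  -m ./gpt-oss-20b-mxfp4.gguf \
  -ngl 99 \
  -ot "ffn_.*_exps=CPU"
```

Whether this helps depends on the model and how much VRAM you actually have; it's a knob Ollama doesn't expose, which is part of why the two runtimes behave so differently on the same box.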

- It's great that they list speed and size per quant, but that needs to be a filter for the main list. A model might hit "16 t/s" at Q4, but if it's a small model you need a higher quant (Q5/Q6/Q8) to not lose quality, so the advertised t/s should be one of those.
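The quant choice isn't a small difference, either. A quick back-of-envelope (bits-per-weight figures are approximate averages for common llama.cpp quant mixes, not exact; real GGUF files vary by model):

```python
# Rough GGUF size estimate: params * bits-per-weight / 8.
# BPW values are approximate averages (assumption), since K-quants
# mix precisions across tensor types.
BPW = {"Q4_K_M": 4.8, "Q5_K_M": 5.7, "Q6_K": 6.6, "Q8_0": 8.5}

def est_size_gb(params_b: float, quant: str) -> float:
    """Estimated file size in GB for a model with params_b billion weights."""
    return params_b * BPW[quant] / 8

for q, bpw in BPW.items():
    print(f"{q}: ~{est_size_gb(20, q):.1f} GB")
```

For a 20B model that's roughly 12 GB at Q4 vs over 21 GB at Q8, which is the difference between fitting in a 16GB machine and not, and memory pressure drags t/s down with it.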

- Why is there an initial section that's all "performs poorly", while the "all models" list below it shows a ton of models that perform well?