Comment by labcomputer
6 days ago
There were some benchmarks a few years ago from, IIRC, the people behind either llama.cpp or Ollama (I forget which).
The basic rule of thumb is that, for a fixed memory budget, more parameters at lower precision almost always wins, with returns diminishing sharply once quantization drops to around 2-3 bits per parameter. This is purely about model quality, not inference speed.
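A quick back-of-the-envelope sketch of why this matters (illustrative numbers only, not from those benchmarks): weight memory scales linearly with both parameter count and bits per parameter, so a larger model quantized harder can fit the same budget as a smaller model at full precision.

```python
def weight_memory_gb(n_params_billion: float, bits_per_param: float) -> float:
    """Approximate weight memory in GB: params * bits / 8.
    Ignores KV cache, activations, and runtime overhead."""
    return n_params_billion * 1e9 * bits_per_param / 8 / 1e9

# Hypothetical configurations with roughly comparable memory footprints
configs = [
    ("7B  @ 16-bit", 7, 16),
    ("13B @  8-bit", 13, 8),
    ("30B @  4-bit", 30, 4),
    ("70B @  2-bit", 70, 2),  # near the 2-3 bit floor where quality falls off
]

for name, params_b, bits in configs:
    print(f"{name}  ~= {weight_memory_gb(params_b, bits):5.1f} GB")
```

Per the rule of thumb, the 30B model at 4 bits would be expected to outperform the 7B model at 16 bits in a similar footprint, while the 2-bit end of the table is where that advantage starts to evaporate.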