Comment by mijoharas
6 hours ago
> llama.cpp is designed to rapidly adopt research-level optimisations and features, but the downside is that reported speeds change all the time
Ironic that (according to the article) ollama rushed to implement GPT-OSS support and, in doing so, broke the rest of the GGUF quants (if I understand correctly).