Comment by samus
3 months ago
The llama.cpp maintainers working on Qwen3-Next support are likewise not enthused by LLM output. They had to go over everything and fix it up.
https://github.com/ggml-org/llama.cpp/pull/16095#issuecommen...