Comment by yorwba
12 hours ago
It's a quantization of Qwen3-8B that's good per byte but not in absolute terms: comparable in accuracy to Qwen3.5-4B at 4-bit quantization, which makes the 4B model the larger one in terms of storage, though its lower parameter count and hybrid attention give it a speed advantage if you're not bottlenecked on memory bandwidth for the model weights.
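The storage comparison is just bits-per-parameter arithmetic. A minimal sketch, where the ~1.8 bits/param figure for the 8B quant is purely an illustrative assumption (the actual bit-width isn't stated above); the point is that any sub-2-bit quant of an 8B model undercuts a 4-bit quant of a 4B model on disk:

```python
def weight_storage_gb(num_params: float, bits_per_param: float) -> float:
    """Approximate weight storage in GB: params * bits / 8 bits-per-byte / 1e9."""
    return num_params * bits_per_param / 8 / 1e9

# Assumed bit-widths for illustration only.
qwen3_8b = weight_storage_gb(8e9, 1.8)   # hypothetical sub-2-bit quant -> 1.8 GB
qwen35_4b = weight_storage_gb(4e9, 4.0)  # 4-bit quant of the 4B model -> 2.0 GB

print(f"8B quant:  {qwen3_8b:.1f} GB")
print(f"4B @ 4bit: {qwen35_4b:.1f} GB")
```

Under these assumptions the 4B model is indeed larger on disk, even though it has half the parameters.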