Comment by bityard
9 hours ago
Hi Daniel, I've been using some of your models on my Framework Desktop at home. Thanks for all that you do.
Asking from a place of pure ignorance here, because I don't see the answer on HF or in your docs: Why would I (or anyone) want to run this instead of Qwen3's own GGUFs?
Thanks!

Oh, Qwen3's own GGUFs also work, but ours are dynamically quantized and calibrated on a reasonably large, diverse dataset, whilst Qwen's are not - see https://unsloth.ai/docs/basics/unsloth-dynamic-2.0-ggufs
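To give a rough idea of what the "calibrated" part means, here's a toy sketch (illustrative only, not our actual pipeline - `model`, `quantized_model`, `layer_names`, `layer_output` and `calibration_prompts` are placeholder names):

```python
import numpy as np

# Illustrative sketch only (placeholder names, not a real API):
# "calibration" = run a diverse set of prompts through the model and measure
# how much each layer's output degrades once its weights are quantized.
def layer_sensitivity(model, quantized_model, calibration_prompts):
    errors = {}
    for name in model.layer_names():                          # placeholder helper
        orig = model.layer_output(name, calibration_prompts)
        quant = quantized_model.layer_output(name, calibration_prompts)
        errors[name] = float(np.mean((orig - quant) ** 2))    # mean squared error
    return errors  # layers with higher error get more bits later on
```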
I've read that page before and although it all certainly sounds very impressive, I'm not an AI researcher. What's the actual goal of dynamic quantization? Does it make the model more accurate? Faster? Smaller?
More accurate and smaller.
quantization = the process of shrinking the model's weights down to fewer bits (it's lossy)
dynamic = being smarter about where that information loss happens, so less of it is lost overall
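Roughly, in code form (a toy sketch of the idea, not the real llama.cpp/Unsloth implementation - the bit-widths and threshold are made up for illustration):

```python
import numpy as np

# Plain quantization: round every weight to one of 2**bits levels (lossy).
def quantize(weights, bits):
    levels = 2 ** bits - 1
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / levels or 1e-12     # avoid divide-by-zero on constant tensors
    codes = np.round((weights - w_min) / scale)   # small integer codes stored on disk
    return codes * scale + w_min                  # dequantized weights the model actually uses

# "Dynamic": instead of one bit-width everywhere, spend more bits on layers the
# calibration step showed to be sensitive, and fewer on the tolerant ones.
def pick_bits(sensitivity, threshold=0.05, low=2, high=6):
    return high if sensitivity > threshold else low
```

Same overall size budget, but the bits go where they matter most, which is why the result ends up both small and more accurate than quantizing everything uniformly.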