
Comment by vunderba

4 days ago

So in my experience smaller models tend to produce worse results, BUT I actually got really good transcription cleanup with CoT (Chain-of-Thought) models like Qwen, even quantized down to 8b.

I think the 8B+ question was about parameter count (8 billion+ parameters), not quantization level (8 bits per weight).

  • Yeah, I should have been more specific - Qwen 8B at a Q5_K_M quant worked very well.
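
  For reference, a minimal sketch of what that kind of cleanup call can look like against a locally served quantized model. This assumes an OpenAI-compatible endpoint (e.g. a llama.cpp server or Ollama) on localhost; the URL, model tag, and prompt are illustrative, not from the original comments.

  ```python
  # Minimal sketch: clean up a raw speech transcript with a locally served quantized model.
  # Assumes an OpenAI-compatible endpoint (e.g. llama.cpp server or Ollama) on localhost.
  # The model tag, URL, and prompt are illustrative assumptions.
  import requests

  RAW_TRANSCRIPT = (
      "so um what we wanna do is uh take the the raw audio and and run it through "
      "the model right and then uh clean up the you know the filler words"
  )

  payload = {
      "model": "qwen3:8b-q5_K_M",  # hypothetical tag for a Q5_K_M quant of an 8B Qwen model
      "messages": [
          {
              "role": "system",
              "content": (
                  "Clean up this speech transcript: remove filler words, fix punctuation "
                  "and capitalization, but do not change the meaning."
              ),
          },
          {"role": "user", "content": RAW_TRANSCRIPT},
      ],
      "temperature": 0.2,  # low temperature keeps the cleanup conservative
  }

  resp = requests.post(
      "http://localhost:11434/v1/chat/completions", json=payload, timeout=120
  )
  resp.raise_for_status()
  print(resp.json()["choices"][0]["message"]["content"])
  ```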