Comment by gpm

19 hours ago

Huh, cool. I guess that makes a lot of sense with all the success the quantization people have been having.

So am I misunderstanding "Tensor type F32 · I32 · BF16" or is it just tagged wrong?

The MoE expert weights are quantized to int4; all other weights, like the shared expert weights, are excluded from quantization and kept in bf16.
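
If the I32 in the tensor-type listing is the confusing part, one plausible reading (an assumption on my part, not something the model card confirms) is GPTQ/AWQ-style storage: eight 4-bit values get packed into each 32-bit word, so the int4 expert weights show up as I32 tensors, while the quantization scales are stored as separate float tensors. A minimal sketch of that packing, with illustrative names:

```python
import torch

def pack_int4(q: torch.Tensor) -> torch.Tensor:
    """Pack unsigned 4-bit values (0..15, one per element) into int32 words,
    eight nibbles per word -- the packed tensor's dtype is what lists as I32."""
    assert q.numel() % 8 == 0
    q = q.to(torch.int32).reshape(-1, 8)
    packed = torch.zeros(q.shape[0], dtype=torch.int32)
    for i in range(8):
        packed |= q[:, i] << (4 * i)  # nibble i goes into bits 4i..4i+3
    return packed

# Toy per-channel symmetric int4 quantization of one expert weight matrix.
w = torch.randn(4, 16)                         # stand-in for an expert weight
scale = w.abs().amax(dim=1, keepdim=True) / 7  # one F32 scale per output row
q = (w / scale).round().clamp(-8, 7)           # int4 range [-8, 7]
packed = pack_int4((q + 8).to(torch.uint8))    # shift to unsigned, then pack
print(packed.dtype, scale.dtype)               # torch.int32 torch.float32
```

Under that reading, "F32 · I32 · BF16" decomposes as quantization scales, packed int4 expert weights, and everything left unquantized, so the tagging would be accurate rather than wrong.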