Comment by nodja
2 days ago
> As things have shifted more towards mass consumption of model weights it's become less and less common to see.
Not the real reason. The real reason is that training has moved to FP16/BF16 over the years as NVIDIA made those formats more efficient in their hardware, which is also why you're starting to see some models released in 8-bit formats (e.g. DeepSeek).
Of course people can always quantize the weights to smaller sizes, but the master versions of the weights are usually 16-bit.
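As a minimal sketch of what "quantize the weights to smaller sizes" means in practice (assuming PyTorch; real releases use more elaborate schemes such as per-channel or per-group scales, GPTQ/AWQ, or FP8 rather than this simple per-tensor int8 example):

```python
import torch

def quantize_int8(w: torch.Tensor):
    # Symmetric per-tensor int8 quantization of a BF16/FP16 weight tensor.
    scale = w.abs().max().float() / 127.0                       # map largest magnitude to 127
    q = torch.clamp((w.float() / scale).round(), -127, 127).to(torch.int8)
    return q, scale                                             # ship int8 values plus the scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Reconstruct approximate BF16 weights from the quantized copy.
    return (q.float() * scale).to(torch.bfloat16)

# Example: a hypothetical "master" BF16 weight matrix quantized for distribution.
w = torch.randn(4096, 4096, dtype=torch.bfloat16)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
print((w.float() - w_hat.float()).abs().max())                  # worst-case quantization error
```

The int8 copy is roughly half the size of the BF16 master, which is the trade-off people accept when they quantize released weights.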