Comment by est
2 days ago
> quantized a 100B parameter model to 1 trit
I had the same question. After some back-and-forth with ChatGPT: this isn't the post-training quantization we often see these days; you have to use 1-trit (ternary) weights from the very beginning of pre-training, so the model learns under that constraint.
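For the curious, here is a minimal sketch of what "ternary from pre-training" can look like in practice: quantization-aware training with a straight-through estimator, roughly in the spirit of BitNet b1.58's {-1, 0, +1} weights. The absmean scaling and the `TernaryLinear` layer below are illustrative assumptions, not the paper's exact recipe.

```python
import torch

def ternary_quantize(w: torch.Tensor) -> torch.Tensor:
    # Round weights to {-1, 0, +1}, scaled by their mean absolute
    # value (an absmean-style scheme; assumed for illustration).
    scale = w.abs().mean().clamp(min=1e-8)
    q = (w / scale).round().clamp(-1, 1)
    return q * scale

class TernaryLinear(torch.nn.Linear):
    # Quantization-aware forward pass: the layer computes with ternary
    # weights, but gradients flow to the full-precision master weights
    # via a straight-through estimator (w + (q - w).detach()).
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w_q = self.weight + (ternary_quantize(self.weight) - self.weight).detach()
        return torch.nn.functional.linear(x, w_q, self.bias)
```

The point is that the quantizer sits inside the training loop from step one, rather than being applied to an already-trained full-precision checkpoint.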