Comment by sroussey

1 day ago

True, but their research did include running locally on a 5080.

The big takeaway, in my opinion, is that their technique for LUTs etc. could also be applied to lossy quants. Say you get 5-bit accuracy in the size of 4-bit?

I don’t know, but maybe? Also, their two-stage design might make current quantized kernel designs better.

Yes, it could be stacked on quants. It might be that quantized values are already more "dense" and so can't be compressed as much (the way bfloat16 goes from 16 to ~11 bits), but it's certainly possible.
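A quick sketch of why bfloat16 is so compressible (my own illustration, not the paper's code): if you measure the Shannon entropy of the two bytes of bfloat16 weights drawn from a roughly normal distribution (an assumption about trained-model weights), the sign/exponent byte carries far fewer than 8 bits of information, while the mantissa byte is close to incompressible:

```python
import numpy as np

# Assumption: model weights are roughly normal with small scale,
# as is typical for trained networks. Not from the paper itself.
rng = np.random.default_rng(0)
w = rng.normal(0, 0.02, size=1_000_000).astype(np.float32)

# bfloat16 is just the top 16 bits of float32. The high byte of
# those 16 bits holds the sign bit plus 7 of the 8 exponent bits;
# the low byte is mostly mantissa.
bits16 = (w.view(np.uint32) >> 16).astype(np.uint16)
high = (bits16 >> 8).astype(np.uint8)   # sign + exponent-heavy byte
low = (bits16 & 0xFF).astype(np.uint8)  # mantissa-heavy byte

def entropy_bits(x):
    """Empirical Shannon entropy of a byte stream, in bits/byte."""
    counts = np.bincount(x, minlength=256)
    p = counts[counts > 0] / x.size
    return float(-(p * np.log2(p)).sum())

h_high, h_low = entropy_bits(high), entropy_bits(low)
print(f"sign/exponent byte: {h_high:.2f} bits")
print(f"mantissa byte:      {h_low:.2f} bits")
print(f"total entropy:      {h_high + h_low:.2f} of 16 bits")
```

The exponents cluster in a narrow range, so the high byte compresses heavily while the mantissa byte stays near 8 bits, which is consistent with the ~11-bit figure. A 4- or 5-bit quant has already squeezed out that exponent redundancy, so there's less left to exploit.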

  • I read it similarly: this is a specific attribute of bfloat16, so the quants folks tend to run on local hardware don't have the same inefficiency to exploit.