Comment by radarsat1

3 days ago

I'm curious if 1-bit params can be compared to 4- or 8-bit params. I imagine that 100B is equivalent to something like a 30B model? I guess only evals can say. Still, being able to run a 30B model at good speed on a CPU would be amazing.

At some point you hit information limits. With conventional quantisation you see a marked capability fall-off below q5. A ternary parameter carries log2(3) ≈ 1.58 bits, so all else being equal you'd expect an N-parameter 5-bit quant to be roughly comparable to a ~3N-parameter ternary model, if they are trained to the same level, purely in terms of the amount of information they can possibly hold. So yes, 100B ternary would be in the ballpark of a 30B q5 conventional model, with a lot of hand-waving and sufficiently-smart-training.
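The back-of-the-envelope capacity math above can be sketched in a few lines (a rough illustration only, assuming a ternary parameter carries log2(3) bits and a q5 parameter carries 5 bits):

```python
import math

# Information-capacity comparison, with all the hand-waving noted above:
# a ternary parameter holds log2(3) ≈ 1.58 bits, a q5 parameter 5 bits.
ternary_bits_per_param = math.log2(3)
q5_bits_per_param = 5

ternary_params = 100e9  # a 100B ternary model
total_bits = ternary_params * ternary_bits_per_param

# How many q5 parameters would hold the same number of bits?
equivalent_q5_params = total_bits / q5_bits_per_param
print(f"{equivalent_q5_params / 1e9:.1f}B")  # ≈ 31.7B
```

Which is where the "roughly 3N" ratio comes from: 5 / log2(3) ≈ 3.15.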

  • I assume that, theoretically, 1-bit models could be the most efficient, since modern models have already moved from 32-bit to 16-bit to 8-bit per parameter (natively, without quantization).

    • It's not clear where the efficiency frontier actually is. We're good at measuring size and we're good at measuring FLOPS, but we're really not very good at measuring capability. Because of that, we don't really know yet whether training natively at 1 bit per parameter can do meaningfully better than what we currently get by quantising down to that size. Probably, is the answer, but it's going to be a while before anyone working at 1 bit per param has sunk as many FLOPS into it as the frontier labs have at higher bit counts.
