Comment by cjbprime

1 year ago

> I ran all the open models (anything not from OpenAI, meaning anything that doesn’t start with gpt or o1) myself using Q5_K_M quantization, whatever that is.

It's just a lossy compression of all of the parameters, probably not important, right?
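For intuition, here is a toy sketch of the core idea behind 5-bit block quantization: round float weights to a small integer grid with a per-block scale, then multiply back on load. The actual Q5_K_M "K-quant" layout in llama.cpp is considerably more elaborate (superblocks, per-subblock scales and mins), so treat this as an illustration of why it's lossy, not a description of the real format.

```python
import numpy as np

# Toy 5-bit quantization of one block of weights (32 levels).
# NOT the real Q5_K_M layout -- just the lossy round-trip idea.
rng = np.random.default_rng(0)
weights = rng.normal(0, 0.02, size=256).astype(np.float32)

# One scale per block maps floats onto the signed 5-bit range [-16, 15].
scale = np.abs(weights).max() / 15
q = np.clip(np.round(weights / scale), -16, 15).astype(np.int8)

# Dequantize: what the model actually computes with at inference time.
restored = q.astype(np.float32) * scale

# Each weight is off by at most half a quantization step (scale / 2),
# so the error is small per parameter but nonzero everywhere.
max_err = float(np.abs(weights - restored).max())
```

Whether that accumulated per-weight error matters for benchmark scores is exactly the question being debated here.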

It's probably important when competing against the unquantized models from OpenAI.

  • Notably: there were other OpenAI models that weren't quantized, yet also performed poorly.