Comment by FrasiertheLion
2 months ago
Ollama does heavily quantize models and uses a very short context window by default, but that has not been my experience with unquantized, full-context versions of Llama 3.3 70B and especially DeepSeek R1, and the benchmarks reflect this. For instance, I used DeepSeek R1 671B as my daily driver for several months, and it was on par with o1 and unquestionably better than GPT-4o (o3 is certainly better than all of them, but we've typically seen open-source models catch up within 6-9 months).
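If you want to check what Ollama is actually serving, `ollama show` reports both the quantization level and the model's maximum context length, and the per-session context can be raised in the REPL. A minimal sketch (the model name and the 32768 value are illustrative, and the default context varies by Ollama version):

```sh
# Inspect the served model: quantization and max context length
# are both listed in the output of `ollama show`.
ollama show llama3.3

# Ollama serves a small context window by default (a few thousand
# tokens, depending on version) regardless of the model's maximum.
# Inside the interactive REPL it can be raised per session:
ollama run llama3.3
# >>> /set parameter num_ctx 32768
```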
Please shoot me an email at tanya@tinfoil.sh, would love to work through your use cases.
Hey Tanya! Thank you for helping me understand the results better.
I just posted the results of another basic interview analysis (4o vs. Llama4) here: https://x.com/SpringStreetNYC/status/1923774145633849780
To your point: do I understand correctly that, for example, running the default Llama 4 model via Ollama gives a very short context window even when the model's advertised context is something like 10M tokens? And that in order to "unlock" the full-context version, I need to get the unquantized version?
For reference, here's what `ollama show llama4` returns:

```
parameters          108.6B      # llama4:scout
context length      10485760    # 10M
embedding length    5120
quantization        Q4_K_M
```
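In case it matters for your answer, here's how I understood raising the served context, via a Modelfile (assuming `num_ctx` is the right knob; the 1M value is just an illustration and presumably bounded by memory):

```sh
# Build a variant of llama4 that serves a larger context window.
# PARAMETER num_ctx is a standard Ollama Modelfile directive;
# 1048576 (1M) is illustrative -- actual usable context depends
# on available RAM/VRAM, and the weights stay Q4_K_M-quantized.
cat > Modelfile <<'EOF'
FROM llama4
PARAMETER num_ctx 1048576
EOF

ollama create llama4-longctx -f Modelfile
ollama run llama4-longctx
```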