Comment by jmorgan
10 days ago
Amazing work. This model feels really good at one-off tasks like summarization and autocomplete. I really love that you released a quantization-aware training version on launch day as well, making it even smaller!
Thank you Jeffrey, and we're thrilled that you folks at Ollama partner with us and the open model ecosystem.
I personally was so excited to run ollama pull gemma3:270b on my personal laptop just a couple of hours ago to get this model on my devices as well!
> gemma3:270b
I think you mean gemma3:270m - it's Dos Commas, not Tres Commas
Maybe it's 270m after Hooli's SOTA compression algorithm gets ahold of it
Ah yes, thank you. Even I still instinctively type B