Comment by vorticalbox
3 days ago
> some of the cutting edge local LLMs have been a little bit slow to be available recently
You can pull models directly from Hugging Face: `ollama pull hf.co/google/gemma-3-27b-it`
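For example, a minimal sketch of the `hf.co/{user}/{repo}:{quantization}` form (the repo and quant tag here are illustrative; substitute any repo on the Hub that actually hosts GGUF files):

```sh
# Pull a GGUF model straight from Hugging Face; the repo and Q8_0 tag
# are assumptions for illustration, pick any available quantization
ollama pull hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF:Q8_0

# Then run it like any locally registered model
ollama run hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF:Q8_0
```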
I know, I often do that, but it's still not enough. E.g. things like SmolLM3, which required some llama.cpp tweaks, wouldn't work via GGUF for the first week after it had been released.
Just checked: https://github.com/ollama/ollama/issues/11340 is still open.