
Comment by vorticalbox

3 days ago

> some of the cutting edge local LLMs have been a little bit slow to be available recently

You can pull models directly from Hugging Face: `ollama pull hf.co/google/gemma-3-27b-it`