Comment by nl

2 days ago

Strong disagree on this. Ollama is great for moderately technical users who aren't really programmers or proficient with the command line.

You can disagree all you want, but Ollama does not keep its vendored copy of llama.cpp up to date, and it also ships, via its mirror, badly labeled models that claim to be the upstream originals, often misappropriated from major community members (Unsloth, et al.).

When you pull a model from Ollama's registry, you have no idea what you're actually getting, and inexperienced users aren't even aware of the problem.

Ollama is an unrestricted footgun because of this.

  • I thought the models were like HuggingFace, where anyone can upload a model and you choose which one to pull. The Unsloth ones look like this to me, e.g.: https://ollama.com/secfa/DeepSeek-R1-UD-IQ1_S

    • Ollama themselves upload models to the mirror, and often mislabel them.

      When R1 first came out, for example, their official copy of it was one of the distills, labeled "R1" instead of something like "R1-qwen-distill". They've done this more than once.

  • Not the footgun you think it is. Ollama comes with a few things that make it convenient for casual users.