Comment by diggan
3 days ago
> Ollama is set on becoming the standard interface for companies to deploy "open" models.
That's not what I've been seeing, but obviously my perspective (like anyone's) is limited. What I see are deployments of vLLM, SGLang, llama.cpp, or even HuggingFace's Transformers with a custom wrapper, at least for inference with open-weight models. Somehow, the only places I came across recommendations for running Ollama were on HN and, earlier, on r/LocalLlama, but not even there as of late. The people who used to run Ollama for local inference (+ OpenWebUI) now seem to mostly be running LM Studio, myself included.