Comment by haiku2077
6 days ago
> How does that work exactly? Do you have a link?
https://ollama.com lets you run models on your own hardware and serve them over a network. Then you point your editor at that server, e.g. https://zed.dev/docs/ai/configuration#ollama
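For the curious, here's a minimal sketch of what "serve them over a network" looks like in practice: Ollama exposes an HTTP API (on port 11434 by default), and anything that can speak HTTP can use it. The host, port, and model name below are assumptions; substitute whatever you've actually pulled and wherever your server is running.

```python
# Minimal sketch: query an Ollama server over its HTTP API.
# Assumes the server is reachable at the default port and that a
# model named "llama3" has already been pulled -- adjust as needed.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # assumed host/port

payload = json.dumps({
    "model": "llama3",                      # hypothetical model name
    "prompt": "Say hello in one sentence.",
    "stream": False,                        # request a single JSON reply
}).encode("utf-8")

req = urllib.request.Request(
    OLLAMA_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

# With streaming disabled, the completion arrives in the "response" field.
print(body["response"])
```

An editor integration like the Zed one linked above is doing essentially this under the hood; you just give it the server's URL in its settings.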
Don't use Ollama; use llama.cpp instead.