Comment by spmurrayzzz

7 days ago

> I spent at least an hour trying to get OpenCode to use a local model and then found a graveyard of PRs begging for Ollama support

Almost from day one of the project, I've been able to use local models. llama.cpp worked out of the box with zero issues, and the same goes for vLLM and SGLang. The only tweak I had to make initially was manually changing the system prompt in my fork, but now you can do that via their custom modes feature.
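For context, the reason llama.cpp tends to "just work" is that its server exposes an OpenAI-compatible chat endpoint, which is the same interface these coding agents point a custom base URL at. A minimal sketch of talking to it directly (assuming a `llama-server` instance is already running locally; the port and model name here are illustrative, not anything OpenCode-specific):

```python
# Minimal check that a local llama.cpp server answers the OpenAI-compatible
# chat API -- the same interface a coding agent's custom base URL hits.
# Assumes something like `llama-server -m model.gguf --port 8080` is running.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # llama-server's OpenAI-compatible endpoint
    api_key="sk-no-key-needed",           # llama.cpp ignores the key by default
)

resp = client.chat.completions.create(
    model="local",  # llama-server serves whatever model it was started with
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(resp.choices[0].message.content)
```

vLLM and SGLang expose the same style of OpenAI-compatible endpoint, so the setup is essentially identical, just with a different base URL.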

The Ollama support issues are specific to that implementation.