Comment by mapontosevenths

4 hours ago

What I said above was a bit confused. What I've actually done is connect both OpenCode and OpenWebUI to Ollama. I just use OpenWebUI to manage the models and for testing. Once you have it working it's very nice: you can pull a new model just by typing its name and waiting while it downloads.

Connecting Ollama to OpenCode and OpenWebUI is relatively trivial. OpenWebUI has a nice GUI for it. In OpenCode you just edit ~/.config/opencode/opencode.json to look something like this. The model names have to match the ones you see in OpenWebUI, but the friendly "name" key can be whatever you need to be able to recognize it.

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "qwen3.5:122b": {
          "name": "Qwen 3.5 122b"
        },
        "qwen3-coder:30b": {
          "name": "Qwen 3 Coder"
        },
        "gemma4:26b": {
          "name": "Gemma 4"
        }
      }
    }
  }
}
```
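
If you'd rather not hand-edit the JSON, a little script can write it for you. This is just a sketch: the model tags here are examples, so swap in whatever you actually see in OpenWebUI (or `ollama list`), and note that it overwrites any existing config at that path.

```python
import json
import os

# Example model tags -- replace with the ones your Ollama instance actually has.
MODELS = {
    "qwen3-coder:30b": {"name": "Qwen 3 Coder"},
}

config = {
    "$schema": "https://opencode.ai/config.json",
    "provider": {
        "ollama": {
            "npm": "@ai-sdk/openai-compatible",
            "name": "Ollama",
            # Ollama's OpenAI-compatible endpoint on the default port.
            "options": {"baseURL": "http://localhost:11434/v1"},
            "models": MODELS,
        }
    },
}

path = os.path.expanduser("~/.config/opencode/opencode.json")
os.makedirs(os.path.dirname(path), exist_ok=True)
# Warning: this clobbers an existing opencode.json rather than merging.
with open(path, "w") as f:
    json.dump(config, f, indent=2)
print(f"wrote {path}")
```

Restart OpenCode after writing the file and the models should show up in its model picker under the "Ollama" provider.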