Comment by mapontosevenths

8 hours ago

I'm just some guy on hackernews, but I actually did try this on my DGX Spark. I went back to Gemma 4 after a few rounds. My orchestration model kept having to send the Qwen model back to fix mistakes that Gemma wouldn't have made. I wound up with less working code per hour due to the mistakes.

Technically, I use OpenWebUI with Ollama, so I used the weights below, but it should be the same.

https://ollama.com/kwangsuklee/Qwen3.5-27B-Claude-4.6-Opus-R...

I'd be super interested to hear about your workflow with OpenWebUI. I haven't figured out how to use it for anything other than the basic chatbot UI, and I haven't been able to hook anything else into it.

  • What I said above was a bit confused. What I've actually done is connect both OpenCode and OpenWebUI to Ollama. I just use OpenWebUI to manage the models and for testing. Once you have it working it's very nice: you can pull a new model just by typing its name and waiting while it downloads.

    Connecting Ollama to OpenCode and OpenWebUI is relatively trivial. In OpenWebUI there's a nice GUI. In OpenCode, you just edit ~/.config/opencode/opencode.json to look something like this. The model names have to match the ones you see in OpenWebUI, but the friendly "name" key can be whatever you need to be able to recognize it.

      {
        "$schema": "https://opencode.ai/config.json",
        "provider": {
          "ollama": {
            "npm": "@ai-sdk/openai-compatible",
            "name": "Ollama",
            "options": {
              "baseURL": "http://localhost:11434/v1"
            },
            "models": {
              "qwen3.5:122b": {
                "name": "Qwen 3.5 122b"
              },
              "qwen3-coder:30b": {
                "name": "Qwen 3 Coder"
              },
              "gemma4:26b": {
                "name": "Gemma 4"
              }
            }
          }
        }
      }
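    Since the model IDs in that file have to match the Ollama tags exactly (only the "name" values are free-form labels), a quick way to catch a typo is to parse the config and list the IDs OpenCode will actually send. This is just a minimal sketch with the JSON above embedded as a string; the helper name `model_ids` is my own, not part of OpenCode.

```python
import json

# The opencode.json from above, embedded as a string for this sketch.
CONFIG = """
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama",
      "options": {"baseURL": "http://localhost:11434/v1"},
      "models": {
        "qwen3.5:122b": {"name": "Qwen 3.5 122b"},
        "qwen3-coder:30b": {"name": "Qwen 3 Coder"},
        "gemma4:26b": {"name": "Gemma 4"}
      }
    }
  }
}
"""

def model_ids(config_text):
    """Return the model IDs OpenCode will request from Ollama.

    These are the keys under provider.ollama.models; each one must
    match a tag shown by OpenWebUI (or `ollama list`) exactly.
    """
    cfg = json.loads(config_text)
    return sorted(cfg["provider"]["ollama"]["models"])

print(model_ids(CONFIG))
```

    Compare that output against your pulled model tags; anything missing on the Ollama side will show up in OpenCode's model picker but fail when you try to use it.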