Comment by alok-g
3 days ago
I have heard about this approach elsewhere too. Could you please provide some more details on the setup steps and usage approach? I would like to replicate it. Thanks.
I simply ask Claude Sonnet, via Claude Code, to use opencode. That's it! Example:
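Something along these lines (the file name is just a placeholder): tell Claude to "plan the refactor of src/parser.ts yourself, then hand the implementation to opencode (its non-interactive mode is roughly `opencode run "<task>"`, if I remember the syntax right), and review the diff it produces before committing." Claude keeps the planning and review in its own context and shells out to opencode for the grunt work.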
There are a few decent approaches I've tried for a workflow with a planning/reviewer model set (e.g. Claude, Codex, Gemini) and an execution model (e.g. GLM 4.6, flash models, etc). All three of these let you live in a single coding CLI but easily swap in different models for different tasks.
- claude code router - basically lets you swap other models into the real Claude Code CLI and set up triggers for when to use which one (e.g. plan mode uses real Claude; non-plan tasks or certain keywords use GLM)
- opencode - this is what I'm mostly using now. Similar to CCR, but I find it a lot more reliable with alt models. Thinking tasks go to Claude, Gemini, or Codex, and lesser execution tasks go to GLM 4.6 (on Cerebras).
- sub-agent mcp - another cool way is to use an MCP (or a skill, or a custom /command) that runs another agent CLI for certain tasks. The MCP approach is neat because your thinker agent, like Claude, can decide when to call the execution agents, when to call in another smart model to review its own thinking, etc., instead of it being an explicit choice from you. So you end up with the MCP plus an AGENTS.md that instructs it to aggressively use the sub-agent MCP when it's a basic execution task, a review, etc. (rough sketch of the server below).
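If you want to roll the sub-agent MCP yourself, here's a minimal sketch of the server side using the official TypeScript MCP SDK. It assumes opencode's `opencode run "<task>"` non-interactive mode as the execution agent, and the tool name `delegate_task` is just my own label, nothing standard:

```ts
// sub-agent-mcp.ts - minimal MCP server that exposes a cheaper execution agent as a tool.
// Sketch only: assumes @modelcontextprotocol/sdk is installed and an `opencode` binary is on PATH.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { execFile } from "node:child_process";
import { promisify } from "node:util";
import { z } from "zod";

const run = promisify(execFile);

const server = new McpServer({ name: "sub-agent", version: "0.1.0" });

// One tool: hand a self-contained task to the execution model and return its report.
server.tool(
  "delegate_task",
  "Run a cheap execution agent on a well-scoped coding task and return its output",
  { task: z.string().describe("A self-contained task description, including file paths") },
  async ({ task }) => {
    // Assumption: `opencode run "<task>"` is the non-interactive invocation.
    const { stdout, stderr } = await run("opencode", ["run", task], {
      maxBuffer: 16 * 1024 * 1024, // agent output can be large
    });
    return { content: [{ type: "text", text: stdout || stderr }] };
  }
);

// The thinker CLI (Claude Code or any MCP client) talks to this over stdio.
await server.connect(new StdioServerTransport());
```

Register it with your thinker CLI like any other stdio MCP server (`claude mcp add ...` in Claude Code's case), and the AGENTS.md line is basically "for mechanical edits, refactors, and test fixes, call delegate_task instead of editing directly; for anything architectural, plan first and delegate only the final implementation" - wording to taste.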
I also find that with this setup, being able to tap in an alt model when one is stuck, or to get a review from an alt model, helps keep things unstuck and moving.
RooCode and KiloCode also have an Orchestrator mode that can create sub-tasks, and you can specify which model to use for what - and since they report their results back after finishing a task (implement X, fix Y), the context of the more expensive model doesn't get as polluted. Probably one of the most user-friendly ways to do that.
A simpler approach without sub-tasks would be to just use the smart model for Ask/Plan/whatever mode and the dumb-but-cheap one for Code mode, so the smart model can review the results as well and suggest improvements or fixes.