I might flip that, given how hard it's been for Claude to handle longer-context tasks like a coding session with iterations vs. a single top-down diff review.
I have a `codex-review` skill with a shell script that uses the Codex CLI with a prompt. It tells Claude to use Codex as a review partner and to push back if it disagrees. They will sometimes go through 3 or 4 back-and-forth iterations before they reach consensus. It's not perfect, but it does help because Claude will point out the things Codex found and give it credit.
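A minimal sketch of that kind of script, assuming the Codex CLI's non-interactive `codex exec` mode (the prompt wording here is illustrative, not the actual skill):

```sh
#!/usr/bin/env sh
# Sketch of a codex-review-style script.
# Assumes the Codex CLI's non-interactive mode (`codex exec`);
# check `codex --help` for the exact invocation on your install.

# Review whatever has changed against HEAD.
DIFF="$(git diff HEAD)"

codex exec "You are a code review partner for another agent.
Review the diff below. Be specific, cite file and line,
and push back on anything you disagree with.

$DIFF"
```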
Then I pass the review back to Claude Opus to implement it.
Just curious, is this a manual process or have you automated these steps?
zen-mcp (now called pal-mcp, I think), and then Claude Code can just pass things to Gemini (or any other model).
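(Setup is roughly: register the server with Claude Code via `claude mcp add`; the server launch command below is a placeholder, check the zen-mcp README for the real one.)

```sh
# Sketch: register zen-mcp with Claude Code so it can route requests
# to Gemini or other models. `claude mcp add` is the real subcommand;
# "uvx zen-mcp-server" is a placeholder for the actual launch command
# from the zen-mcp README.
claude mcp add zen -- uvx zen-mcp-server
```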
Sometimes; it depends on how big the task is. I just find 5.2 so slow.
I have Opus 4.5 do everything then review it with Gemini 3.