
Comment by __mharrison__

4 hours ago

I never really used Codex (found it too slow), just 5.2, which is going to be an excellent model for my work. This looks like another step up.

This week, I'm all local though, playing with opencode and running Qwen3 Coder Next on my little Spark machine. With the way these local models are progressing, I might move all my LLM work locally.

I think Codex got much faster for smaller tasks in the last few months, especially if you turn thinking down to medium.

I think the slow feeling is a UI thing in Codex.

  • I realize my comment was unclear. I use the Codex CLI all the time, but generally with this invocation: `codex --full-auto -m gpt-5.2`

    However, when I use the 5.2-codex model, I've found it to be very slow and worse (hard to quantify, but I preferred straight-up 5.2 output).
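
    A rough sketch of the two invocations being compared; the `-c model_reasoning_effort` override is my understanding of how the CLI exposes the "thinking" level mentioned above, so treat that config key as an assumption:

    ```sh
    # Plain GPT-5.2 with full-auto approvals (the invocation I normally use)
    codex --full-auto -m gpt-5.2

    # Same setup with the codex-tuned model, thinking turned down to medium
    # (model_reasoning_effort is assumed to be the relevant config key)
    codex --full-auto -m gpt-5.2-codex -c model_reasoning_effort="medium"
    ```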