Comment by A_D_E_P_T
16 hours ago
Obvious political reasons and implications aside, a clear quality gap opened up late last year when Opus 4.5 was released vs. GPT-5. Opus was obviously and demonstrably superior to any GPT-5 tier. The release of GPT-5.2 didn't improve matters, and then Opus 4.6 widened the gap further. Right now talking to GPT-5.2 Pro is 10x slower than chatting with Opus 4.6 and the output returned is, nevertheless, generally lower quality and more "sloppy."
What I'm getting at is that this could be, in part, because Claude is genuinely better at this point in time.
I just cancelled my OpenAI $200 sub yesterday because of all this, but sadly I can't agree.
Codex 5.3 Xhigh > Opus 4.6 in my work to this point.
Hoping for Opus 4.7 or whatever comes next to rectify this as I'm a bit annoyed over having to drop to a lower quality model.
Weirdly enough, I agree with both sides. Opus beats every version of GPT 5 as a chat interface, hands down. ChatGPT, at this point, is mostly me correcting its output style, cadence, behavior, etc, and consistently remaining dissatisfied, meanwhile Opus one-shots things I didn’t even think it could (Typst code). All that said, I do my programming in OpenAI’s Codex app for Mac. It has completely dominated Claude Code for me. I’ll only ever use Opus to check 5.3-Codex’s work. Very weird world we’re living in. I hope it gets even weirder once Deepseek does whatever they’ve been cooking.
What were you using it for? Claude is really good at agentic stuff. Pure coding, I can see Codex being better, but for the entire workflow, I'm not sure.
I use Codex purely for coding, and that's 90% of my use case for AI in general (10% using ChatGPT web for misc stuff). I pop out to Opus in Claude Code regularly to try to stay up on their relative performance, but so far the primary value I've been able to derive from CC is as a second set of eyes for code review / poking holes in plans. For primary planning / debugging / implementation Codex outclasses it atm sadly.
For coding, I agree, Codex-5.3 is the best out there.
But for the chat, I feel like ChatGPT got worse and worse.
I use Opus 4.6 Fast-mode. It produces significantly better results in my work than any Codex 5.3 tier.
Me too. It's great that my employer pays for it and there's basically no budget limit, because this configuration is 10x more expensive than the regular default Sonnet.
Rapid iteration would possibly make up for the drop in quality, but I can't afford to use fast mode as I'm a contractor and pay for my own AI usage :(
Agree on the gap - in my own complex greenfield software dev spec test, Opus 4.6 blows Codex 5.3 out of the water, by a wide margin, both in UI and backend.
Massively better, and I cannot understand how so many comments online say they're comparable (other than paid actors, which now fits the right-wing angle OpenAI takes, since right-wing paid online comments seem quite common overall).
I remember on the Opus 4.5 release date watching what it could do with the test app I wanted it to build and saying out loud to myself "oh shit" because of how much better it was at conversation, planning, understanding, and building. Posts like this[0] say similar things: the Opus 4.5 release + Claude Code was the tipping point, the gap is widening, and Anthropic has infinitely more momentum and is going in the better direction, with useful models that aren't fully aligned with bad actors.
[0] https://news.ycombinator.com/item?id=46515696
Yep. For the past month I’ve been doing this thing where every time I need something from AI, I give Opus and Codex the same prompt. Opus is just better by a wide margin, especially on complex tasks. It uses tools quite a lot, taps into available MCP servers when it makes sense, and can think about repercussions down the line much better. Codex I feel is optimized for brevity, approaching terseness. Hard to put my finger on it but it’s never as thorough and it always misses important details.
No, it is because of the public perception of Anthropic holding a principled stance against allowing their software to pull the trigger and kill humans. ChatGPT still has the bigger brand name recognition.
Anyone who's used both Claude and ChatGPT will instantly agree which is better, by a large margin. There's maybe a brand recognition long tail, but those are more likely the rare occasional users on the free tier. Thus ChatGPT is becoming the shitty free AI app while Claude is what you use to get real work done. Time (in months) will tell how this goes.
If that's entirely the case, there could still be interesting implications, as people who switch to Claude are unlikely to switch back to ChatGPT in the near future. (If, that is, they regularly use LLMs for any technical or professional task.)
OpenAI had first mover advantage.
Sam squandered it.
I guarantee you the people downloading these apps aren't thinking about that. They use what works best.
How would most people know what works best? Most people are only using one.
https://old.reddit.com/comments/1rh60py
https://www.windowscentral.com/artificial-intelligence/cance...