Comment by sheepscreek
3 days ago
Not for Codex. Not even for Gemini/Antigravity! I am truly shocked by how much mileage I get out of them. I recently bought the $200/mo OpenAI subscription but could barely use 10% of it. For over a month now I've used Codex for at least 2 hrs every day and have yet to hit the quota.
With Gemini/Antigravity, there's the added benefit of switching to Claude Opus 4.5 once you hit your Gemini quota, and Google's limits are waaaay more generous than Anthropic's. I can use Opus alone for an entire coding session. It is bonkers.
So, having subscribed to all three at their lowest subscriptions (for $60/mo), I get the best of each and never run out of quota. I've also got a couple of open-source model subscriptions, but I've barely had the chance to use them since Codex and Gemini got so good (and so generous).
The fact that OpenAI is only spending about 30% of its revenue on servers and inference, despite being this generous, is mind-boggling to me. I think the good times are likely going to last.
My advice: get the lowest-tier Gemini and Codex subscriptions, and add some credits on top of the Codex one in case you hit the quota and can't wait. You'll never spend over $100/mo even if you're building complex apps like I am.
> I recently bought the $200/mo OpenAI subscription but could barely use 10% of it
This entire comment is confusing. Why are you buying the $200/month plan if you’re only using 10% of it?
I rotate providers. My comment above applies to all of them. It really depends on the work you’re doing and the codebase. There are tasks where I can get decent results and barely make the usage bar move. There are other tasks where I’ve seen the usage bar jump over 20% for the session before I get any usable responses back. It really depends.
I bought it to try Atlas, their agentic browser, before it was open to Plus users. I convinced myself that I could use the additional capacity to multitask and push through hard-core problems without worrying about quota limits.
For context, this was a few months ago, when GPT-5 was new and I was used to constantly hitting o3 limits. It was an experiment to see whether the higher plan could pay for itself. It most certainly can, but I realized I just don't need it: my workflow has evolved into switching between different agents on the same project, so now I have much less need for any one of them.
To use up the Pro tier plan you have to close the loop, so to speak, so that Codex knows how to test the quality of its own output and incrementally inch toward its goal. How hard that is depends on your project.
You should also queue up plenty of "continue your work"-type messages.
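Concretely, here's a minimal sketch of what closing the loop can look like (assuming a Python project using pytest; the slugify function and file name are purely illustrative, not from any real project): keep the code under development and its checks in one place, point the agent at a single test command, and let it iterate against that.

```python
# slugify_demo.py -- hypothetical example of giving the agent a loop to close:
# the function under development and its tests live in one file, so the agent
# can run `pytest slugify_demo.py -q` after every change and judge its own
# output instead of waiting for a human to eyeball it.
import re


def slugify(text: str) -> str:
    """Lowercase the text, drop punctuation, and join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)


def test_basic() -> None:
    assert slugify("Hello, World!") == "hello-world"


def test_collapses_whitespace() -> None:
    assert slugify("  many   spaces ") == "many-spaces"


def test_empty_input() -> None:
    assert slugify("") == ""
```

With a command like `pytest slugify_demo.py -q` in the instructions, queued "continue your work" messages have a concrete pass/fail signal to push against instead of just spinning.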
Not the same poster, but apparently they tried the $200/mo subscription and, after seeing they didn't need it, "subscribed to all three at their lowest subscriptions (for $60/mo)" instead.
> apparently they tried the $200/mo subscription and, after seeing they didn't need it
This is why it's confusing, though. Why start with the highest plan when it's so easy to upgrade later?
> I rotate providers. My comment above applies to all of them. It really depends on the work you’re doing and the codebase. There are tasks where I can get decent results and barely make the usage bar move. There are other tasks where I’ve seen the usage bar jump over 20% for the session before I get any usable responses back. It really depends.
Ah, I missed this part. Yes, this is basically what I would recommend today as well: buy basic subscriptions from a couple of different frontier-model providers and see which works better on which problems. For me, I use them all; for someone else it might be Codex alone. YMMV, but it's totally worth exploring!
I do the same and agree this works well.
It's worth noting that the Claude subscription seems notably less than the others.
Also there are good free options for code review.
My first try at LLM coding was with Claude: I got back confusing results for a hello-world++ type test, ran out of credits in a couple of hours, and asked for a refund the same day. Now I'm slowly teaching myself prompt engineering on qwen3-coder. It goes in circles much like Claude did, but at least it does so at the cost of electricity at the wall, since I already had a GPU.
Can confirm this is the way right now