Comment by arjie
14 hours ago
Oh that's interesting. Right after they signed the deal with Amazon so maybe it was all compute constrained. In any case, I tried using the Codex $20/mo plan and the limits are so low I can hardly get anywhere before my agent swaps to a different agent.
I suspect that if I do this without an official Anthropic notice I'll lose my precious Max $200/mo account, so I'll sit tight for a while.
Wait, how?
I had an idea on a whim to vibe-engineer an irccloud replacement for myself.
Started with claude web + Opus 4.7 and continued with Claude Code. Ate up two full cycles of my quota in maybe 6-10 prompts.
Then I iterated on that with pi.dev+codex for HOURS, managed to use 50% of my Codex Pro subscription.
Yeah, I tried Codex Pro today and the $20 plan is way more generous than Claude's, especially lately.
I've had the cheapest personal tier for both since forever and I think I've run out of Codex quota _once_.
With Claude it's a constant battle of typing /usage after every iteration and trying to guess whether there's enough left for the next task =)
Consider Z.ai if you need "bulk" usage; GLM is now very good. They still have the occasional API brownout, however.
I used to use GLM mostly and had a Claude Pro subscription for occasional review and clean up.
Now I just use GLM.
I do think Claude Max is value for money. But it's more value than I personally need and I like Anthropic less and less.
Naive question but are you not afraid z.ai will train on your personal data?
FAANG companies have been doing this all along, haven't they? Regardless of their stated policies. The US is no better than China from my point of view. In this case, I see no difference between sending my prompts to US or Chinese companies. At least the Chinese models are open source.
I accept that all the providers will do what I'd consider unethical with my data, and I simply don't expose anything I'm not willing to treat as the price of doing the business I want.
The other criticism I see is "ask it what happened in 1989," but as my use case isn't writing a high school history essay, I simply don't care. Nor do I believe one should seek those kinds of answers from any AI. (If you're curious, it simply cuts off the reply.)
I fully appreciate that YMMV and what sits right for others won't align with what's acceptable to me. Anthropic and OpenAI are both in my bad books as much as Z.ai. Pick your poison, as they say.
They said from the beginning that it was compute constrained and that OpenClaw was causing way more usage than they could handle.
GPT-5.4 is brutally consumptive, for sure. It's not very verbal, but gpt-5.3 codex is wildly smart about coding and planning, and way, way less token hungry.