Comment by asaddhamani
1 day ago
I find it interesting how OpenAI came out with a $200 plan, Anthropic did $100 and $200, then Gemini upped it to $250, and now Grok is at $300.
OpenAI is the only one that says "practically unlimited" and I have never hit any limit on my ChatGPT Pro plan. I hit limits on Claude Max (both plans) several times.
Why are these companies not upfront about what the limits are?
Because they want to have their cake and eat it too.
A fair pricing model would be token-based, so that a user can see what each query costs and only pay for what they actually used. But AI companies want a steady stream of income, and they want users to pay as much as possible while using as little as possible. Therefore they charge a monthly or even yearly price with an unknown number of tokens included, such that you will always pay more than with token-based payments.
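The token-based model described above is trivial to compute, which is part of its transparency. A minimal sketch (the per-token rates and token counts below are made-up illustrative numbers, not any provider's actual prices):

```python
# Hypothetical per-token pricing sketch. RATE_INPUT and RATE_OUTPUT are
# assumed example rates, not real prices from any AI provider.
RATE_INPUT = 3.00 / 1_000_000    # dollars per input token (assumed)
RATE_OUTPUT = 15.00 / 1_000_000  # dollars per output token (assumed)

def query_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of a single query under pay-per-token billing."""
    return input_tokens * RATE_INPUT + output_tokens * RATE_OUTPUT

# A month of usage is just the sum over queries, so the user can see
# exactly where every cent went:
queries = [(2_000, 800), (10_000, 1_500)]  # (input, output) token counts
monthly = sum(query_cost(i, o) for i, o in queries)
print(f"${monthly:.4f}")  # → $0.0705
```

Under a flat subscription, by contrast, the provider effectively sells an option on an unknown quantity of these per-query costs, which is exactly the information asymmetry the comment complains about.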
Personally, I prefer having a fixed, predictable price rather than paying for usage. There is something psychologically nicer about it to me, and I find myself rationing my usage more when I am using the API (which is effectively what you describe already, just minus the UI).
Yep, this is also why gyms don’t charge you $5 per visit. Nobody would come, even if it’s cheaper for the average person.
I don't think it's that. I think they just want people to onboard onto these things before understanding what the actual cost might be once they're not subsidized by megacorps anymore, similar to loss-leading endeavors like Uber and Lyft in the 2010s. I suspect that showing the actual cost of inference would raise questions about the cost effectiveness of these things for a lot of applications. Internally, Google's data query surfaces tell you cost in terms of SWE-time (e.g. this query cost 1 SWE hour), since the incentives are different.
You're right about their intentions in the future. But right now, they are literally losing money every single time someone uses their product...
In most cases, at least. Claude does for sure. So yeah, for now, they're losing money anyways.
You just completely made that up
> Why are these companies not upfront about what the limits are?
Most likely because they reserve the right to dynamically alter the limits in response to market demands or infrastructure changes.
See, for instance, the Ghibli craze that dominated ChatGPT a few months ago. At the time OpenAI had no choice but to severely limit image generation quotas, yet today there are fewer constraints.
Because if you are transparent about the limits, more people will start to game them, which leads to lower limits for everyone – a worse outcome for almost everyone.
tldr: We can't have nice things, because we are assholes.