Comment by infecto
7 hours ago
Honestly that’s a detail far removed from the discussion. Folks are surprised they cannot use something that would obviously be against the T&Cs.
Everyone knows no one reads the terms, and that it isn't feasible for a normal person to do so — so I don't know why violating them would be "obvious" to anyone. If you're paying for a subscription with known limits, you'd expect to be able to use up to those limits. It's no more obvious to me than getting banned for using an API token with a different client, or a website deciding to ban Firefox users.
I just fail to see your argument. You are paying for Claude Code or Antigravity, not for the raw underlying compute. It's not about reading T&Cs; the expectation is that paying for a service does not give you the right to freely use the API however you want. That's why I said it really reminds me of a private vs. public API: don't be surprised if you get shut out of the private one. All subscriptions are bound by acceptable use.
Maybe I am out of touch, but I struggle to see why folks are surprised by this. I would argue that banning accounts is probably too harsh, but we will see whether that is just a short-term remedy.
There is a reason that in general the cost of a token via API is more expensive than when using the consumer tool.
I wouldn't expect consumers to even be aware that API keys exist, much less know the pricing differences. When I go to the Google One plans page, it just says I get all these AI things with higher limits, and that there are tools that can use my account to do cool stuff. I wouldn't expect that a program logging into an AI service that I pay for, as me, to do AI things is at all untoward. It's no different from running a bot that does high-level control and delegates to their specific program (which is what all of this AI stuff, and really software in general, is about: automating whatever you're doing), or from giving Codex an auth token to use Jira or GitLab. I expect that's the intended purpose of the auth token: let me perform whatever actions I'm authorized to do, within whatever limits the service sets.
Literally the entire buzz around all this AI stuff is that it lets you automate things and do more, faster. Why would you not expect people to automate their interactions with the AI service itself? AI automating its own interactions with itself is exactly what the AI companies are pushing as the immediate future and the paradigm shift everyone should hop onto.