Comment by paolomainardi
1 day ago
Not strictly related, but since Copilot could be the next to violate the TOS, I've asked for an official response here: https://github.com/orgs/community/discussions/183809. If someone can help raise this question, it's more than welcome.
Copilot in VSCode is integrated with VSCode's LLM provider API, which means any plugin that needs LLM capabilities can submit requests to Copilot. Roo Code supports that as an option. And of course, there's a plugin that starts an OpenAI/Anthropic-compatible web server inside VSCode that just calls into the LLM provider API (see the sketch below). It seems that if you use unlimited models (like GPT-4.1) you probably get unlimited API calls. However, those models don't seem to be very agentic when used in Claude Code.
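For context, here's a minimal sketch of how an extension can route a request through Copilot using VS Code's Language Model API (`vscode.lm`). The command id and the `family` string are illustrative assumptions, not a definitive setup; what actually works depends on which models Copilot exposes to your account.

```typescript
import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
  context.subscriptions.push(
    vscode.commands.registerCommand('demo.askCopilot', async () => {
      // Ask for a Copilot-provided chat model; the family string is an
      // assumption and may need adjusting to whatever Copilot offers.
      const [model] = await vscode.lm.selectChatModels({
        vendor: 'copilot',
        family: 'gpt-4.1',
      });
      if (!model) {
        vscode.window.showWarningMessage('No Copilot chat model available.');
        return;
      }

      const messages = [
        vscode.LanguageModelChatMessage.User('Summarize the open file in one sentence.'),
      ];
      const response = await model.sendRequest(
        messages,
        {},
        new vscode.CancellationTokenSource().token
      );

      // Stream the response fragments into an output channel.
      const out = vscode.window.createOutputChannel('Copilot Demo');
      for await (const fragment of response.text) {
        out.append(fragment);
      }
      out.show();
    })
  );
}
```

A plugin that exposes an OpenAI/Anthropic-compatible endpoint is essentially wrapping this same call path behind a local HTTP server, so external tools can talk to Copilot as if it were a hosted API.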
GitHub doesn't offer any unlimited-style AI model plans, so I don't think they'll care. Their pricing is fairly aligned with their costs.
This really only affects Claude, since Anthropic markets its plans as effectively unlimited (with various usage rate limits), but it's clearly costing them a lot more than what they sell it for.
Copilot plan limits are, however, "per prompt", and prompts that ask the agent to do a lot of work with a large context are obviously going to be more expensive to run than prompts that don't.