Comment by hx8
16 hours ago
I really dislike these AI-middleman plans. The value-add Microsoft brings to GitHub Copilot is near zero compared to buying directly from Anthropic or OpenAI, where 99% of the value is actually delivered. I don't understand why anyone would want to deal with Microsoft as a vendor if they don't have to. The short period of discounted usage was always an obvious rug pull.
> I don't understand why anyone would want to deal with Microsoft as a vendor if they don't have to.
It can bill to our Azure sub and I don't have to go through the internal bureaucracy of purchasing a new product/service from a new vendor.
I would also add that the models they supply through Azure Foundry are covered under my employer's existing customer agreement, under which MS is not allowed to train models on our data (which might include IP of the company or its clients). For organizations worried about that, it's nice & cozy.
They just altered this deal for everyone else. I wonder how long they'll wait before opting you all into training by default, too?
Bingo. GitHub Copilot is mostly for organizations that have an existing Azure bill and would rather see that go up than get a new vendor bill. Professional middlemen.
This is pretty straightforward compared to the giant universe of companies that resell Microsoft services.
The number of intermediaries that some customers, especially governmental agencies, go through to get just an Azure bill can be wild...
If you’ve ever had to be part of the frankly batshit-insane procurement process that some organizations force you to run the gauntlet of, doing this becomes a very obvious and appealing option
Ah, the AWS Marketplace procurement model, where products mostly exist so that you can line item things through Amazon rather than going through a lengthy procurement process
Not surprised to see this is common. At my company, basically everyone and their mother is using Claude Code via Bedrock, despite us having company-wide Windsurf, Copilot, and ChatGPT Enterprise accounts
It's understandable but sad that this will often be the reason.
It’s got pretty good integration into VS Code, and you can bring your own key anyway
Microsoft's USP in one sentence.
The value-add that Microsoft brings is checking the boxes that you want checked.
If you need some random Egyptian government compliance certification for your vendors or whatever, Microsoft probably has that, Anthropic probably doesn't. Microsoft's (as well as Oracle's) entire deal these days is figuring out what customers care about compliance-wise, and structuring their offerings to deliver exactly that. Whether they're selling their own products, or re-selling somebody else who doesn't have that kind of global footprint and clout, is secondary at best.
I disagree. I like the standard interface, being able to easily switch models as things invariably change from week to week, and having a relationship with one company. That's why I'm a big fan of openrouter and Cursor. Not too much experience with Copilot, but I think there's a huge value add in AI middlemen.
Because if you’re a vscode user up until a couple days ago you could hammer Opus 4.6 all day every day and pay nowhere close to the Claude Max plan. Many people exploited this and the subsidy is closing.
Exactly, it was simply much cheaper and perfect for my use case.
Just use claude code directly with a pro plan instead of copilot for roughly the same cost.
Oh wait, never mind.
https://news.ycombinator.com/item?id=47855565
The Anthropic Pro plan cost double and gave you maybe a tenth of the usage, depending on how efficiently you used Copilot requests, and no access to a large set of models, including GPT, Gemini, and the free ones.
> Just use claude code directly with a pro plan
Usage limits are/were higher in Copilot. They also charge per prompt, not per token.
Yeah this was me. I just got a message that I hit my limit and now I am looking into what it takes to run Qwen on local hardware.
A suggestion: Don't invest in any new hardware to run an LLM locally until you've tried the model for a while through OpenRouter.
The Qwen models are cool, but if you're coming from Opus you will be somewhere between mildly to very disappointed depending on the complexity of your work.
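The suggestion above is easy to try: OpenRouter exposes an OpenAI-compatible chat-completions endpoint, so a few lines of stdlib Python are enough to kick the tires on a Qwen model before investing in hardware. A minimal sketch; the model slug is a placeholder to verify against OpenRouter's model list:

```python
import json
import os
import urllib.request

# OpenRouter exposes an OpenAI-compatible chat-completions endpoint.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
MODEL = "qwen/qwen3-coder"  # illustrative slug; check openrouter.ai/models

def build_request(prompt: str, model: str = MODEL) -> dict:
    """Build the JSON payload for a single chat-completion call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt: str) -> str:
    """Send the prompt; requires OPENROUTER_API_KEY in the environment."""
    payload = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__" and "OPENROUTER_API_KEY" in os.environ:
    print(ask("Write a Python function that reverses a linked list."))
```

Pointing the same payload at a different slug lets you A/B the Qwen variants against whatever you use today before spending anything on a GPU.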
Been having a ton of fun with Copilot CLI pointed at a local Qwen 3.6. If you’re willing to put more specificity into your prompts, then delegating from GPT-5.4 or Opus to a local Qwen has been great so far.
I have to say, this was how I used GitHub Copilot in VS Code: I used Opus 4.6 for most tasks. I am not sure I want to keep my Copilot plan now.
Opus 4.6 is no longer available, and Opus 4.7 chews through monthly limits with reckless abandon. The value-add of GH Copilot is basically gone (at least for individuals on the Pro or Pro+ plans).
Good, I hope Microsoft lost a lot of money in the deal.
From a friend in GitHub: they've been burning so much money because of Opus.
You are not their target audience.
The value add is the GitHub integration. By far the best.
GH has cloud agents that can be kicked off from VS Code; they're deeply integrated with GH and very easy to set up. You can apply enterprise policies on model access, MCP whitelists, model behavior, etc. from GitHub Enterprise, layered down to org and repo (multiple layers of controls for enterprises and teams). It also aggregates and collects metrics across the org.
It also has tight integration with Codespaces which is pretty damn amazing. `gh codespace code` and it's an entire standalone full-stack that runs our entire app on a unique URL and GH credentials flow through into the Codespace so everything "just works". Basically full preview environments for the full application at a unique URL conveniently integrated into GH. But also a better alternative to git worktrees. This is a pretty killer runtime environment for agents because you can fully preview and work on multiple streams at once in totally isolated environments.
If you are a solo engineer, none of this is relevant and probably doesn't make sense (except Codespaces, which is pretty sweet in any case), but for orgs using the GH stack is a huge, huge value add because Microsoft is going to have a better understanding of enterprise controls.
If you want to understand the value add of Copilot, I think you need to spend a bit of time digging into the enterprise account featureset in GH, try Codespaces, try Copilot cloud agents. Then it clicks.
> The value-add that Microsoft brings to Github Copilot is near zero compared to directly buying from Anthropic or OpenAI
Over here in the EU, we need to store sensitive data on EU servers. Anthropic only offers US-hosted versions of their models, while Google Cloud and Azure have EU-based servers.
> I don't understand why anyone would want to deal with Microsoft as a vendor if they don't have to.
This is about personal plans. Github Copilot is half the price of any competition I found.
It's just a decent deal for light users.
Copilot was first in AI-based development, with tab completions.
Now, it may be the right call to immediately give up and shut down after Opus 4.5, but models and subscriptions are in flux right now, so the right call is not at all obvious to me.
The agentic AI models could be commoditized, some model may excel in one area of SWE, while others are good for another area, local models may be at least good enough for 80%, and cloud usage could fall to 20%, etc. etc.
Staying in the market and providing multi-model and harness options (Claude and Codex usable in Copilot) is good for the market, even if you don't use it.
I found the Copilot harness generally more buggy/dysfunctional. After seeing a "long" agent response get dropped (it still counts against usage, of course) too many times, I gave up on the product.
It doesn't matter how competent the actual model is, or how long it's able to operate independently, if the harness can't handle it and drops responses. It made me wonder: are they even using their own harness?
At least Anthropic is obviously dogfooding on Claude Code which keeps it mostly functional.
I only ever used Copilot through OpenCode and for a while it was a crazy good deal. Quite possibly two orders of magnitude cheaper than API credits.
It was great while it lasted.
I exclusively use prepaid OAI tokens when doing Copilot work in Visual Studio. It's really easy to set up a "custom" model. The consistency is hard to beat, and I can use the latest model on day one. I also get to see how the magic happens in my provider logs: every token accounted for.
It was so much cheaper! I subscribed to the monthly plan instead of the yearly one, thinking the deal wouldn’t last. It lasted a bit longer than expected.
I'm fine with it, seeing as I can use my student email and get free usage
I don't know what they have done to Claude, but when used through Copilot it's truly awful compared to using it straight from the API.
I have always just used the API, but I decided to give Copilot a go over the weekend because of the cheap price. And I am seeing weird behavior like I have never seen before... It will somehow fail to use the file-editing tool and then spend an absolutely huge amount of time/tokens building a Python script to apply the edit in a subprocess... And it will spin its wheels on stuff the API routinely gets right in one shot.
This might have been bad timing. The Copilot API broke things last weekend, which caused a lot of tool calls in various agent harnesses to start failing, like the edit tool.
Example Zed issue: https://github.com/zed-industries/zed/issues/54219?issue=zed...
access to all of the latest and greatest models for half the price of a single company's basic plan is (or rather, was) a very compelling option
The value add for me is that I can use the web UI to start chatting about and drafting stuff on my phone while I'm commuting to work.
Because I can swap between multiple models at the same time and ask them to rubber-duck against each other? If anything, I'd like more models in GitHub
one subscription for access to most of the models..
I was accounting for that in the 1% of value. I don't see a ton of value in this for development; you end up just always using the smartest model, maybe tuning subagents to a slightly dumber but much faster model. You really only need one subscription to the provider of the smartest model, with maybe 30 minutes of setup time to switch over if SOTA ever swings back to OpenAI.
Except Copilot doesn't bill you per token like all those companies do; they bill you per prompt, at least Copilot in Visual Studio 2026, which is insane to me. Are they just hosting all those models and able to reduce the costs of doing so?
No, they are taking a massive L. That's why they paused new sign-ups.
Just for context on the insanity: they allow recursive subagents to, I believe, 5 levels deep.
You can write a prompt telling Copilot to dig through a codebase, with one subagent per file and one recursive subagent per function, to do some complex codebase-wide audit. If you use Opus 4.7 to do this, it consumes a grand total of 0.5% of a Pro+ plan.
That's why this paragraph is here:
> it’s now common for a handful of requests to incur costs that exceed the plan price
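To see why per-prompt billing blows up like this, compare it against metered API pricing. A toy calculation; every number below (the flat per-request price, the per-token rates, the token counts) is a made-up illustration, not an actual Microsoft or Anthropic rate:

```python
# Toy comparison of per-prompt vs. per-token billing.
# All numbers are illustrative assumptions, not real vendor prices.

PRICE_PER_PROMPT = 0.04      # hypothetical flat cost of one "premium request"
PRICE_PER_MTOK_IN = 5.00     # hypothetical $/million input tokens
PRICE_PER_MTOK_OUT = 25.00   # hypothetical $/million output tokens

def per_token_cost(tokens_in: int, tokens_out: int) -> float:
    """What the same work would cost on metered (per-token) API pricing."""
    return (tokens_in / 1e6) * PRICE_PER_MTOK_IN \
         + (tokens_out / 1e6) * PRICE_PER_MTOK_OUT

# A codebase-wide audit fanning out into recursive subagents might chew
# through tens of millions of tokens, yet still count as a single prompt.
tokens_in, tokens_out = 40_000_000, 2_000_000

print(f"per-prompt billing: ${PRICE_PER_PROMPT:.2f}")
print(f"per-token billing:  ${per_token_cost(tokens_in, tokens_out):.2f}")
```

Under these assumed rates the gap is several orders of magnitude, which is exactly the shape of subsidy the quoted paragraph is describing.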
I wonder how many of those requests are "necessary", or end up being more correct/efficient than a single agent going through the tasks linearly.
No; like every other provider, they're just losing money and hoping this will someday magically become profitable
1. They heavily subsidized their plans vs. paying for API.
2. They allowed me to use the subscription in every tool I wanted.
3. It covered both Anthropic and OpenAI.
I also just saw:
> Claude Code to be removed from Pro Tier?
> https://news.ycombinator.com/item?id=47855565
Some Opus models were free on Copilot, and in my country you cannot attach a repo to Gemini, that is limited to their premium offerings.
Which Opus models were free on Copilot?
> if they don't have to.
That's the only reason.
In many enterprises you'd need to be very lucky to get an approval for any service that doesn't come from MS.
It makes enterprise deployments much easier because most orgs already have github enterprise.
I have thought about making a product out of something I'm building and pricing it as a percentage on top of whatever I could resell Anthropic or OpenAI (or whatever) tokens for. I get that this may be unpopular; maybe I should just stick with BYO key.