It feels far too early for a protocol that's barely a year old, and still in so much turbulence, to be donated into its own foundation under the LF.
A lot of people don't realize this, but the foundations that roll up under the LF have revenue pipelines that are supported by those foundations' events (KubeCon brings in a LOT of money for the CNCF), courses, certifications, etc. And, by proxy, the projects support those revenue streams for the foundations they're in. The flywheel is _supposed_ to be that companies donate to the foundation, those companies support the projects with engineering resources, they get a booth at the event for marketing, and the LF can ensure the health and well-being of the ecosystem and foundation through technical oversight committees, elections, a service desk, owning the domains, etc.
I don't see how MCP supports that revenue stream, nor does it seem like a good idea at this stage: why get a certification for "Certified MCP Developer" when the protocol is evolving so quickly and we've yet to figure out how OAuth is going to work in a sane manner?
A mature project like Kubernetes becoming the backbone of a foundation, as it did with the CNCF, makes a lot of sense: it was a relatively proven technology at Google that had a lot of practical use cases for the emerging world of "cloud" and containers. MCP, at least for me, has not yet proven its robustness as a mature and stable project: I'd put it into the "sandbox" category of projects that are still rapidly evolving and proving their value. I would have much preferred for Anthropic and a small strike team of engaged developers to move fast and fix a lot of the issues in the protocol vs. it getting donated and slowing to a crawl.
At the same time, the protocol's adoption has been 10x faster than Kubernetes', so by that metric it actually makes sense to donate it now and let other actors in. For instance, without this, Google would never fully commit to MCP.
Comparing Kubernetes to what amounts to a subdirectory of shell scripts and their man pages is... brave?
Also IIRC, K8s was perhaps less than 2 years old before it was accepted into the CNCF.
So what if Google doesn't commit? If MCP is so good, it can stand without them.
I don't see a future in MCP; this is grandstanding at its finest.
This is a land grab and not much else.
Isn't MCP already older than Kubernetes was when it was donated to the CNCF?
It really feels to me that MCP is a fad. Tool calling seems like the overwhelming use case, but a dedicated protocol that goes through arbitrary runtimes is massive overkill
I've been involved with a few MCP servers. MCP seems like an API designed specifically for LLMs/AIs to interact with.
Agree that tool calling is the primary use case.
Because of context window limits, a 1:1 mapping of REST API endpoints to MCP tools is usually the wrong approach, even though LLMs/agents are very good at figuring out the right API call to make.
So you can build on top of APIs or other business logic to present a higher-level workflow.
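As a rough sketch of what I mean (using the Python MCP SDK's FastMCP helper; the API host and endpoints are made up for illustration):

```python
# Sketch only: a single MCP tool that wraps two hypothetical REST calls
# (api.example.com, /customers, /orders are invented for illustration).
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("orders")

@mcp.tool()
def customer_order_summary(customer_id: str) -> str:
    """Summarize a customer's five most recent orders in plain text.

    Wraps the customer-lookup and order-listing endpoints so the agent
    makes one tool call instead of reasoning about several raw API calls.
    """
    with httpx.Client(base_url="https://api.example.com") as client:
        customer = client.get(f"/customers/{customer_id}").json()
        orders = client.get(
            f"/customers/{customer_id}/orders", params={"limit": 5}
        ).json()
    lines = [f"Customer: {customer['name']}"]
    lines += [f"- order {o['id']}: {o['status']}" for o in orders]
    return "\n".join(lines)

if __name__ == "__main__":
    mcp.run()
```

The agent sees one well-described tool instead of half a dozen raw endpoints.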
But many of the same concerns apply to MCP servers as they did to REST APIs, which is why we're seeing an explosion of gateways and other management software for MCP servers.
I don't think it is a fad, as it is gaining traction and I don't see what replaces it for a very real use case: tool calling by agents/LLMs.
> MCP seems like an API designed specifically for LLMs/AIs to interact with
I guess I'm confused now; I thought that's exactly what it is.
I am more interested in how MCP can change human interaction with software.
Practical example: there exists an MCP server for Jira. Connect that MCP server to e.g. Claude and then you can write prompts like this:
"Produce a release notes document for project XYZ based on the Epics associated to version 1.2.3"
or
"Export to CSV all tickets with worklog related to project XYZ and version 1.2.3. Make sure the CSV includes these columns ....."
Especially the second example totally removes the need for the CSV export functionality in Jira. Now imagine a scenario in which your favourite AI is connected via MCP to different services. You can mix and match information from all of them.
Alibaba, for example, is making MCP servers for all of its user-facing services (Alibaba mail, cloud drive, etc.).
A chat UI powered by the appropriate MCP servers can provide a lot of value to regular end users and make it possible for people to use their own data easily in ways that earlier would require dedicated software solutions (exports, reports). People could use software for use cases that the original authors didn't even imagine.
I bet it would work the same with a REST API and any kind of spec, be it OpenAPI or even text files. From my humble experience.
How does it remove the need for CSV export? The LLM can make mistakes, right? Wouldn’t you want the LLM calling a deterministic CSV export tool rather than trying to create a CSV on its own?
I'm kind of in the same boat; I'm probably missing something big, but this seems like a lot of work to serve a JSON file with a URL.
I have been creating an MCP server over the past week or so. Based on what I have seen first-hand, an MCP server can give much richer context to the AI engine just by using very verbose descriptions in the functions. When the AI tool (Claude Desktop, Gemini, etc.) connects to the server, it examines the descriptions in each function and gets much better context on how to use the tool. I don't know if an API can do the same. I have been very, very impressed by how much Claude can do with a good MCP server.
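For anyone curious what "examines the descriptions" means mechanically, here's a minimal client-side sketch with the Python MCP SDK (assuming a stdio server in a placeholder my_server.py):

```python
# Rough sketch of the client side: connect to an MCP server over stdio and
# read each tool's name and (verbose) description.
# "my_server.py" is a placeholder for whatever server you wrote.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

params = StdioServerParameters(command="python", args=["my_server.py"])

async def main() -> None:
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.list_tools()
            for tool in result.tools:
                # This description text is exactly the context the model
                # gets about how and when to call the tool.
                print(f"{tool.name}: {tool.description}")

asyncio.run(main())
```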
Can you not just use verbose descriptions in your swagger document?
MCP is a universal API - a lot of web services are implementing it, this is the value it brings.
Now there are CLI tools which can invoke MCP endpoints, since agents in general fare better with CLI tools.
But like, it's just OpenAPI with an endpoint for getting the schema, so how is that more universal than OpenAPI?
What sort of structure would you propose to replace it?
What bodies or demographics could be influential enough to carry your proposal to standardization?
Not busting your balls - this is what it takes.
Why replace it at all? Just remove it. I use AI every day and don't use MCP. I've built LLM powered tools that are used daily and don't use MCP. What is the point of this thing in the first place?
It's just a complex abstraction over a fundamentally trivial concept. The only issue it solves is if you want to bring your own tools to an existing chatbot. But I've not had that problem yet.
There’s nothing special about LLM tools. They’re really just script invocations. A command runner like `just` does everything you need, and makes the tools available to humans.
I wrote a bit on the topic here: https://tombedor.dev/make-it-easy-for-humans/
Dynamic code generation for calling APIs; not sure what the fancy term for this approach is.
You don't need to replace it. Just please stop using it.
If for nothing else than pure human empathy.
Anthropic wants to ditch MCP and not be on the hook for it in the future -- but lots of enterprises haven't realized it's a dumb, vibe-coded standard that is missing so much. They need to hand the hot potato off to someone else.
Even Anthropic walked back on it recently with programmatic tool calling.
They haven't really. One of their latest blog posts is about how to retrofit the "skills" approach to MCP[0], which makes sense, as the "skills" approach doesn't itself come with solutions for dynamic tool discovery/registration.
[0]: https://www.anthropic.com/engineering/advanced-tool-use
Contrary to what a lot of the other comments here are claiming, I don't think that's the mark of death for MCP and Anthropic trying to get rid of it.
From the announcement and from keeping up with the RFCs for MCP, it's pretty obvious that a lot of the main players in AI are actively working with MCP and are trying to advance the standard. At some point or another those companies probably (more or less forcefully) approached Anthropic to put MCP under a neutral body, as pouring resources long-term into a standard that your competitor controls is a dumb idea.
I also don't think the Linux Foundation has become the same "donate your project to die" dumping ground that the Apache Software Foundation was for some time (especially for Facebook). There are some implications that come with it, like conference-ification and establishing certificate programs, which aren't purely good, but overall most multi-party LF/CNCF projects have been doing fairly well.
Interestingly, Google already donated its own Agent2Agent (A2A) protocol to the Linux Foundation earlier this year.
Wow, I have never even heard of that one, and I feel I have been following the topic quite closely.
> "Since its inception, we’ve been committed to ensuring MCP remains open-source, community-driven and vendor-neutral. Today, we further that commitment by donating MCP to the Linux Foundation."
Interesting move by Anthropic! Seems clever, although I'm curious whether MCP will succeed long-term given this.
"Since it's inception"
so for like a year?
Will the Tesla-style connector succeed long-term?
If they’re “giving it away” as a public good, it has a much better chance of succeeding than if they attempted to lock such a “protocol” away behind their own platform.
MCP is just a protocol - how could it not remain open source? It's literally just JSON-RPC. Implementations are what are open source or not.
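To illustrate, here's roughly what a tools/list exchange looks like on the wire (a sketch; the field shapes are illustrative, not exhaustive):

```python
# Roughly what a "list the tools" exchange looks like: plain JSON-RPC 2.0
# messages. Shapes are based on my reading of the spec; treat as illustrative.
import json

request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# A server would answer with something shaped roughly like this:
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_weather",
                "description": "Get the current weather for a city.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ]
    },
}

print(json.dumps(request))
print(json.dumps(response, indent=2))
```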
The HDMI forum would like a word/to sue your pants off.
Ref: https://arstechnica.com/gaming/2025/12/why-wont-steam-machin...
Is the Linux Foundation basically a dumping ground for projects that corporations no longer want to finance but still keep control over?
Facebook still has de facto control over PyTorch.
It has little to do with financing. In addition to the development cost, there is now also a membership fee.
What a donation to the Linux Foundation offers is that the trademarks, the code for the SDKs, and ownership of the organization are now held by a neutral entity. For big corporations these are real concerns, and that's what the LF offers.
It would be a crazy antitrust violation for all of these companies to work together on something closed source - e.g. if Facebook/Google/Microsoft all worked on some software project and then kept it for themselves. By hosting it at a neutral party with membership barriers but no technical barriers (you need to pay to sit on the governing board, but you don't need to pay to use the technology), you can have collaboration without FTC concerns. Makes a ton of sense and really is a great way to keep tech open.
Say MCP is a dead end without saying it's dead.
I really like Claude models, but I abhor the management at Anthropic. Kinda like Apple.
They have never open-sourced any models, not even once.
Is there a reason they should? I mean, they're a for-profit company.
Anthropic is a Public Benefit Corporation. Its goals are AI "for the long-term benefit of humanity," which seems like it would benefit humans a lot more if it were openly available.
https://www.anthropic.com/company
I'm pretty sure there are more MCP servers than there are users of MCP servers.
Foundation release: https://aaif.io/press/linux-foundation-announces-the-formati...
MCP is overly complicated. I'd rather use something like https://utcp.io/
Kinda weird/unexpected to see Goose by Block as a founding partner. I am aware of them but did not realize their importance when it comes to MCP.
I think the focus should be on more and better APIs, not MCP servers.
Agreed, I too wish for a better horse.
OpenAI post: https://news.ycombinator.com/item?id=46207383
AGENTS.md as a “project” is hilarious to me. Thank you so much OpenAI for “donating” the concept of describing how to interact with software in a markdown file. Cutting edge stuff!
A lot of this stuff seems silly but is important to clear the legal risk. There is so much money involved that parasites everywhere are already drafting patent troll lawsuits. Limiting the attack surface with these types of IP donations is a public service that helps open source projects and standards survive.
There appears to be a lot of confusion in the comments around what MCP is and how it differs from an API.
I've done a deep dive here before.
Hope this clears it up: https://glama.ai/blog/2025-06-06-mcp-vs-api
That "deep dive" is an apples-to-oranges comparison. MCP is also a "HTTP API" that you so criticize.
You also somehow consistently think LLM making tool calls against an OpenAPI spec would result in hallucination, while tool calls are somehow magically exempt from such.
All of this writing sounds like you picked a conclusion and then tried to justify it.
There's no reason an "Agentic OpenAPI" marked as such in a header wouldn't be just as good as MCP and it would save a ton of engineering effort.
Thanks.
The video link seems to be missing in the section: Bonus: MCP vs API video
MCP's post: http://blog.modelcontextprotocol.io/posts/2025-12-09-mcp-joi...
This sounds more like Anthropic giving up on MCP than it does a good-faith donation to open source.
Anthropic will move on to bigger projects, and other teams/companies will be stuck with the sunk-cost fallacy, trying to get MCP to work for them.
Good luck to everyone.
I hope MCP will prosper inside this new structure! Block donating Goose is a bit more worrisome - it feels like they are throwing it away into the graveyard.
I thought Skills were the new context resolver.
I can specify and use tools with an LLM without MCP, so why do I need MCP?
Depends a bit on where your agent runs and how/if you built it.
I'm not arguing about whether one or the other is better, but I think the distinction is the following:
If an agent understands MCP, you can just give it the MCP server: it will get the tool definitions and instructions from there.
Tool calling happens at the level of calling an LLM with a prompt: you need to include the tool definitions in the call yourself, before that point.
So you have two extremes:
- You build your own agent (or LLM-based workflow, depending on what you want to call it) and you know what tools to use at each step and build the tool definitions into your workflow code.
- You have a generic agent (most likely a loop with some built-in tools) that can also work with MCP, and you just give it a list of servers. It will get the definitions at execution time (rough sketch of the contrast below).
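To make the first extreme concrete, here's a rough sketch with the Anthropic Python SDK; the tool name, schema, and model ID are placeholders:

```python
# Extreme 1: tool definitions are hard-coded in the workflow and sent with
# every request. Tool name/schema and model ID are illustrative only.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

tools = [
    {
        "name": "get_ticket",
        "description": "Fetch a ticket by its ID from the issue tracker.",
        "input_schema": {
            "type": "object",
            "properties": {"ticket_id": {"type": "string"}},
            "required": ["ticket_id"],
        },
    }
]

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "Summarize ticket ABC-123"}],
)
print(response.content)

# Extreme 2 (MCP): an agent would instead call each server's tools/list at
# runtime and assemble this same `tools` array from whatever comes back.
```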
This also gives MCP maintainers/providers the ability/power (or attack surface) to alter the capabilities without your involvement.
Of course you could also imagine some middle ground solution (TCDCP - tool calling definition context protocol, lol) that serves as a plugin-system more at the tool-calling level.
But I think MCP has some use cases. Depending on your development budget, it might make sense to use plain tool calling instead.
I think one general development pattern could be:
- Start with an expensive generic agent that gets MCP access.
- Later (if you're a big company) streamline this into specific tool-calling workflows with probably task-specific fine-tuning to reduce cost and increase control (Later = more knowledge about your use case)
I've rarely seen any non-elementary use cases where just giving access to an MCP server just works; oftentimes you need to guide agents with updated system prompts or instructions. Unless you are primarily using MCP for remote environments (coding, etc., or a person's desktop), its advantages over normal tool calling don't seem to scale with complexity.
Yeah. This is the open source equivalent to regulatory capture.
Now open source Claude Code. It's silly to have it in this semi-closed obfuscated state, that does absolutely nothing to stop a motivated reverse engineering effort, but does everything to slow down innovation.
Especially in a setting where e.g. Gemini CLI is open source, and Goose seems to be an actually open source project.
I think them controlling the Claude Code CLI that tightly is 1) a way to make the limits of the fixed-price subscriptions more manageable for them, somehow, and 2) a way to let them experiment with prompts and model interactions slightly ahead of their competition.
aka. "It's not our problem now."
"Look ma, I'm a big boy project now"
Donate?! Pshawh………more like vibe manage it yourself lol
Leaving aside the mediocre reputation of the Linux Foundation, is it true that everyone is moving away from MCP and towards Claude Skills at this point?
I think we need to separate what we do in development vs. what happens in production environments. In development using skills makes a lot of sense. It's fast and efficient, and I'm already in a sandbox. In production (in my case a factory floor) allowing an agent to write and execute code to access data from a 3rd party system is a security nightmare.
Didn't see any company moving from MCP to Skills in the past 2 months. Skills is great but it's definitely not an MCP competitor
No? MCP works everywhere
Mediocre?
gg anthropic