Comment by noodletheworld
6 days ago
> MCP is the absolute best and most effective way to integrate external tools into your agent sessions
Nope.
The best way to interact with an external service is an api.
It was the best way before, and it's the best way now.
MCP doesn't scale and it has a bloated unnecessarily complicated spec.
Some MCP servers are good, but a new, bad way of interacting with external services is not the best way of doing it. The assertion that it is, in general, the best is what I refer to as "works for me" Kool-Aid.
…because it probably does work well for you.
…because you are using a few, good, MCP servers.
However, that doesn't scale, for all the reasons listed by the many detractors of MCP.
It's not that it can't be used effectively; it's that, in general, it's a solution that has been incompetently slapped on by many providers who don't appreciate how to do it well, and even then, it scales badly.
It is a bad solution for a solved problem.
Agents have made the problem MCP was solving obsolete.
You haven't actually done that, have you? If you had, you would immediately understand the problems MCP solves on top of just trying to use an API directly:
- easy tool calling for the LLM rather than having to figure out how to call the API based on docs only
- authorization can be handled automatically by MCP clients. How are you going to give a token to your LLM otherwise? And if you do, how do you ensure it does not leak the token? With MCP the token is only usable by the MCP client and the LLM does not need to see it
- lots more things MCP lets you do, like bundling resources and letting the server request out-of-band input from users which the LLM should not see
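The token-separation point can be sketched in a few lines. This is a hypothetical illustration, not real MCP client code: the model only ever emits a tool name and arguments, while the client process holds the credential and attaches it when performing the actual request.

```python
import os

def model_requests_tool_call():
    # Stand-in for an LLM response: just a tool name and arguments.
    # The credential never appears in the model's context.
    return {"name": "list_events", "arguments": {"day": "Saturday"}}

def client_executes(call, token):
    # The client attaches the credential when it performs the real request
    # (simulated here; a real client would make an authenticated HTTP call).
    headers = {"Authorization": f"Bearer {token}"}
    return {"status": "ok", "tool": call["name"], "authed": "Authorization" in headers}

call = model_requests_tool_call()
result = client_executes(call, token=os.environ.get("CAL_TOKEN", "example-secret"))
print(result["status"])  # the model sees only this result, never the token
```

The point of the sketch is the boundary: `model_requests_tool_call` and `client_executes` run on opposite sides of it, and only the latter ever touches the token.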
> easy tool calling for the LLM rather than having to figure out how to call the API based on docs only
I think the best way to run an agent workflow with custom tools is to use a harness that allows you to just, like, write custom tools. Anthropic expects you to use the Agent SDK with its “in-process MCP server” if you want to register custom tools, which sounds like a huge waste of resources, particularly in workflows involving swarms of agents. This is abstraction for the sake of abstraction (or, rather, market share).
Getting the tool built in the first place is a matter of pointing your agent at the API you'd like to use and having it write the tool. It's an easy one-shot even for small OSS models. And then you know exactly what that tool does: you don't have to worry about some update introducing a breaking change in your provider's MCP service, and you can control every single line of code. Meanwhile, every time you call a tool registered by an MCP server, you're trusting that it does what it says.
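For reference, a hand-written tool in the shape Anthropic's Messages API expects for its `tools` parameter looks like this. The `get_weather` tool, its fields, and the stubbed handler are made up for illustration; a real handler would call the provider's HTTP API.

```python
import json

# Tool definition in the Messages API `tools` format:
# name, description, and a JSON Schema for the inputs.
get_weather_tool = {
    "name": "get_weather",
    "description": "Fetch the current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def get_weather(city: str) -> str:
    # Stubbed so the sketch stays self-contained; in a real tool this
    # would hit the weather provider's API and return its response.
    return json.dumps({"city": city, "temp_c": 21})

print(get_weather("Oslo"))
```

Because you wrote both the schema and the handler, there is no third-party server between the model and the API whose behavior you have to trust.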
> authorization can be handled automatically by MCP clients. How are you going to give a token to your LLM otherwise??
env vars or a key vault
> And if you do, how do you ensure it does not leak the token?
env vars or a key vault
An authnz-aware egress proxy that also puts guardrails on MCP behavior?
Gee, that's starting to sound like a whole "bloated" framework...
Let's say I made a calendar app that stores appointments for you. It's local, installed on your system, and the data is stored in some file in ~/.calendarapp.
Now let's say you want all your Claude Code sessions to use this calendar app so that you can always say something like "ah yes, do I have availability on Saturday for this meeting?" and the AI will look at the schedule to find out.
What's the best way to create this persistent connection to the calendar app? I think it's obviously an MCP server.
In the calendar app I provide a built-in MCP server that gives the following tools to agents: read_calendar, and update_calendar. You open Claude Code and connect to the MCP server, and configure it to connect to the MCP for all sessions - and you're done. You don't have to explain what the calendar app is, when to use it, or how to use it.
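The bodies of those two tools could be as small as this. A hypothetical sketch only: the storage path and event schema are assumptions, and in the real app these functions would be registered as MCP tools rather than called directly.

```python
import json
import tempfile
from pathlib import Path

# Assumed storage: a JSON file of events (the real app uses ~/.calendarapp).
# A temp dir is used here so the sketch is self-contained and repeatable.
STORE = Path(tempfile.mkdtemp()) / "calendar.json"

def read_calendar() -> list:
    """Return all stored events."""
    if not STORE.exists():
        return []
    return json.loads(STORE.read_text())

def update_calendar(title: str, when: str) -> dict:
    """Append an event and persist it."""
    events = read_calendar()
    event = {"title": title, "when": when}
    events.append(event)
    STORE.write_text(json.dumps(events))
    return event

update_calendar("meeting", "Saturday 10:00")
print(len(read_calendar()))  # 1
```

The MCP server's job is then just to advertise these two functions, with descriptions, to any connected agent.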
Explain to me a better solution.
Why couldn't the calendar app expose in an API the read_calendar and update_calendar functionalities, and have a skill 'use_calendar' that describes how to use the above?
Then, the minimal skill descriptions are always in the model's context, and whenever you ask it to add something to the calendar, it will know to fetch that skill. It feels very similar to the MCP solution to me, but with potentially less bloat and no obligation to deal with MCP? I might be missing something, though.
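For concreteness, the skill could be a small markdown file along these lines. The frontmatter shape follows Claude's skill format, but the skill name and the API endpoints are hypothetical:

```markdown
---
name: use-calendar
description: Read and update the local calendar app when the user asks about appointments or availability.
---

The calendar app exposes a local API:

- To check availability, call `read_calendar` (GET /events).
- To add an appointment, call `update_calendar` (POST /events) with a title and time.
```

Only the one-line description stays in context; the rest is loaded when the model decides the skill is relevant.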
Why would I do that if the MCP already handles it? The MCP exposes the API with those tools, it explains what the calendar app is and when to use it.
Connected MCP tools are also always in the model's context, and it works for any AI agent that supports MCP, not just Claude Code.
You realize you can just create your own tools and wire them up directly using the Anthropic or OpenAI APIs etc?
It's not a choice between Skills or MCP, you can also just create your own tools, in whatever language you want, and then send in the tool info to the model. The wiring is trivial.
I write all my own tools bespoke in Rust and send them directly to the Anthropic API. So I have tools for reading my email, my calendar, writing and searching files, etc. It means I can have super fast tools, reduce context bloat, and keep things simple without needing to go into the whole mess of MCP clients and servers.
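The "trivial wiring" amounts to a name-to-function map plus a dispatch step (sketched in Python for brevity; the commenter's version is Rust). The response shape mirrors Anthropic's `tool_use` content blocks; the `read_email` tool itself is made up and stubbed.

```python
def read_email(limit: int) -> list:
    # Stub standing in for a real local tool implementation.
    return [f"message {i}" for i in range(limit)]

# Registry: tool name -> local function. Adding a tool is one line here
# plus its schema in the request payload.
TOOLS = {"read_email": read_email}

# Stand-in for a tool_use block returned by the model.
block = {"type": "tool_use", "name": "read_email", "input": {"limit": 2}}

if block["type"] == "tool_use":
    result = TOOLS[block["name"]](**block["input"])
    print(result)  # ['message 0', 'message 1']
```

The result then goes back to the model as a `tool_result` block, and the loop repeats; no MCP client or server sits in the middle.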
And btw, I wrote my own MCP client and server from the spec about a year ago, so I know the MCP spec backwards and forwards, it's mostly jank and not needed. Once I got started just writing my own tools from scratch I realised I would never use MCP again.