What if you don't need MCP at all?


So I don't disagree with any of the criticisms of MCPs, but no one here has mentioned why they are useful, and I'm not sure that everyone is aware that MCP is actually just a wrapper over an existing cli/API:

1. Claude Code is aware of what MCPs it has access to at all times.

2. Adding an MCP is like adding to the agent's actuators/vocabulary/tools: unlike cli tools or APIs you don't have to constantly remind it what MCPs it has available, and "hey you have access to X" and "hey make an MCP for X" take the same level of effort on the part of the user.

3. This effect is _significantly_ stronger than putting info about available API/cli into CLAUDE.md.

4. You can almost trivially create an MCP that does X by asking the agent to create an MCP that does X. This saves you from having to constantly remind an agent it can do X.

NOTE: I cannot stress enough that this property of MCPs is COMPLETELY ORTHOGONAL to the nutty way they are implemented, and I am IN NO WAY defending the implementation. But currently we are talking past the primary value prop.

I would personally prefer some other method but having a way to make agents extensible is extremely useful.

EXAMPLE:

"Make a bash script that does X."

<test manually to make sure it works>

"Now make an MCP called Xtool that uses X."

<restart claude>

<claude is now aware it can do Xtool>

  • >This effect is _significantly_ stronger than putting info about available API/cli into CLAUDE.md.

    No it's not.

    Honestly this conversation is extremely weird to me because somehow people are gravely misunderstanding what MCP even purports to do, let alone what it actually CAN do in the most ideal situation.

    It is a protocol, and while the merits of that protocol are certainly under active discussion, that's irrelevant because you keep ascribing qualities to the protocol that it cannot deliver on.

    Just some facts to help steer this conversation correctly, and maybe help your understanding of what is actually going on:

    * All LLMs/major models have function & tool calling built in.

    * Your LLMs/models do not have any knowledge of MCP, nor have they been trained on it.

    * MCP exists, or at least the claim is, to help standardize the LIFECYCLE of the tool call.

    * MCP does not augment or enhance the ability of LLMs in any form.

    * MCP does not allow you to extend agents. That's an implicit feature.

    * If you have access to "X" (using your example), you don't need anything that obeys the MCP standard.

    MCP at best is for developers and tool developers. Your model does not need an MCP server or client or anything else MCP-related to do what it has already been trained to do.

    >I would personally prefer some other method but having a way to make agents extensible is extremely useful.

    They already are. MCP does not help with this.

  • > 3. This effect is _significantly_ stronger than putting info about available API/cli into CLAUDE.md.

    What? Why?

    > unlike cli tools or APIs you don't have to constantly remind it what MCPs it has available

    I think I'm missing something, because I thought this is what MCP does, literally. It just injects the instructions about what tools it has and how to use them into the context window. With MCP it just does it for you rather than you having to add a bit to your CLAUDE.md. What am I misunderstanding?

  • MCP is simply a standardized RPC protocol for LLMs.

    That's it.

    The value is in all the usual features of standardization -- plug-and-play, observability, pass-through modifications, etc.
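
    To make "standardized RPC" concrete, the wire exchange is roughly this (a sketch following MCP's JSON-RPC framing; the tool itself is invented and the messages are abbreviated):

    ```
    // client → server: what tools do you have?
    {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

    // server → client: names, descriptions, and input schemas the host shows the model
    {"jsonrpc": "2.0", "id": 1, "result": {"tools": [
      {"name": "get_issue",
       "description": "Fetch an issue by id",
       "inputSchema": {"type": "object", "properties": {"id": {"type": "string"}}}}]}}

    // client → server: the host translated the model's tool call into this request
    {"jsonrpc": "2.0", "id": 2, "method": "tools/call",
     "params": {"name": "get_issue", "arguments": {"id": "PROJ-123"}}}
    ```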

  • Also not disagreeing with your argument. Just want to point out that you can achieve the same by putting minimal info about your CLI tools in your global or project specific CLAUDE.md.

    The only downside here is that it's more work than `claude mcp add x -- npx x@latest`. But you get composability in return, as well as the intermediate tool outputs not having to pass through the model's context.
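
    For anyone weighing the two approaches, the CLAUDE.md version of this is just a short list (wording invented for illustration; `gh` and `sentry-cli` are the kinds of tools mentioned elsewhere in this thread):

    ```markdown
    ## CLI tools available in this repo
    - `gh`: GitHub CLI; use `gh pr view` / `gh issue list`, run `gh --help` if unsure
    - `sentry-cli`: query Sentry; every subcommand supports `--help`
    - `scripts/export-report.sh <from> <to>`: dumps analytics for a date range as CSV on stdout
    ```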

  • 1.) Awareness doesn’t mean they will use it. And in practice they often don’t use them.

    2.) “ unlike cli tools or APIs you don't have to constantly remind it what MCPs it has available” - this doesn’t match my experience. In fact, bash commands are substantially more discoverable.

    3.) Again, this doesn’t match my experience and the major providers recommend including available MCP tools in system prompts/CLAUDE.md/whatever.

    4.) Can’t speak to this as it’s not part of my workflow for the previous reasons.

    The only useful MCP for me is Playwright for front end work.

    • Chrome Devtools is similarly an extremely high value MCP for me.

      I would agree that if you don't find they add discoverability then MCPs would have no value for you and be worse than cli tools. It sounds like we have had very opposite experiences here.


MCP was a really shitty attempt at building a plugin framework that was vague enough to lure people in and then allow other companies to build plugin platforms to take care of the MCP nonsense.

"What is MCP, what does it bring to the table? Who knows. What does it do? The LLM stuff! Pay us $10 a month thanks!"

LLMs have function / tool calling built into them. No major models have any direct knowledge of MCP.

Not only do you not need MCP, but you should actively avoid using it.

Stick with tried and proven API standards that are actually observable and secure and let your models/agents directly interact with those API endpoints.

  • > No major models have any direct knowledge of MCP.

    Anthropic and OpenAI both support MCP, as does the OpenAI Agents SDK.

    (If you mean the LLM itself, it is "known" at least as much as any other protocol.)

  • probably easier to just tell people: You want MCP? Add a "description" field to your rest API that describes how to call it.

    That's all it's doing. Just plain ole context pollution. World could be better served by continuing to build out the APIs that exist.
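
    In OpenAPI terms that's roughly the following (a sketch; the endpoint and fields are invented):

    ```yaml
    paths:
      /issues:
        get:
          summary: List unresolved issues
          description: >
            Returns unresolved issues, newest first. Scope with the `project`
            query parameter; results are paginated via `cursor`.
          parameters:
            - name: project
              in: query
              schema: { type: string }
    ```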

    • Also, keep your API small, as all the tool calls, DTOs and user messages (e.g. workflow recipes) add up to big context windows and accuracy confusion, at least in the latest models. I hope that gets resolved.

    • yesss, and OpenAI tried this first when they were going to do a “GPT store”. But REST APIs tend to be complicated because they’re supporting apps. MCP, when it works, is very simple functions

      in practice it seems like command line tools work better than either of those approaches


    • > Add a "description" field to your rest API that describes how to call it.

      Isn't that Swagger/gRPC etc.?

  • Yeah there's no there there when it comes to MCP. It's crazy to me that the world bought into the idea when the "spec" literally boils down to "have your server give the LLM some json". Just illustrates how powerful it is to attach names to things, especially in a hypestorm in which everyone is already frothing at the mouth and reason is hard to come by. Give people some word they can utter to help them sound like they're on the "bleeding edge" and they'll buy into it even if it's totally pointless.

So far I have seen two genuinely good arguments for the use of MCPs:

* They can encapsulate (API) credentials, keeping those out of reach of the model (sketched after this list),

* Contrary to APIs, they can change their interface whenever they want and with little consequences.
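
A minimal sketch of the credentials point, assuming the Python `fastmcp` package and an invented upstream API: the key lives in the server's environment and never enters the model's context.

```python
import os
import httpx
from fastmcp import FastMCP

mcp = FastMCP("issue-tracker")
API_KEY = os.environ["TRACKER_API_KEY"]  # read server-side; the model never sees it

@mcp.tool()
def list_issues(project: str) -> str:
    """List unresolved issues for a project."""
    r = httpx.get(
        "https://tracker.example.com/api/issues",  # hypothetical endpoint
        params={"project": project, "status": "unresolved"},
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    r.raise_for_status()
    return r.text

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```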

  • > * Contrary to APIs, they can change their interface whenever they want and with little consequences.

    I already made this argument before, but that's not entirely right. I understand that this is how everybody is doing it right now, but that in itself causes issues for more advanced harnesses. I have one that exposes MCP tools as function calls in code, and it encourages the agent to materialize composed MCP calls into scripts on the file system.

    If the MCP server decides to change the tools, those scripts break. That is also a similar issue for stuff like what Vercel is advocating for [1].

    [1]: https://vercel.com/blog/generate-static-ai-sdk-tools-from-mc...

  • What's the alternative design where the model has access to API credentials?

    • > What's the alternative design where the model has access to API credentials?

      All sorts of ways this can happen, but it usually boils down to leaving them on disk or in an environment variable in the repo/dir(s) where the agent is operating.

I've noticed with AI people seem to want to latch onto frameworks. I think this happens because the field is changing quite quickly and it's difficult to navigate without being in it - offloading decisions to a framework is an attempt to constrain your complexity.

This occurred with langchain and now seems to be occurring with mcp. Neither of those really solved the actual problems that are difficult with deploying AI - creativity, context, manual testing, tool design etc. The owners of these frameworks are incentivized to drag people into them to attain some sort of vendor lock-in.

At my company we started building our tool based data scientist agent before MCP came out and it's working great.

https://www.truestate.io/

  • MCP is something that's filled with buzzwords and seems like something created solely so that you can be "sold" something. From what I actually gathered, it's basically somehow four things rolled into one:

    * A communication protocol, json-rpc esque except it can be done over stdio or via HTTP

    * A discovery protocol, like Swagger, to document the "tools" that an endpoint exposes and how it should be used

    * A tool calling convention, the specific sequence of tokens the LLM needs to output for something to be recognized as a tool call

    * A thin glue layer orchestrating all of the above: injecting the list of available tools into the LLM context, parsing LLM output to detect tool calls and invoke them with appropriate args, and inject results back into LLM context
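
    That last "glue layer" is small enough to sketch (names are invented; a real harness adds streaming, retries, and permission prompts):

    ```python
    def agent_loop(llm, mcp_client, user_message):
        # Discovery: tool names/descriptions/schemas from tools/list go into the context
        tools = mcp_client.list_tools()
        messages = [{"role": "user", "content": user_message}]

        while True:
            # The provider's native tool calling decides: answer, or request a tool
            reply = llm.chat(messages, tools=tools)
            messages.append(reply.as_message())
            if not reply.tool_calls:
                return reply.text

            # Glue: run each requested tool via tools/call and feed the result back
            for call in reply.tool_calls:
                result = mcp_client.call_tool(call.name, call.arguments)
                messages.append({"role": "tool", "tool_call_id": call.id,
                                 "content": result})
    ```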

    • > * A thin glue layer orchestrating all of the above: injecting the list of available tools into the LLM context, parsing LLM output to detect tool calls and invoke them with appropriate args, and inject results back into LLM context

      Yeah llm rules. You think there must be something more to it. There's not.

  • Frameworks are also a way to capture a part of the ecosystem and control it. Look at Vercel.

  • AI is in its "pre-React" state if you were to compare this with FE software development of 2008-2015

    • I think that's being generous, we haven't even had the Rails moment with AI yet. Shit, I'm not sure we've had the jQuery moment yet. I think we're still in the Perl+CGI phase.

  • AI has lots of this "fake it till you make it" vibe from startups. And unfortunately it wins - because these hustler guys get a lot of money from VCs before their tools are vetted by the developers.

  • Yeah, and just like the web space there will be a plethora of different frameworks out there all solving the same problems in their own slightly different, uniquely crappy ways and an entire pointless industry built around ceaselessly creating and rehashing and debating this needlessly bloated ecosystem of competing solutions will emerge and employ many "ai engineers".

    Outside of a few notable exceptions, the software industry has become such a joke.

  • >TrueState unburdens analytics teams from the repetitive analysis and accelerates the delivery of high-impact solutions.

    Ehh, that's pretty vague. How does it work?

    >Request demo

    Oh. Well how much is it?

    >Request pricing

    Oh never mind

    • It’s like the email scams that filter people out with bad spelling and obvious red flags. If someone makes it through those hurdles they’re probably a good prospect. You weren’t really thinking of buying it, were you?

Yeah, I'm still confused as to why so many people in "AI engineering" seem to think that MCPs are the key to everything.

They are great if you have a UI that you want and it needs a plugin system, obviously.

But the benefits become much more marginal for a developer of enterprise AI systems with predefined tool selections. They are actually getting overused in this space, if anything, sometimes with security as a primary casualty.

Mario has some fantastic content, and has really shaped how I think about my interface to coding tools. I use a modified version of his LLM-as-crappy-state-machine model (https://github.com/badlogic/claude-commands) for nearly all my coding work now. It seems pretty clear these days that progressive discovery is the way forward (e.g. skills), and using CLI tools rather than MCP really facilitates that. I've gone pretty far down the road of writing complex LLM tooling, and the more I do that the more the simplicity and composability is appealing. He has a coding agent designed along the same principles, which I'm planning to try out (https://github.com/badlogic/pi-mono/tree/main/packages/codin...).

Hey we actually just released rtrvr.ai, our AI Web Agent Chrome Extension, as a Remote MCP Server that obviates a lot of the setup you needed to do. We had the same intuition that the easiest way to scrape is through your own browser, and so we expose dedicated MCP tools to do actions, scrape pages, and execute arbitrary code in Chrome's built-in sandbox.

We give a copy/pasteable MCP url that you can use with your favorite agent/chatbot/site and give those providers browser context and allow them to do browser actions.

So compared to Playwright MCP and others that require you to run npx and can only be connected to local clients, with ours you just paste a url and can use with any client.

Check out our recent posts: https://news.ycombinator.com/item?id=45898043 https://www.youtube.com/watch?v=B4BTWNTuE-s

I like MCP for _remote_ services such as Linear, Notion, or Sentry. I authenticate once and Claude has the relevant access to access the remote data. Same goes for my team by committing the config.

Can I “just call the API”? Yeah, but that takes extra work, and my goal is to reduce extra work.

  • This is the key. MCP encapsulates tools, auth, instructions.

    We always need something for that - and it needs to work for non tech users too

select * from protocols # ipc, tcp, http, websockets, ...?

MCP and A2A are JSON-RPC schemas people follow to build abstractions around their tools. Agents can use MCP to discover tools, invoke them, and more. OpenAPI schemas are good alternatives to MCP servers today. In comparison to OpenAPI schemas, MCP servers are pretty new.

my fav protocol is TCP; I am a proud user of `nc localhost 9999`.

but not everyone has the same taste in building software.

https://github.com/cagataycali/devduck

I can see where Mario is coming from, but IMO MCP still has a place because it 1) solves authentication+discoverability, 2) doesn't require code execution.

MCP shines when you want to add external functionality to an agent quickly, and in situations where it's not practical to let an agent go wild with code execution and network access.

Feels like we're in the "backlash to the early hype" part of the hype cycle. MCP is one way to give agents access to tools; it's OK that it doesn't work for every possible use case.

  • Oh, I didn't intend this to come across as MCP being useless. I've written this from the perspective of someone who uses LLMs mostly for coding/computer tasks, where I found MCP to be less than ideal for my use cases.

    I actually think MCP can be a multiplier for non-technical users, were it not for some nits like being a bit too technical and the various security footguns many MCP servers hand you.

I still think it's better to have MCP; after all, it's unrealistic for any company to integrate every function into one product.

You don’t need formal tools. You only need a bash tool that can run shell scripts and cli tools!

Overwhelmed by Sentry errors recently I remembered sentry-cli. I asked the agent to use it to query for unresolved Sentry errors and make a plan that addresses all of them at once. Zeroed out my Sentry inbox in one Claude Code plan. All up it took about an hour.

The agent was capable of sussing out sentry-cli, even running it with --help to understand how to use it.

The same goes for gh, the github cli tool.

So rather than MCPs or function style tools, I highly recommend building custom cli tools (ie. shell scripts), and adding a 10-20 word description of each one in your initial prompt. Add --help capabilities for your agent to use if it gets confused or curious.
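
A sketch of the shape that works well for this; the endpoint and flags are invented, and the point is the one-line purpose comment plus a `--help` the agent can fall back on:

```bash
#!/usr/bin/env bash
# unresolved-errors: print unresolved errors from our tracker as JSON, newest first
set -euo pipefail

usage() {
  cat <<'EOF'
Usage: unresolved-errors [--project NAME] [--limit N]
Prints unresolved errors as JSON lines, newest first.
  --project NAME  only errors for this project (default: all)
  --limit N       cap the number of results (default: 50)
EOF
}

project="" limit=50
while [[ $# -gt 0 ]]; do
  case "$1" in
    --project) project="$2"; shift 2 ;;
    --limit)   limit="$2"; shift 2 ;;
    -h|--help) usage; exit 0 ;;
    *) usage >&2; exit 1 ;;
  esac
done

# hypothetical internal endpoint; swap in sentry-cli, gh, etc. as appropriate
curl -s "https://errors.internal.example/api/unresolved?project=${project}&limit=${limit}"
```

The prompt entry is then just: "unresolved-errors: lists unresolved production errors as JSON; supports --help."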

  • To add to this, agents view the world through sort of a "choose your own adventure" lens. You want your help output to basically "prompt" the agent, and provide it a curated set of options for next steps (ideally between 4-8 choices). If your CLI has more options than that, you want to break as much as possible into commands. The goal is to create a "decision tree" for the agent to follow based on CLI output.
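
    Concretely, that means a top-level `--help` that reads like a menu of next moves rather than a flag dump (contents invented):

    ```
    $ triage --help
    triage - work the error backlog for this repo

    Commands (pick one):
      triage list               show unresolved errors, newest first
      triage inspect <id>       full stack trace and breadcrumbs for one error
      triage plan               group errors by root cause and draft a fix plan
      triage resolve <id...>    mark errors resolved with a comment

    Run `triage <command> --help` for that command's options.
    ```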

Oh you're misunderstanding MCP here.

MCP was created so LLM companies can have a plugin system. So instead of them being the API provider, they can become the platform that we build apps/plugins for, and they become the user interface to end consumers.

  • what's the difference between that and those providers exposing an api?

    • MCP defines the API so vendors of LLM tools like cursor, claude code, codex etc don't all make their own bespoke, custom ways to call tools.

      The main issue is the disagreement on how to declare the MCP tool exists. Cursor, vscode, claude all use basically the same mcp.json file, but then codex uses `config.toml`. There's very little uniformity in project-specific MCP tools as well, they tend to be defined globally.
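
      For the curious, the same server ends up declared twice in practice, roughly like this (names illustrative; the exact keys vary by client and version):

      ```jsonc
      // .mcp.json (Claude Code / Cursor / VS Code style)
      {
        "mcpServers": {
          "playwright": { "command": "npx", "args": ["@playwright/mcp@latest"] }
        }
      }
      ```

      ```toml
      # ~/.codex/config.toml (Codex style)
      [mcp_servers.playwright]
      command = "npx"
      args = ["@playwright/mcp@latest"]
      ```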


LLMs were trained on how we use text interfaces. You don't need to adapt the command line for an LLM to use it. You don't really need RAG - just connect the LLM to the shell tools we already use for search. And ultimately it would be much more useful if language servers had good cli commands and LLMs used them instead of going via MCP or some other internal path - ripgrep is already showing how much more usable it is this way.

IMO MCP isn't totally dead, but its role has shrunk. Quoting from my post [1]:

"Instead of a bloated API, an MCP should be a simple, secure gateway... MCP’s job isn’t to abstract reality for the agent; its job is to manage the auth, networking, and security boundaries and then get out of the way."

You still need some standard to hook up data to agents esp when the agents are not running on your local dev machine. I don't think e.g. REST/etc are nearly specific enough to do this without a more constrained standard for requests.

[1] https://blog.sshh.io/p/how-i-use-every-claude-code-feature

There’s too much rage baiting on the internet now; the headlines that take the extreme position get reshared, while the truth is more in the middle.

This is incredibly simple and neat! Love it!

Will have a think about how this can be extended to other types of uses.

I have personally been trying to replace all tools/MCPs with a single “write code” tool which is a bit harder to get to work reliably in large projects.

MCP is how you wrap/distribute/compose things related to tool-use. Tool-use is how you insist on an IO schema that LLMs must conform to. Schemas are how you combat hallucination, and how you can use AI in structured ways for things that it wasn't explicitly trained on. And this is really just scratching the surface of what MCP is for.

You can throw all that away by rejecting MCP completely or by boiling tool-use down to just generating and running unstructured shell commands. But setting aside security issues or why you'd want to embrace more opportunities for hallucination instead of fewer... shelling out for everything means perfect faith in the model's ability to generate correct bash for an infinite space of CLI surfaces. You've lost the ability to ever pivot to smaller/cheaper/local models, and now you're more addicted to external vendors/SOTA models.

Consider the following workflow with a large CLI surface that's a candidate for a dedicated LLM tool, maybe ffmpeg. Convert the man page to a JSON schema. Convert the JSON schema to a tool. Add the tool to an MCP server, alongside similar wizards for imagemagick/blender. The first steps can use SOTA models if necessary, but the later steps can all feasibly work for free, as a stand-alone app that has no cloud footprint and no subscription fee. This still works if ffmpeg/blender/imagemagick were private custom tools instead of well-known tools that are decades old. You can test the tools in offline isolation too. And since things like fastmcp support server composition, you can push and pop that particular stack of wizards in or out of LLM capabilities.
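
A sketch of the last couple of steps under those assumptions, using the `fastmcp` Python package (the ffmpeg wrapper is deliberately tiny, and the composition call's exact signature has shifted across fastmcp versions, so treat it as illustrative):

```python
import subprocess
from fastmcp import FastMCP

ffmpeg_mcp = FastMCP("ffmpeg")

@ffmpeg_mcp.tool()
def transcode(src: str, dst: str, video_codec: str = "libx264") -> str:
    """Transcode a media file with ffmpeg and return its log output."""
    result = subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-c:v", video_codec, dst],
        capture_output=True, text=True, check=True,
    )
    return result.stderr  # ffmpeg writes its progress/log to stderr

# Composition: mount the per-tool server into a combined "media wizards" server
# (mount's signature varies between fastmcp releases; check the version you have)
media = FastMCP("media-wizards")
media.mount(ffmpeg_mcp, prefix="ffmpeg")

if __name__ == "__main__":
    media.run()
```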

Good luck getting real composition with markdown files and tweaking prompts for tone by adding a "Please" preamble. Good luck engineering real systems with vague beliefs about magic, no concrete specifications for any part of any step, constantly changing external dependencies, and perfect faith in vendors.

Yeah, "MCP" felt like BS from jump. Basically it's the problem that will always be a problem, namely "AI stuff is non-deterministic."

If there were some certainty MCP could add to this equation, that would perhaps be theoretically nice, but otherwise it's just... parsing, perhaps not a "solved" problem, but one for which there are already ample solutions.

  • Why are they nondeterministic? You can use a fixed seed or temperature=0.

    • The whole point of "agentic AI" is that you don't have to rigorously test every potential interaction, which means that even a temperature zero model may behave unexpectedly, which is bad for security.

Modern AI agent tools have a setting where you can trim down the number of tools from an MCP server. Useful to avoid overwhelming the LLM with 80 tool descriptions when you only need 1.

  • I don't find that to help much at all, particularly because some tools really only make sense with a bunch of other tools and then your context is already polluted. It's surprisingly hard to do this right, unless you have a single tool MCP (eg: a code/eval based tool, or an inference based tool).

    • Don't you have a post about writing Python instead of using MCP? I can't see how MCP is more efficient than giving the LLM a bunch of function signatures and allowing it to call them, but maybe I'm not familiar enough with MCP.
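
      As I understand that pattern, it looks something like the sketch below (helpers invented): the model gets one code-execution tool plus a module of plain functions whose signatures and docstrings act as the tool descriptions, and composition happens in code instead of through N separate tool calls.

      ```python
      # tools.py: the surface the model sees; docstrings double as tool descriptions
      def list_unresolved_errors(project: str, limit: int = 50) -> list[dict]:
          """Return unresolved error events for a project, newest first."""
          ...

      def resolve_error(error_id: str, comment: str = "") -> None:
          """Mark an error as resolved, optionally leaving a comment."""
          ...
      ```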


  • Remote MCP with an API key that carries claims works well to reduce the tool count to only what you need.

I have a feeling that MCP is going the way GraphQL is going ...

  • As abstruse as GraphQL is, it does have legitimate use cases. I say this as someone who avoided it for a long time for aesthetic reasons. MCP on the other hand is all hype.

MCP is yet another waste of effort trying to recreate what we had with REST over 20 years ago.

Yes, APIs should be self-documenting. Yes, response data should follow defined schemas that are understandable without deep knowledge of the backend. No, you don't need MCP for this.

I wish Google would have realized, or acknowledged, that XML and proper REST APIs solve both of these use cases rather than killing off XSLT support and presumably helping to coerce the other browsers and WhatWG to do the same.

My vote is “don't need MCP” given that

a) I have agents in production for enterprise companies that did what they were supposed to (automate a human process, alter the point of the whole division, lower cost, increase revenue)

b) the whole industry seems to be failing at doing a) to the point they think its all hype

c) the whole industry thinks they need MCP servers and I don’t

You don't need MCP.

You need Claude Skills.

  • Claude Skills are just good documentation wrapped into Anthropic's API in a proprietary way that's designed to foster lock-in.

  • Actually you just need a prompt and some tools

    • Skills are basically just a prompt and (optionally) some tools, only with a preamble that means they are selectively brought into context only as needed.
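
      For comparison, a skill is more or less one markdown file in a folder; only the frontmatter is loaded up front and the body is pulled in when relevant. A sketch following the documented SKILL.md frontmatter, with an invented script path:

      ```markdown
      ---
      name: sentry-triage
      description: Use when the user asks to triage or summarize unresolved Sentry errors.
      ---

      # Sentry triage

      1. Run `scripts/list_errors.sh` to fetch unresolved errors as JSON.
      2. Group them by root cause and propose one fix plan per group.
      ```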

I agree with what Mario says overall, and to be honest, I don't think I really use MCP - at least not for what it's intended (some sort of plugin system for extensible capabilities). I use it as an orchestration layer, and for that it's great.

When MCP itself works it's great. For example, we organize units of work into "detective cases" for framing and the corresponding tool is wanderland__get_detective_case. Spawn a Claude Code session, speak "get up to speed on our current case" and we have instant context loading in a sub-agent session, useful when the Jira ticket requires input from another repository (or two). They're all writing back through the same wanderland__add_detective_case_note call and that routes everything through the central attractor to the active case.

Most of the time, the case we're working on was just a "read DVOPS-XXXXX in Jira and create a case for me". That's wanderland_get_jira_ticket (a thin wrapper on the jira cli) and wanderland__create_detecive_case in turn.

The secret to mcp is that it breaks a lot, or they forget about it because their context is polluted (or you broke it because you're working on it). But it's just a thin wrapper over your API anyways, so just ensure you've got a good /docs endpoint hanging off that and a built in fetch (or typically a fallback to bash with curl -s for some reason) and you're back up and running until you can offload that context. At least you should be if you've designed it properly. Throw in a CLI wrapper for your API as well, they love those :) Three interfaces to the same tool.

The MCP just offers the lowest friction, the context on how to use it injected automatically at a level low enough to pick it up in those natural language emissions and map it to the appropriate calls.

And, if you're building your own stack anyways, you can do naughty things to the protocol like injecting reminders from your agenda with weighted probabilities (gets more nagging the more you're overdue) or injecting user-guides from the computational markdown graph the platform is built on when their tools are first used (we call that the helpful, yet somewhat forceful barista pattern: no choice but to accept the paper and a summary of the morning news with your coffee in the morning). Or restrict the tools available based on previous responses (the more frustrated you get, the more we're likely to suggest you read a book, Claude). Or when your knowledge graph is spatially oriented, you can do fun things like make sure we go east or west once in a while (variations on related items) rather than purely north-south (into and out of specific knowledge verticals) with simple vector math.

MCP isn't strictly necessary for all of this - it could be (and in some cases rightly is) implemented at the API layer - but the MCP layer does give us a simple place to reason about agentic behaviour and keeps it away from the tools themselves. In other words, modeling error rates as frustration and restricting tool use / injecting help guides makes sense in one layer, and injecting reminders into a response from the same system that's processing the underlying tool calls makes sense in another, if the protocol you've designed for such things allows for such two-way context passing. Absent any other layer in the current stack (and no real desire to implement the agentic loop on my own at the moment), the MCP protocol seems perfectly suited for these types of shenanigans - view it as something like Apigee or (...) API Gateway, adding a bit of intelligence and remixability on top of your tools for better UX with your agents.

For Claude Code this approach looks easy. But if you use Cursor you need another approach, as it doesn't have a format for tools.

MCP is convenient and the context pollution issue is easily solved by running them in subagents. The real miss here was not doing that from the start.

Well, stdio security issues when not sandboxed are another huge miss, although that's a bit of a derail.