Comment by paulddraper

3 months ago

MCP is simply a standardized RPC protocol for LLMs.

That's it.

The value is in all the usual features of standardization -- plug-and-play, observability, pass-through modifications, etc.

>observability

Which MCP does the opposite of. It hides information.

  • How so? The protocol doesn't obfuscate things. Your agent can easily expose the entire MCP conversation, but generally just exposes the call and response. This is no different than any other method of providing a tool for the LLM to call.

    You have some weird bone to pick with MCP which is making you irrationally unreceptive to any good-faith attempt to help you understand.

    If you want to expose tools to the LLM you have to provide a tool definition to the LLM for each tool and you have to map the LLM's tool calls into the agent executing the tools and returning the results. That's universal for all agent-side tools.
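To make that "universal plumbing" concrete, here is a minimal sketch of what every agent does regardless of protocol: publish a tool definition, then dispatch the LLM's tool call to a handler. The tool name, schema shape, and handler are all illustrative inventions, not from any particular SDK.

```python
import json

# A tool definition the agent would hand to the LLM (illustrative schema).
TOOL_DEFS = [
    {
        "name": "get_weather",
        "description": "Return current weather for a city.",
        "inputSchema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

def get_weather(city: str) -> str:
    # Stand-in implementation; a real tool would call an actual API.
    return f"Sunny in {city}"

# Map tool names to the functions the agent actually executes.
HANDLERS = {"get_weather": get_weather}

def execute_tool_call(call: dict) -> str:
    """Dispatch an LLM tool call {"name": ..., "arguments": ...} to a handler."""
    handler = HANDLERS[call["name"]]
    return handler(**call["arguments"])

result = execute_tool_call({"name": "get_weather", "arguments": {"city": "Oslo"}})
print(result)  # Sunny in Oslo
```

Whether the transport is MCP, OpenAPI, or a bespoke framework, this definition-plus-dispatch loop is the part that never goes away.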

    The whole purpose behind MCP was to provide a low-impedance standard where some set of tools could be plugged into an existing agent with no pre-knowledge and all the needed metadata was provided to facilitate linking the tools to the agent. The initial version was clearly focused on local agents running local tools over stdio. The idea of remote tools was clearly an afterthought if you read the specification.
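The stdio focus shows up in the wire format itself: newline-delimited JSON-RPC 2.0 messages between the agent and a local process. The sketch below assumes the `tools/list` method name from the MCP spec; the one-tool response payload is made up for illustration.

```python
import json

# What the agent writes to the server's stdin: one JSON-RPC request per line.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
request_line = json.dumps(request) + "\n"

# What a hypothetical one-tool server might write back on stdout.
response_line = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {"name": "run_workflow",
             "description": "Run the local build-and-test workflow."}
        ]
    },
}) + "\n"

response = json.loads(response_line)
print([t["name"] for t in response["result"]["tools"]])  # ['run_workflow']
```

Nothing about this framing presumes a network; a subprocess piping stdin/stdout is the natural fit, which is why remote transports read like a later addition.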

    If you want your agent to speak OpenAPI, you are *more* than welcome to make it do so. It'll probably be fine if it's a well-specified API. The context issues won't go away, I guarantee you. OpenAPI specs for APIs with lots of endpoints will result in large tool definitions for the LLM, just like they do with MCP.

    A core issue I see with MCP, as someone using it every day, is that most MCP Server developers are clearly missing the point and simply using MCP as a thin translation layer over some existing APIs. The biggest value with MCP comes when you realize that an MCP Server should be a *curated* experience for the LLM to interact with, and the output should be purposefully designed for the LLM, not just a raw data dump from an API endpoint.

    Sure, some calls are more like raw data dumps and should have minimal curation, but many other MCP tools should be more like what the OP of this post is doing. The OP is defining a local multi-step workflow where steps feed into other steps and *don't* need LLM mediation. That should be a *single* MCP Server Tool. They could wrap the local bash scripts up into a simple single-tool stdio MCP Server, and now that tool is easily portable across any agent that speaks MCP, even if the agent doesn't have the ability to directly run local CLI commands.
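The single-tool idea can be sketched in a few lines: one function runs the whole local workflow, the intermediate outputs feed each other without the LLM in the loop, and the return value is a curated summary rather than three raw dumps. The step commands here are placeholder `echo`s standing in for the OP's actual bash scripts.

```python
import subprocess

def run_workflow(target: str) -> str:
    """One MCP-style tool: run every step locally, return a single summary."""
    # Placeholder steps; a real server would invoke the OP's actual scripts.
    steps = [
        ["echo", f"lint {target}"],
        ["echo", f"build {target}"],
        ["echo", f"test {target}"],
    ]
    outputs = []
    for cmd in steps:
        proc = subprocess.run(cmd, capture_output=True, text=True, check=True)
        outputs.append(proc.stdout.strip())
    # Curated, LLM-friendly result instead of three raw data dumps.
    return "workflow ok:\n" + "\n".join(outputs)

print(run_workflow("app"))
```

Expose that one function as the server's only tool and the entire workflow becomes portable to any MCP-speaking agent, with exactly one tool definition added to the context.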

    Anyway, maybe take a breath and be objective about what MCP is and is not meant to do and disconnect what MCP is from how people are *currently* using (and frequently misusing) MCP.

    • Probably a good read for you to start with: https://raz.sh/blog/2025-05-02_a_critical_look_at_mcp

      There are tons of articles detailing the problems if you are genuinely interested.

      Notice that you couldn't point to anything technical to support your statements, but instead had to resort to religious zealotry and apologetics -- which has no place on this forum.

      >be objective about what MCP is and is not meant to do and disconnect what MCP is from how people are currently using (and frequently misusing) MCP.

      Please re-read what you wrote.

      You wrote all of that just to counter your own stated position, because I think at some fundamental level you realize how nonsensical it is.
