Comment by hobofan
2 months ago
Most of the value lies in its differentiation from OpenAPI and in the conventions it brings.
By providing an MCP endpoint you signal "we made the API self-describing enough to be usable by AI agents". Most existing OpenAPI specs don't clear that bar: endpoint and parameter descriptions are so thin that the spec is unusable without supplementary documentation that lives outside it.
That sounds like one could publish MCPv2, which is simply OpenAPI with a single flag in the header set to true.
MCP is a lot of duplicate engineering effort for seemingly no gain.
Okay... you are tasked with integrating every REST API Amazon exposes into a VS Code chatbot. If it's so easy to do with a REST API, how long will it take you to configure that?
That's again apples-to-oranges. The AWS API was not made to be LLM-friendly.
The apples-to-apples comparison would be this:
A:
- Assume that AWS exposes an LLM-oriented OpenAPI spec.
- Take a preexisting OpenAPI client with support for reflection.
- Write the plumbing to go between agent tool calls and OpenAPI calls. Schema from OpenAPI becomes schema for tool calls.
- You use a preexisting OpenAPI client library, AWS can use a preexisting OpenAPI server library.
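The plumbing in option A is thin. A minimal sketch, assuming a hypothetical LLM-oriented spec fragment (the `list_buckets` operation and field names are illustrative, not AWS's actual API):

```python
# Option A sketch: derive an agent tool-call schema from one OpenAPI
# operation. Any real OpenAPI client library with reflection could feed this.

def openapi_op_to_tool(path, method, op):
    """Turn a single OpenAPI operation into a tool definition for an agent."""
    params = {"type": "object", "properties": {}, "required": []}
    for p in op.get("parameters", []):
        params["properties"][p["name"]] = {
            "type": p["schema"]["type"],
            "description": p.get("description", ""),  # LLM-friendliness lives here
        }
        if p.get("required"):
            params["required"].append(p["name"])
    return {
        "name": op.get("operationId", f"{method}_{path}"),
        "description": op.get("description", ""),
        "parameters": params,
    }

# Hypothetical spec fragment for illustration:
op = {
    "operationId": "list_buckets",
    "description": "List all S3 buckets in the account.",
    "parameters": [
        {"name": "region", "in": "query", "required": True,
         "description": "AWS region to query.",
         "schema": {"type": "string"}},
    ],
}
tool = openapi_op_to_tool("/buckets", "get", op)
```

The point is that the schema an agent needs already exists in the spec; the conversion is a small, mechanical mapping.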
B:
- Assume that AWS exposes an MCP server.
- Program an MCP client.
- Write the plumbing to go between agent tool calls and MCP calls. Schema from MCP becomes schema for tool calls.
- You had to program an MCP client, AWS had to program an MCP server. Whereas OpenAPI existed before the concept of agent tool calls, MCP did not.
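For comparison, the option B plumbing ends up almost identical, because MCP is JSON-RPC 2.0 underneath and each MCP tool already carries a JSON Schema. A simplified sketch of the message shapes (transport and error handling omitted; the `list_buckets` tool is hypothetical):

```python
import json

# Option B sketch: the MCP client side is mostly JSON-RPC message framing.
def mcp_request(req_id, method, params=None):
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

def mcp_tool_to_tool(mcp_tool):
    """An MCP tool definition already ships a JSON Schema, so the mapping is 1:1."""
    return {
        "name": mcp_tool["name"],
        "description": mcp_tool.get("description", ""),
        "parameters": mcp_tool["inputSchema"],
    }

list_req = mcp_request(1, "tools/list")
call_req = mcp_request(2, "tools/call",
                       {"name": "list_buckets",
                        "arguments": {"region": "us-east-1"}})
```

Side by side with the OpenAPI mapping, the per-tool translation is the same shape; the difference is that option B required building the client and server protocol layers from scratch.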
That's why I said MCP is a lot of duplicate engineering effort for seemingly no gain. Preexisting API mechanisms can be used to provide LLM-oriented APIs; that is orthogonal to MCP-as-a-protocol. MCP is quite ugly as a protocol and has very little reason to exist.