
Comment by dymk

12 hours ago

MCP defines the API so that vendors of LLM tools like Cursor, Claude Code, Codex, etc. don't all invent their own bespoke ways to call tools.

The main issue is the disagreement over how to declare that an MCP tool exists. Cursor, VS Code, and Claude all use basically the same mcp.json file, but Codex uses `config.toml`. There's also very little uniformity in project-specific MCP tools; they tend to be defined globally.
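
For illustration, here's the same hypothetical server declared both ways (the server name and command are made up, and the exact keys can vary by tool and version):

```json
{
  "mcpServers": {
    "my-server": {
      "command": "npx",
      "args": ["-y", "my-mcp-server"]
    }
  }
}
```

versus something like this in Codex's `config.toml`:

```toml
[mcp_servers.my-server]
command = "npx"
args = ["-y", "my-mcp-server"]
```

Same information, just two formats to maintain.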

Maybe this is a dumb question, but isn't this solved by publishing good API docs, and then pointing the LLM to those docs as a training resource?

  • >but isn't this solved by publishing good API docs, and then pointing the LLM to those docs as a training resource?

    Yes.

    It's not a dumb question. The situation is so dumb you feel like an idiot for asking the obvious question. But it's the right question to ask.

    Also, you don't need to "train" the LLM on those resources. All major models have function/tool calling built in. Either create your own readme.txt with extra context or, if possible, update the APIs with more "descriptive" metadata (i.e., something like Swagger/OpenAPI) to help the LLM understand how to use the API.
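
    For example, most tool-calling APIs accept a JSON-schema-style tool definition roughly along these lines (the `get_weather` tool and its fields are invented purely for illustration, and the exact field names vary by vendor):

    ```json
    {
      "name": "get_weather",
      "description": "Get the current weather for a city. Use when the user asks about weather.",
      "parameters": {
        "type": "object",
        "properties": {
          "city": { "type": "string", "description": "City name, e.g. \"Berlin\"" },
          "units": { "type": "string", "enum": ["metric", "imperial"] }
        },
        "required": ["city"]
      }
    }
    ```

    The more descriptive those `description` fields are, the less the model has to guess, which is the same reason good Swagger/OpenAPI docs work.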

  • It is. Anthropic builds stuff like MCP and Skills to try to lock people into their ecosystem. I'm sure they were surprised when MCP totally took off (I know I was).