Comment by nestorD
8 hours ago
So far I have seen two genuinely good arguments for the use of MCPs:
* They can encapsulate (API) credentials, keeping those out of reach of the model,
* Unlike APIs, they can change their interface whenever they want, with few consequences.
> * Unlike APIs, they can change their interface whenever they want, with few consequences.
I've made this argument before, but that's not entirely right. I understand that this is how everybody is doing it right now, but that in itself causes issues for more advanced harnesses. I have one that exposes MCP tools as function calls in code, and it encourages the agent to materialize composed MCP calls into scripts on the file system.
If the MCP server decides to change its tools, those scripts break. A similar issue affects what Vercel is advocating for [1].
[1]: https://vercel.com/blog/generate-static-ai-sdk-tools-from-mc...
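To make the breakage concrete, here is a minimal sketch of such a materialized script. The tool names (`search_issues`, `summarize`) and the `call_tool` stub are invented for illustration, not a real MCP SDK:

```python
# Hypothetical sketch: an agent-materialized script composing two MCP tool
# calls. call_tool() stands in for a real MCP client and returns canned data.

def call_tool(name, **kwargs):
    fake_results = {
        "search_issues": [{"id": 1, "title": "bug in parser"}],
        "summarize": "1 open issue: bug in parser",
    }
    return fake_results[name]

def weekly_issue_report(repo):
    # If the server renames "search_issues" or changes its argument schema,
    # this saved script silently breaks -- nothing pins the tool contract.
    issues = call_tool("search_issues", repo=repo, state="open")
    return call_tool("summarize", items=issues)

print(weekly_issue_report("example/repo"))
```

The fragility is exactly the hard-coded tool names and argument shapes: once the script lives on disk, the model is no longer in the loop to adapt to an interface change.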
But… you have to give the MCP server the creds somehow. Maybe it's via a file on disk (bad), maybe via an env var (less bad). Maybe you do it via your password CLI that you biometrically auth to, which involves a timeout of some sort for security, but that often means you can't leave an agent unattended.
In any case, how is any of this better than a CLI? CLIs have the same access models and tradeoffs, and a persistent agent will plumb the depths of your file system and environment to find a token to do a thing if your prompt was “do a thing, use tool/mcp/cli”.
So where is this encapsulation benefit?
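For what it's worth, here is the encapsulation claim in its strongest form, as a hypothetical sketch (the env var name and `fetch_user` tool are invented): the credential lives only in the server process's environment, and the tool result never contains it, so it never enters the model's context. As the comment above notes, a well-behaved CLI can offer the same property.

```python
import os

# Hypothetical MCP-style tool handler. The token is read inside the server
# process and used for upstream calls, but never returned to the caller.
def fetch_user(user_id):
    token = os.environ.get("EXAMPLE_API_TOKEN")  # injected at server launch
    if not token:
        return {"error": "server misconfigured: no credential"}
    # A real server would call the upstream API with `token` here.
    return {"user_id": user_id, "name": "stub user"}  # token not in response

os.environ["EXAMPLE_API_TOKEN"] = "s3cret"  # simulate launch-time injection
result = fetch_user(42)
assert "s3cret" not in str(result)  # nothing leaks into model-visible output
print(result)
```

Of course, if the agent can read the server's environment or config files from the same machine, this boundary is cosmetic, which is the thrust of the objection above.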
MCP is easy to self-host. The model? A little less so.
What's the alternative design where the model has access to API credentials?
> What's the alternative design where the model has access to API credentials?
All sorts of ways this can happen, but it usually boils down to leaving them on disk or in an environment variable in the repo/dir(s) where the agent is operating.
What about things like rate limiting? How are those implemented? Any good reads?
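One common server-side answer is a token bucket wrapped around each tool handler; a minimal sketch (parameter values arbitrary, not from any particular MCP implementation):

```python
import time

# Token-bucket rate limiter: `capacity` burst allowance, refilled at
# `rate_per_sec` tokens per second. Each allowed call spends one token.
class TokenBucket:
    def __init__(self, rate_per_sec, capacity):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=5, capacity=2)
results = [bucket.allow() for _ in range(4)]
print(results)  # first two calls pass, then the bucket is drained
```

A server would check `bucket.allow()` at the top of each tool handler and return a "slow down" error to the agent when it fails; per-client buckets keyed by session ID are the usual refinement.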