Comment by daemonologist
7 days ago
Obvious example is a corporate chatbot (if it's using tools, probably for internal use). Non-technical users might be accessing it from a phone or locked-down corporate device, and you probably don't want to run a CLI in a sandbox somewhere for every session, so you'd like the LLM to interface with some kind of API instead.
Although, I think MCP is not really appropriate for this either. (And frankly I don't think chatbots make for good UX, but management sure likes them.)
Why are they not calling APIs directly with strictly defined inputs and outputs like every other internal application?
The story for MCP just makes no sense, especially in an enterprise.
MCP is an API with strictly defined inputs and outputs.
This is obviously not what it is. If I gave you an API gateway (APIGW), would you be able to implement an MCP server with full functionality without a large amount of middleware?
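For what "strictly defined inputs and outputs" means in MCP terms: a server advertises each tool with a JSON Schema describing its arguments, which a client can validate against before calling. A minimal sketch, assuming a hypothetical `get_invoice` tool (the tool name and fields are illustrative, not from any real server):

```python
# Hedged sketch: an MCP-style tool descriptor with a JSON Schema
# input definition, plus a toy required-field check. The tool name
# and schema are hypothetical.
tool = {
    "name": "get_invoice",
    "description": "Fetch an invoice by id",
    "inputSchema": {
        "type": "object",
        "properties": {"invoice_id": {"type": "string"}},
        "required": ["invoice_id"],
    },
}

def missing_required(args: dict, schema: dict) -> list[str]:
    # Return any required fields absent from the call arguments.
    return [f for f in schema.get("required", []) if f not in args]

print(missing_required({"invoice_id": "inv-42"}, tool["inputSchema"]))  # []
print(missing_required({}, tool["inputSchema"]))  # ['invoice_id']
```

The schema part is standard MCP; everything an API gateway would add on top (auth, routing, rate limiting) is the middleware being argued about here.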
Does MCP support authentication and SSO?
MCP really only makes sense for chatbots that don't want per-session runtime environments. In that context, MCP makes perfect sense: it's just an adapter between an LLM and an API. If you have access to an execution engine, then yes, CLI + skills is superior.
"Only" is doing a lot of work here. There are tons of use cases aside from local coding assistants, e.g., non-code-related, domain-specific agentic systems; these don't even necessarily have to be chatbots.
Actually, local MCP just spawns a subprocess and talks via stdin/stdout, the same as a CLI tool. The extra layer is only for the remote case.
This might help if you're interested: https://vectree.io/c/implementation-details-of-stdio-and-sse...
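The stdio transport pattern described above can be sketched in a few lines: the host spawns the server as a subprocess and exchanges newline-delimited JSON-RPC messages over its stdin/stdout. The child process here is a stand-in echo "server", not a real MCP implementation, so the method name and message shape are only illustrative:

```python
# Sketch of the local (stdio) transport: spawn a subprocess,
# write a JSON-RPC request to its stdin, read the response from
# its stdout. The child below just echoes the method name back.
import json
import subprocess
import sys

CHILD = (
    "import sys, json\n"
    "for line in sys.stdin:\n"
    "    req = json.loads(line)\n"
    "    resp = {'jsonrpc': '2.0', 'id': req['id'],\n"
    "            'result': {'echoed': req['method']}}\n"
    "    sys.stdout.write(json.dumps(resp) + '\\n')\n"
    "    sys.stdout.flush()\n"
)

proc = subprocess.Popen(
    [sys.executable, "-c", CHILD],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.flush()
response = json.loads(proc.stdout.readline())
print(response["result"]["echoed"])  # tools/list

proc.stdin.close()
proc.wait()
```

From the host's perspective this is the same plumbing as driving any CLI tool through pipes; the "extra layer" (HTTP/SSE, auth, sessions) only shows up in the remote transport.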
> and you probably don't want to run a CLI in a sandbox somewhere for every session
You absolutely DO want to run everything related to LLMs in a sandbox; that's basic hygiene.
You're missing their point: they're saying you'd need a sandbox -> it'd be a pain -> you don't want to run a CLI _at all_.