← Back to context

Comment by yuvrajangads

4 hours ago

I've been using MCP with Claude Code for a while now (Google Maps, Swiggy, Figma servers) and the local tool-use model works well because I control both sides. I pick which servers to trust, I see every tool call, and I can deny anything sketchy.

WebMCP flips that. The website exposes the tools and the browser decides what to call. The security model gets a lot harder when you're trusting random sites to define their own tool interfaces honestly. A malicious site could expose tools that look helpful but exfiltrate context from the agent's session.

Curious how they plan to sandbox this. The local MCP model works because trust is explicit. Not sure how that translates to the open web.

The threat model doesn't really change for agents that already have "web fetch" (or equivalent) enabled. The agent is free to communicate with untrusted websites[1]. As before, the firewall remains at what private information the agent is allowed to have.

[1] If anything the threat gets somewhat reduced by the ability to point directly at a trusted domain and say "use this site and it's (presumably) trusted tools."

  • Fair point about web fetch already being a trust boundary. The difference I see is that web fetch returns data, but WebMCP tools can define actions. A tool called "add_to_cart" is a lot more dangerous than fetching a product page. The agent trusts the tool's name and description to decide whether to call it, and that metadata comes from the site.

    But yeah, if you're already letting agents browse freely, the incremental risk might be smaller than I'm imagining.