Comment by bustodisgusto
2 days ago
Again, that's up to the website owner. They can give the model anywhere from no access to full access to the client-side API.
> The agent should be treated as an untrusted user in your client, given restricted privileges scoped to only the exact access they need to perform a given task
I agree, this is exactly what MCP-B does
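For illustration, the scoping on the site side could look something like this (a rough sketch using the MCP TypeScript SDK; the transport import, its options, and the tool itself are assumptions, not the exact MCP-B API):

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
// Assumed in-page transport that bridges this server to the extension agent;
// the real MCP-B transport and its options may differ.
import { TabServerTransport } from "@mcp-b/transports";

const server = new McpServer({ name: "shop-frontend", version: "1.0.0" });

// Expose only the exact capability the agent needs for this task:
// a read-only cart summary, not the full client-side API.
server.tool("getCartSummary", "Read-only summary of the current cart", {}, async () => {
  const cart = await fetch("/api/cart", { credentials: "include" }).then((r) => r.json());
  return {
    content: [
      { type: "text", text: JSON.stringify({ items: cart.items.length, total: cart.total }) },
    ],
  };
});

// Deliberately not exposed: checkout, address book, payment methods, account deletion.
await server.connect(new TabServerTransport());
```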
The data you give it can be shared with any other website, at the agent's discretion. Some of it might be safe to share with the user, but not with third parties; at a minimum this should request permission when trying to share data between different websites/servers.
> at a minimum this should request permission when trying to share data between different websites/servers.
I don't see how you could possibly implement such a thing reliably. Do you scan all the parameters to other tool calls from different servers looking for something in a previous response? Even if you do that, the LLM could derive something private from a previous response that couldn't easily be detected. I suppose you could have an agent that tracks data flow in some way, but that's beyond the scope of MCP.
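The naive version would be something like this (a sketch with made-up names), and it already misses anything the model derives rather than copies:

```typescript
// Naive cross-server data-flow check (hypothetical, not an MCP feature): before
// forwarding a tool call to server B, scan its arguments for verbatim substrings
// of results previously returned by server A.
function leaksPriorResult(args: unknown, priorResults: string[]): boolean {
  const serialized = JSON.stringify(args);
  return priorResults.some(
    (result) => result.length > 8 && serialized.includes(result)
  );
}

// This only catches literal copies. If the model paraphrases, summarizes, or
// derives a value ("the account ends in 4421", a total computed from line items),
// nothing here fires.
```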
I don't think it is beyond the scope of MCP. Browsers have controls to prevent cross-origin data exposure, and this protocol is designed to bridge origins across a context that they all have access to. It's breaking the existing isolation mechanism. If you're building a system that breaks the existing security controls of the environment it's running in, I think you have an architectural responsibility to figure out a way to solve for that.
Especially in this context, where decades have been spent building and improving same-origin policy controls. The entire web has been built around the expectation that those controls prevent cross-origin data access.
I also don't think it's that difficult to solve. For one, data in the context window doesn't have to be a string; it can be an array of objects that carry the origin they were pulled from as metadata. Then you can provide selective content to different MCP-B interfaces depending on their origins. That would live in the protocol layer and would help significantly.
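Rough sketch of what I mean (hypothetical types; nothing like this exists in the spec today):

```typescript
// Origin-tagged context entries instead of bare strings.
interface ContextEntry {
  origin: string;    // e.g. "https://bank.example" — where the data was pulled from
  toolName: string;  // which tool produced it
  content: string;
}

const contextWindow: ContextEntry[] = [];

// When a tool result comes back, record its origin alongside the content.
function recordToolResult(origin: string, toolName: string, content: string): void {
  contextWindow.push({ origin, toolName, content });
}

// Before handing context (or tool-call arguments derived from it) to a server on
// another origin, filter out anything that didn't come from that origin.
function contextVisibleTo(targetOrigin: string): ContextEntry[] {
  return contextWindow.filter((entry) => entry.origin === targetOrigin);
}

// Cross-origin use would then require an explicit approval step rather than
// happening silently at the agent's discretion.
```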
ah this is a great point, I will add it to the road map
I'm not following.
Say I have your browser extension running, and it's interfacing with an MCP-B enabled banking application using my session to access my data in that app.
I also have it connected to MCP-B enabled rogue web app that I mistakenly trust.
My browser has an entire architecture built around preventing data from crossing between those two origins, but what's stopping a malicious instruction from the rogue app asking the extension agent to include some data that it pulled into the context window from the banking app?
Further, when I use MCP in my IDE I have to deliberately provide that MCP server with a token or credentials to access a protected resource. With MCP-B, isn't it just automatically provided with whatever credentials are already stored in cookies/etc for a given MCP-B enabled app? If I load an MCP-B enabled app, does the agent automatically have access or do I have to configure it somewhere?
> If a website wants to expose a "delete all user data" tool, that's on them. It's no different than putting a big red delete button on the page.
It is different though, because the directive to push that button can come from somewhere other than the user, unless you've somehow solved prompt injection.
The point I'm driving toward is that I think you're violating the most common assumption of the web's long-standing security model: that data is protected from leaking cross-origin by the browser. There's no SOP or CORS for your agent extension, even though those are protections web apps have been built to expect. You're basically building an SOP bypass extension.
Ah I see. Yes, this is a concern, but it's actually not unique to MCP-B; it's a general issue with agentic workflows that rely on a dynamic toolset from third-party vendors (which any MCP server, local or remote, can be).
> With MCP-B, isn't it just automatically provided with whatever credentials are already stored in cookies/etc for a given MCP-B enabled app?
Not exactly. MCP-B just allows your extension agent to call functions that the website owner explicitly exposes. The client itself is not given any credentials, unlike traditional MCP.
> If I load an MCP-B enabled app, does the agent automatically have access or do I have to configure it somewhere?
There's more in the blog post, but how much access the agent has, and how much human approval is needed to grant that access, is completely up to the website creator.
FWIW your points are valid, and MCP-B should enforce some guardrails, via elicitation, whenever a domain shift happens: https://modelcontextprotocol.io/specification/draft/client/e...
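Something like this on the extension side, as a sketch (hypothetical names; the actual user prompt would go through the elicitation flow linked above):

```typescript
// Hypothetical domain-shift guardrail in the extension agent. Each argument value
// carries the origin it was originally pulled from (the tagging idea above).
interface TaggedArg {
  value: string;
  sourceOrigin: string;
}

async function guardedToolCall(
  targetOrigin: string,
  args: TaggedArg[],
  askUser: (message: string) => Promise<boolean>,
  callTool: () => Promise<unknown>
): Promise<unknown> {
  const foreign = args.filter((a) => a.sourceOrigin !== targetOrigin);
  if (foreign.length > 0) {
    const sources = [...new Set(foreign.map((a) => a.sourceOrigin))].join(", ");
    const approved = await askUser(
      `This call to ${targetOrigin} includes data pulled from ${sources}. Allow it?`
    );
    if (!approved) {
      throw new Error("Cross-origin data transfer rejected by the user");
    }
  }
  return callTool();
}
```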
I'll add it to the road map. Thanks for bringing it up!
I do think the threat model here is a bit unique though.
If I'm running two MCP servers on my machine, I'm the one that installed them, I'm the one that assigned what permissions they have in my environment, and I'm the one that explicitly decided what level of access to give them within whatever resource they're accessing. That gives me reasonably strong control over, or at least full knowledge of, what data can be shared between them.
With MCP, I can use OAuth to make very deliberate decisions about the scope of access I want to give the agent.
With MCP-B, it's the web application owner that installed the interface and decided what access it has to my data, and the agent running in my client gets access to whatever that third party deemed appropriate.
With MCP-B, the agent has the same access I do by default, with the only restrictions being up to the app owner rather than up to me.
MCP auth is not perfect by any stretch, but the key thing it gives the user is the capacity to restrict what the agent has access to with some granularity. That's super important because the agent can't be trusted when it's consuming inputs the user didn't explicitly define. MCP-B doesn't have this: if you have the agent in your browser, it has access to whatever resources you have, so long as they were exposed by a tool call, and that isn't something the user has any say in.
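To make the contrast concrete (illustrative only; the config shape and endpoint are made up):

```typescript
// Traditional MCP: I hand the server credentials I scoped myself
// (illustrative config shape, not a real file format).
const ideServerConfig = {
  command: "bank-mcp-server",
  env: { BANK_API_TOKEN: "token-I-issued-with-read-only-statements-scope" },
};

// MCP-B: the tool handler runs inside a page I'm already logged in to, so it
// rides my existing session. The "scope" is whatever the site owner chose to expose.
async function transferFunds(to: string, amount: number): Promise<Response> {
  return fetch("/api/transfer", {
    method: "POST",
    credentials: "include", // my cookies, my full session authority
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ to, amount }),
  });
}
```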