Comment by sebastiennight

6 hours ago

I'm trying to wrap my head around this:

1. How are you defending against the case of one MCP poisoning your firewall LLM into incorrectly classifying other MCP tools?

2. How would you make sure the LLM shows the warning, given that LLMs are non-deterministic?

3. How clear do you expect MCP specs to be in order for your classification step to be trustworthy? To the best of my knowledge there is no spec that outlines how to "label" a tool along the three axes, so you've got another non-deterministic step here. Is "writing to disk" an external communication? It is if that directory is exposed to the web. How would you know?

Really good questions! Let's look at them one by one:

1. We assume the user has done their due diligence verifying the authenticity of the MCP server, just as they must when adding an MCP server to Claude Code or VS Code. The gateway protects against an attacker exploiting already-installed standard MCP servers, not against malicious servers.

2. That's a very good question. While the display is indeed non-deterministic, we have not seen a single case where the message was not shown. Sometimes the message gets mangled, but most current LLMs seem to take MCP output quite seriously, since it is their source of truth about the real world. And even if the message were somehow not shown, the offending tool call would still be blocked, so the worst case is that the user is simply confused.
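For concreteness, here is a minimal sketch (not the gateway's actual code) of this "block first, warn second" behavior. The key point is that the warning travels back to the model as the tool result, while the decision to block is made in the gateway itself, so even if the LLM drops or mangles the message, the offending call never executes. All names here (`TOOL_FLAGS`, `handle_tool_call`, the flag labels, and the trifecta-completion rule) are illustrative assumptions, not the product's API:

```python
from dataclasses import dataclass

# Illustrative trifecta flags; in the real gateway these would come from
# the load-time classification step described in answer 3 below.
TRIFECTA = {"reads_private_data", "writes_on_behalf_of_user", "reads_public_data"}

TOOL_FLAGS = {
    "read_inbox": {"reads_private_data"},
    "send_email": {"writes_on_behalf_of_user"},
    "fetch_url":  {"reads_public_data"},
}

@dataclass
class ToolCall:
    tool: str
    arguments: dict

def forward_to_real_server(call: ToolCall) -> str:
    # Stand-in for proxying the call to the upstream MCP server.
    return f"(result of {call.tool})"

def handle_tool_call(session_flags: set, call: ToolCall) -> str:
    """Gateway-side interception: decide before the tool ever runs."""
    combined = session_flags | TOOL_FLAGS.get(call.tool, set())
    if combined >= TRIFECTA:
        # The warning is returned *as the tool output*. Whether or not the
        # model relays it faithfully, the real tool was never invoked.
        return ("BLOCKED by gateway: this call would combine private data, "
                "publicly modifiable input, and writing on the user's "
                "behalf. Tell the user the call was blocked.")
    session_flags |= TOOL_FLAGS.get(call.tool, set())
    return forward_to_real_server(call)

# Example: after reading the inbox and fetching a web page, an outbound
# write is refused because it would complete the trifecta.
flags: set = set()
handle_tool_call(flags, ToolCall("read_inbox", {}))
handle_tool_call(flags, ToolCall("fetch_url", {"url": "https://example.com"}))
print(handle_tool_call(flags, ToolCall("send_email", {"to": "a@b.c"})))
```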

3. Currently we follow the trifecta very literally: every tool is classified into a subset of {reads private data, writes on behalf of the user, reads publicly modifiable data}. An LLM classifies each tool at MCP server load time, and we cache the results keyed on whatever data the MCP server sends us. If there are any issues with the classification, you can go into the gateway dashboard and modify it however you like. We are planning improvements to the classification down the line, but we think it is solid enough for now, and we would like to get it into users' hands for UX feedback before adding extra functionality.
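A sketch of what that load-time classify-and-cache step could look like, under stated assumptions: the keyword heuristic standing in for the LLM call, the flag names, and SHA-256 of the advertised tool spec as the cache key are all mine, not the gateway's:

```python
import hashlib
import json

AXES = ("reads_private_data", "writes_on_behalf_of_user", "reads_public_data")

_cache: dict[str, set] = {}           # keyed on the server-advertised spec
_user_overrides: dict[str, set] = {}  # edits made via the gateway dashboard

def classify_with_llm(tool_spec: dict) -> set:
    # Stand-in for the real LLM call: the gateway would prompt a model with
    # the tool's name, description, and schema and ask which of AXES apply.
    text = json.dumps(tool_spec).lower()
    flags = set()
    if any(w in text for w in ("inbox", "secret", "credential", "private")):
        flags.add("reads_private_data")
    if any(w in text for w in ("send", "post", "upload", "delete")):
        flags.add("writes_on_behalf_of_user")
    if any(w in text for w in ("url", "web", "fetch", "search")):
        flags.add("reads_public_data")
    return flags

def spec_key(tool_spec: dict) -> str:
    # Cache on whatever data the server sends: if the description changes,
    # the key changes and the tool gets re-classified.
    return hashlib.sha256(json.dumps(tool_spec, sort_keys=True).encode()).hexdigest()

def tool_flags(tool_spec: dict) -> set:
    key = spec_key(tool_spec)
    if key in _user_overrides:        # dashboard edits always win
        return _user_overrides[key]
    if key not in _cache:             # classify once per spec, at load time
        _cache[key] = classify_with_llm(tool_spec)
    return _cache[key]

# Example: a tool that mentions fetching web pages gets labeled as
# reading publicly modifiable data.
print(tool_flags({"name": "fetch_url", "description": "Fetch a web page"}))
```

One nice side effect of keying the cache on the advertised spec, if it works as sketched, is that a server cannot silently swap a tool's description after installation without triggering re-classification.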