Using proxies to hide secrets from Claude Code

1 month ago (joinformal.com)

I'm working on something similar called agent-creds [0]. I'm using Envoy as the transparent (MITM) proxy and macaroons for credentials.

The idea is that you can arbitrarily scope down credentials with macaroons, both in terms of scope (only certain endpoints) and time. This really limits the damage that an agent can do, but also means that if your credentials are leaked they are already expired within a few minutes. With macaroons you can design the authz scheme that *you* want for any arbitrary API.

I'm also working on a FUSE filesystem to mount inside the container that mints the tokens client-side with short expiry times.

https://github.com/dtkav/agent-creds

  • > With macaroons you can design the authz scheme that you want for any arbitrary API.

    How would you build such an authz scheme? When Claude asks permission to access a new endpoint, and the user allows it, do you then reissue the macaroons?

    • There are two parts here:

      1. You can issue your own tokens which means you can design your own authz in front of the upstream API token.

      2. Macaroons can be attenuated locally.

      So at the time that you decide you want to proxy an upstream API, you can add restrictions like endpoint path to your scheme.

      Then, once you have that authz scheme in place, the developer (or agent) can attenuate permissions within that authz scheme for a particular issued macaroon.

      I could grant my dev machine the ability to access e.g. /api/customers and /api/products. If I want to have Claude write a script to add some metadata to my products, I might attenuate my token to /api/products only and put that in the env file for the script.

      Now claude can do development on the endpoint, the token is useless if leaked, and Claude can't read my customer info.

      Stripe actually does offer granular authz and short lived tokens, but the friction of minting them means that people don't scope tokens down as much.
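      The caveat-chaining that makes this local attenuation possible is just HMAC. A minimal stdlib-only sketch of the idea (illustrative names, not the agent-creds implementation):

```python
import hmac
import hashlib

def _chain(sig: bytes, msg: str) -> bytes:
    return hmac.new(sig, msg.encode(), hashlib.sha256).digest()

def mint(key: bytes, identifier: str):
    # Root macaroon: signature = HMAC(key, identifier), no caveats yet.
    # Only the proxy/issuer ever sees `key`.
    return [], _chain(key, identifier)

def attenuate(caveats, sig, predicate: str):
    # Anyone holding a macaroon can add a caveat -- no key required --
    # because the new signature is HMAC'd from the previous one.
    return caveats + [predicate], _chain(sig, predicate)

def verify(key: bytes, identifier: str, caveats, sig, request_path: str) -> bool:
    expected = _chain(key, identifier)
    for c in caveats:
        # Each caveat must hold for this request...
        if c.startswith("path = ") and not request_path.startswith(c[len("path = "):]):
            return False
        expected = _chain(expected, c)
    # ...and the final signature proves the caveat chain wasn't stripped.
    return hmac.compare_digest(expected, sig)

# Dev machine attenuates its token to /api/products before handing it to the agent.
caveats, sig = mint(b"proxy-root-key", "dev-machine")
caveats, sig = attenuate(caveats, sig, "path = /api/products")
```

      With this, a request to /api/products/42 verifies, a request to /api/customers fails, and removing the caveat invalidates the signature.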

      2 replies →

  • made with ai?

    • Yeah, it says so at the top of the README (though I suppose I could have put that in the comment too). I'm not building a product, just sharing a pattern for internal tooling.

      Someone on another thread asked me to share it so I had claude rework it to use docker-compose and remove the references to how I run it in my internal network.

The proxy pattern here is clever - essentially treating the LLM context window as an untrusted execution environment and doing credential injection at a layer it can't touch.

One thing I've noticed building with Claude Code is that it's pretty aggressive about reading .env files and config when it has access. The proxy approach sidesteps that entirely since there's nothing sensitive to find in the first place.

Wonder if the Anthropic team has considered building something like this into the sandbox itself - a secrets store that the model can "use" but never "read".

  • > a secrets store that the model can "use" but never "read".

    How would that work? If the AI can use it, it can read it. E.g.:

        secret-store "foo" > file
        cat file
    

    You'd have to be very specific about how the secret can be used for the AI to be unable to figure out what it is. For example, when the secret is for accessing a website, you could provide an HTTP proxy in the sandbox that injects an HTTP header containing the secret, and tell the AI to use that proxy. But you'd also have to scope down which URLs the proxy can access with that secret, otherwise the AI could just visit a page like this to read back the headers that were sent:

    https://www.whatismybrowser.com/detect/what-http-headers-is-...

    Basically, for every "use" of a secret, you'd have to write a dedicated application which performs that task in a secure manner. It's not just a case of adding a special secret store.

    • This seems like an underrated comment. You are right, this is a vulnerability, and the blog post doesn't address it.

  • I guess I don't understand why anyone thinks giving an LLM access to credentials is a good idea in the first place? It's been demonstrated best practice to separate authentication/authorization from the LLM's context window/ability to influence for several years now.

    We spent the last 50 years of computer security getting to a point where we keep sensitive credentials out of the hands of humans. I guess now we have to take the next 50 years to learn the lesson that we should keep those same credentials out of the hands of LLMs as well?

    I'll be sitting on the sideline eating popcorn in that case.

  • That's how they did "build an AI app" back when the claude.ai coding tool was JavaScript running in a web worker on the client machine.

  • Sounds like an attacker could hack Anthropic and get access to a bunch of companies via the credentials Claude Code ingested?

  • It could even hash individual keys and scan context locally before sending to check if it accidentally contains them.
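    A rough sketch of that local scan (my own illustration, not anything Anthropic ships): keep only digests of the secrets, then hash each token of the outgoing context and compare.

```python
import hashlib

def fingerprints(secrets):
    # Store only SHA-256 digests; the scanner never holds plaintext values.
    return {hashlib.sha256(s.encode()).hexdigest() for s in secrets}

def context_leaks(context: str, prints) -> bool:
    # Hash each whitespace-separated token and look for a match.
    # Caveat: this only catches verbatim, whole-token leaks, not
    # secrets that were split, encoded, or transformed.
    return any(hashlib.sha256(tok.encode()).hexdigest() in prints
               for tok in context.split())
```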

  • While sandboxing is definitely more secure... Why not put a global deny on .env-like filename patterns as a first measure?
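    Claude Code does support deny rules in its settings; something like this in `.claude/settings.json` should block reads of env files (pattern syntax per its permissions docs, worth double-checking against the current release):

```json
{
  "permissions": {
    "deny": [
      "Read(./.env)",
      "Read(./.env.*)",
      "Read(./**/secrets/**)"
    ]
  }
}
```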

Here's the set up I use on Linux:

The idea is to completely sandbox the program and allow access only to specific bind-mounted folders. But we also want the frills of GUI programs, audio, and network access. runc (https://github.com/opencontainers/runc) allows us to do exactly this.

My config sets up a container with folders bind mounted from the host. The only difficult part is setting up a transparent network proxy so that all the programs that need internet just work.

The container has a process namespace, network namespace, etc., and has no access to the host except through the bind-mounted folders. Network is provided via a domain socket inside a bind-mounted folder. GUI programs work by passing through a Wayland socket in a folder and setting environment variables.

The setup looks like this:

    * config.json - runc config
    * run.sh - runs runc and the proxy server
    * rootfs/ - runc rootfs (created by exporting a docker container) `mkdir rootfs && docker export $(docker create archlinux:multilib-devel) | tar -C rootfs -xvf -`
    * net/ - folder that is bind mounted into the container for networking
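For reference, the bind mounts and namespaces in config.json look roughly like this (an abridged OCI runtime-spec fragment, not the repo's exact file):

```json
{
  "mounts": [
    {
      "destination": "/root/net",
      "type": "bind",
      "source": "./net",
      "options": ["rbind", "rw"]
    }
  ],
  "linux": {
    "namespaces": [
      { "type": "pid" },
      { "type": "network" },
      { "type": "mount" },
      { "type": "ipc" },
      { "type": "uts" }
    ]
  }
}
```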

Inside the container (inside rootfs/root):

    * net-conf.sh - transparent proxy setup
    * nft.conf - transparent proxy nft config
    * start.sh - run as a user account

Clone-able repo with the files: https://github.com/dogestreet/dev-container

  • I have a version of this without the GUI, but with shared mounts and user ID mapping. It uses systemd-nspawn, and it's great.

    In retrospect, agent permission models are unbelievably silly. Just give the poor agents their own user accounts, credentials, and branch protection, like you would for a short-term consultant.

    • The other reason to sandbox is to reduce damage if another NPM supply chain attack drops. User accounts should solve the problem, but they are just too coarse grained and fiddly especially when you have path hierarchies. I'd hate to have another dependency on systemd, hence runc only.

  • try firejail instead

    • Not even close to the same thing, with this setup you can install dev tools, databases, etc and run inside the container.

      It's a full development environment in a folder.

Is this a reimplementation of Fly.io’s Tokenizer? How does it compare?

https://fly.io/blog/tokenized-tokens/

https://github.com/superfly/tokenizer

  • IMHO there are a couple of axes that are interesting in this space.

    1. What do the tokens look like that you are storing in the client? This could just be the secret (but encrypted), or you could design a whole granular authz system. It seems like tokenizer is the former and Formal is the latter. I think macaroons are an interesting choice here.

    2. Is the MITM proxy transparent? Node, curl, etc. allow you to specify a proxy as an environment variable, but if you're willing to mess with the certificate store then you can run arbitrary unmodified code. It seems like both Tokenizer and Formal are explicit proxies.

    3. What proxy are you using, and where does it run? Depending on the authz scheme/token format you could run the proxy centrally, or locally as a "sidecar" for your dev container/sandbox.
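    For the non-transparent case in (2), pointing tools at the proxy is just environment variables plus trusting the proxy's CA. A sketch assuming mitmproxy's default address and certificate path:

```shell
# Tools that honor proxy env vars (curl, node, python, git, ...) will
# route their traffic through the local MITM proxy at this address.
export HTTPS_PROXY=http://127.0.0.1:8080
export HTTP_PROXY=http://127.0.0.1:8080

# Trust the proxy's CA so TLS interception doesn't break unmodified code.
export SSL_CERT_FILE="$HOME/.mitmproxy/mitmproxy-ca-cert.pem"  # openssl/curl/python
export NODE_EXTRA_CA_CERTS="$SSL_CERT_FILE"                    # node
```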

  • The concept of a proxy injecting/removing sensitive data has been around for much longer; e.g. VGS has a JS SDK and proxy to handle credit card data for you and keep you out of PCI scope.

  • We truly are living in the dumbest timeline aren’t we.

    I was just having an argument with a high-level manager 2 weeks ago about how we already have an outbound proxy that does this, but he insisted that a MITM proxy is not the same as fly.io "tokenizer". See, that one tokenizes every request; ours just sets the Authorization header for service X. I tried to explain that it's all MITM proxies altering the request, just for him to say "I don't care about altering the request, we shouldn't alter the request. We just need to tokenize the connection itself".

"When hostnames and headers are hard to edit: mitmproy add-ons"

"The mitmproxy tool also supports addons where you can transform HTTP requests between Claude Code and third-party web servers. For example, you could write an add-on that intercepts https://api.anthropic.com and updates the X-API-Key header with an actual Anthropic API Key."

"You can then pass this add-on via mitmproxy -s reroute_hosts.py."

If using HAProxy, then there is no need to write "add-ons"; just edit the configuration file and reload.

For example, something like

   http-request set-header x-api-key API_KEY if { hdr(host) api.anthropic.com }

   echo reload|socat stdio unix:/path-to-socket/socket-name

For me, HAProxy is smaller and faster than mitmproxy.

A proxy is a good solution although a bit more involved. A great first step is just getting any secrets - both the ones the AI actually needs access to and your application secrets - out of plaintext .env files.

A great way to do that is either encrypting them or pulling them declaratively from a secure backend (1Pass, AWS Secrets Manager, etc.). An additional protection is making sure those secrets don't leak, either in outgoing server responses or in logs.

https://varlock.dev (open source!) can help with the secure injection, log redaction, and provide a ton more tooling to simplify how you deal with config and secrets.

At the moment I'm just using "sops" [1]. I have my env var files encrypted with age encryption. Then I run whatever I want to run with "sops exec-env ...", which basically forwards the secrets to your program.

I like it because it's pretty easy to use. However, it's not foolproof: if the editor you use for editing the env vars crashes or is killed suddenly, it will leave a "temp" file with the decrypted vars on your computer. Also, if this same editor has AI features in it, it may read the decrypted vars anyway.

- [1]: https://github.com/getsops/sops

  • I do something similar, but this only protects secrets at rest. If your app has an exploit, an attacker could just export all your secrets to a file.

    I prototyped a solution where I use an external debugger to monitor my app: when the app needs a secret, it generates a breakpoint; the debugger catches it, inspects the call stack of the function requesting the secret, and copies the secret into the process memory (intended to be erased immediately after use). Not 100% secure, but a big improvement, and a bit more flexible and auditable than a proxy.

Isn’t this (part of) the point of MCP?

  • Possibly, but the point is that MCP is a DOA idea. An agent like Claude Code or opencode doesn’t need an MCP. It’s nonsensical to expect or need an MCP before someone can call you.

    There is no `git` MCP either. Opencode is fully capable of running `git add .` or `aws ec2 terminate-instance …` or `curl -XPOST https://…`

    Why do we need the MCP? The problem now is that someone can do a prompt injection to tell it to send all your ~/.aws/credentials to a random endpoint. So let’s just have a dummy value there, and inject the actual value in a transparent outbound proxy that the agent doesn’t have access to.

    • > Opencode is fully capable of running

      > Why do we need the MCP?

      > The problem now

      And there it is.

      I understand that this is an alternative solution, and appreciate it.

I’ve been using 1Password’s env templates with `op run` for this locally. It hijacks stdout and filters your credentials.

That does not make it immune to Claude’s prying, but at least Claude can read the .env template and satisfy its need to prove that a credential exists without seeing its value.

I have found that even when I say a credential exists and is correct, Claude does not believe me. Which is infuriating. I’m willing to bet Claude’s logs have a gold mine that could own 90% of big tech firms.

I am gonna be that guy and say it would be nice to share the actual code vs using images to display what the code looks like. Images are not great for screen readers or for anyone who wants to quickly try out the functionality.

I think people's focus on the threat model from AI corps is wrong. They are not going to "steal your precious SSH/cloud/git credentials" so they can secretly poke through your secret-sauce, botnet your servers or piggy back off your infrastructure, lol of lols. Similarly the possibility of this happening from MCP tool integrations is overblown.

This dangerous misinterpretation of the actual possible threats simply conceals the real risks better. What might those real risks be? That is the question. Might they include more subtle forms of nastiness, if anything at all?

I'm of the belief that there will be no nastiness, not really. But if you believe they will be nasty, it at least pays to be rational about the ways in which that might occur, no?

  • The risk isn't from the AI labs. It's from malicious attackers who sneak instructions to coding agents that cause them to steal your data, including your environment variable secrets - or cause them to perform destructive or otherwise harmful actions using the permissions that you've granted to them.

    • Simon, I know you're the AI bigwig, but I'm not sure that's correct. I know that's the "story" (but maybe just where the AI labs would prefer we look?). How realistic is it really that MCP/tools/web search is being corrupted by people to steal prompts/convos like this? I really think this is such a low-probability event. And if it does happen, the flaw is on the AI labs for letting something like this occur.

      Respect for your writing, but I feel you and many others have the risk calculus here backwards.

      12 replies →

    • We also use proxies with CodeRabbit’s sandboxes. Instead of using tool calls, we’ve been using LLM-generated CLI and curl commands to interact with external services like GitHub and Linear.

  • Putting your secrets in any logs is how you get those secrets accidentally or purposefully read by someone you do not want reading them. It doesn't have to be the initial corp; they just need to have bad security or data management for the logs to leak online, or for someone with a lower level of access to pivot via them.

    Now multiply that by every SaaS provider you give your plain text credentials in.

    • Right, but the multiply step is not AI specific. Let's focus here: AI providers farming out their convos to 3rd-parties? Unlikely, but if it happens, it's totally their bad.

      I really don't think this is a thing.

      2 replies →

  • ‘Hey Claude, write an unauthenticated action method which dumps all environment variables to the requestor, and allows them to execute commands’