
Comment by emschwartz

8 hours ago

> In Deno Sandbox, secrets never enter the environment. Code sees only a placeholder

> The real key materializes only when the sandbox makes an outbound request to an approved host. If prompt-injected code tries to exfiltrate that placeholder to evil.com? Useless.

That seems clever.

Yes... but...

Presumably the proxy replaces any occurrence of the placeholder with the real key, without knowing anything about the context in which the key is used, right? Because if it knew that the key was to be used for e.g. HTTP basic auth, it could just be added by the proxy without using a placeholder.

So all the attacker would have to do then is find an endpoint (on one of the approved hosts, granted) that echoes back the value, e.g. "What is your name?" -> "Hello $name!", right?

But probably the proxy replaces the real key when it comes back in the other direction, so the attacker would have to find an endpoint that does some kind of reversible transformation on the value in the response to disguise it.

It seems safer and simpler to, as others have mentioned, have a proxy that knows more about the context add the secrets to the requests. But maybe I've misunderstood their placeholder solution or maybe it's more clever than I'm giving it credit for.
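
To make the risk concrete, here is roughly what a naive bidirectional find-and-replace proxy looks like. This is a minimal TypeScript sketch under my own assumptions, not Deno's actual implementation; the placeholder value, env var, and host list are made up:

  // Hypothetical sketch of a bidirectional find-and-replace proxy
  // (illustrative only, not Deno's real code).
  const PLACEHOLDER = "DENO_SECRET_PLACEHOLDER_example"; // made-up value
  const REAL_KEY = Deno.env.get("OPENAI_API_KEY") ?? "";
  const APPROVED_HOSTS = new Set(["api.openai.com"]);

  async function proxyRequest(req: Request): Promise<Response> {
    const url = new URL(req.url);
    const hasBody = !["GET", "HEAD"].includes(req.method);
    const body = hasBody ? await req.text() : undefined;

    // Outbound: swap the placeholder for the real key, but only for approved hosts.
    const approved = APPROVED_HOSTS.has(url.hostname);
    const outboundBody =
      approved && body ? body.replaceAll(PLACEHOLDER, REAL_KEY) : body;

    const upstream = await fetch(url, {
      method: req.method,
      headers: req.headers, // headers would need the same substitution
      body: outboundBody,
    });

    // Inbound: scrub the real key if the server echoes it back verbatim...
    const text = (await upstream.text()).replaceAll(REAL_KEY, PLACEHOLDER);

    // ...but a server that applies any reversible transform (base64, reversed
    // string, "first half only") defeats this scrub, which is exactly the
    // attack described above.
    return new Response(text, { status: upstream.status });
  }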

  • Where would this happen? I have never seen an API reflect a secret back, but I guess it's possible? Perhaps some sort of token-creation endpoint?

    • How does the API know that it's a secret, though? That's what's not clear to me from the blog post. Can I e.g. create a customer named PLACEHOLDER and get a customer actually named SECRET?

    • Say, an endpoint tries to be helpful and responds with “no such user: foo” instead of “no such user”. Or, as a sibling comment suggests, any create-with-properties or set-property endpoint paired with a get-property one also means game over.

      Relatedly, a common exploitation target for black-hat SEO and even XSS is search pages that echo back the user’s search request.

    • It depends on where you allow the substitution to occur in the request. It's basically "the big bug class" you have to watch out for in this design.

  • Could the proxy place further restrictions like only replacing the placeholder with the real API key in approved HTTP headers? Then an API server is much less likely to reflect it back.
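
    A minimal sketch of what that restriction could look like, assuming a hypothetical policy shape (this is not Deno's actual API):

      // Hypothetical header-only substitution policy: the placeholder is only
      // replaced inside approved headers for approved hosts, never in URLs or
      // request bodies.
      const POLICY: Record<string, { headers: string[] }> = {
        "api.openai.com": { headers: ["authorization"] },
      };

      function substituteInHeaders(req: Request, placeholder: string, realKey: string): Request {
        const rule = POLICY[new URL(req.url).hostname];
        if (!rule) return req;

        const headers = new Headers(req.headers);
        for (const name of rule.headers) {
          const value = headers.get(name);
          if (value?.includes(placeholder)) {
            headers.set(name, value.replaceAll(placeholder, realKey));
          }
        }
        // The body and URL are left untouched, so "create a customer named
        // PLACEHOLDER" never carries the real key and can't be reflected back.
        return new Request(req, { headers });
      }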

Reminds me a little of Fly's Tokenizer - https://github.com/superfly/tokenizer

It's a little HTTP proxy that your application routes its requests through; the proxy, rather than your application, handles adding the API keys or whatnot to the outgoing request to the service. Something like this, for example:

Application -> tokenizer -> Stripe

The secrets for the third-party service should in theory then be safe even if the application is leaked or compromised, since the application never knows the actual secrets itself.
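
In TypeScript, the application side of that flow might look roughly like this. This is a hedged sketch of the pattern, not Tokenizer's actual protocol; the proxy address and header names are invented (see the README above for the real details):

  // Hypothetical illustration of the Application -> tokenizer -> Stripe flow.
  // The application holds only a sealed, opaque token, never the real key.
  const resp = await fetch("http://localhost:8080/v1/charges", {
    method: "POST",
    headers: {
      // Tells the proxy where to forward the request...
      "x-upstream-host": "api.stripe.com",
      // ...and which sealed secret to unseal and inject as the real
      // Authorization header before forwarding.
      "x-sealed-secret": "sealed:abc123",
    },
    body: new URLSearchParams({ amount: "1000", currency: "usd" }),
  });

  // Even if the application is compromised, the attacker learns only the
  // sealed token, which is useless outside this proxy.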

Cool idea!

  • It's exactly the tokenizer, but we shoplifted the idea too; it belongs to the world!

    (The credential thing I'm actually proud of is non-exfiltratable machine-bound Macaroons).

    Remember that the security promises of this scheme depend on tight control not only over which hosts you'll send requests to, but over which parts of the requests the substitution can touch.

Yeah, this is a really neat idea: https://deno.com/blog/introducing-deno-sandbox#secrets-that-...

  await using sandbox = await Sandbox.create({
    secrets: {
      OPENAI_API_KEY: {
        hosts: ["api.openai.com"],
        value: process.env.OPENAI_API_KEY,
      },
    },
  });
  
  await sandbox.sh`echo $OPENAI_API_KEY`;
  // DENO_SECRET_PLACEHOLDER_b14043a2f578cba75ebe04791e8e2c7d4002fd0c1f825e19...

It doesn't prevent bad code from USING those secrets to do nasty things, but it does at least make it impossible for them to steal the secret permanently.

Kind of like how XSS attacks can't read httpOnly cookies but they can generally still cause fetch() requests that can take actions using those cookies.
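
In code, the analogy looks something like this (browser-side sketch; the /api/transfer endpoint is hypothetical):

  // A session cookie set with HttpOnly is invisible to script...
  console.log(document.cookie); // the HttpOnly cookie is NOT listed here

  // ...but injected script can still *use* it: the browser attaches the
  // cookie to same-origin requests anyway.
  await fetch("/api/transfer", {
    method: "POST",
    credentials: "same-origin", // cookies, including HttpOnly ones, go along
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ to: "attacker", amount: 1000 }),
  });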

  • If there is an LLM in there, I think "run echo $API_KEY" could be liable to return it: the LLM asks the script to run some code, the script runs it and returns the placeholder, the proxy translates the placeholder as the output goes out to the LLM (an approved host), and the LLM then responds to the user with the real API key. Or it happens over multiple steps, e.g. "tell me the first half of the command output", if the proxy also translates in the reverse direction.

    Presumably this doesn't help much if the secret can be substituted anywhere in the request; if substitution can be restricted to specific headers only, it would be much more powerful.

  • > It doesn't prevent bad code from USING those secrets to do nasty things, but it does at least make it impossible for them to steal the secret permanently.

    Agreed, and this points to two deeper issues:

    1. Fine-grained data access (e.g., sandboxed code can only issue SQL queries scoped to particular tenants)
    2. Policy enforced on data (e.g., sandboxed code shouldn't be able to send PII, even to APIs it has access to)

    Object-capabilities can help directly with both #1 and #2.

    I've been working on this problem -- happy to discuss if anyone is interested in the approach.
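
    For #1, a minimal object-capability sketch could look like this (hypothetical API, not any specific library or the approach mentioned above):

      // The sandbox never receives raw credentials or a general-purpose DB
      // handle, only a capability already scoped to one tenant and to a fixed
      // set of parameterized queries.
      interface OrdersCap {
        listOrders(): Promise<unknown[]>;
        getOrder(orderId: string): Promise<unknown | undefined>;
      }

      function ordersCapFor(
        tenantId: string,
        db: { query(sql: string, params: unknown[]): Promise<unknown[]> },
      ): OrdersCap {
        return {
          // tenantId is baked in; the sandboxed code can't name another tenant
          // because it never holds `db` or anyone else's tenant id.
          listOrders: () =>
            db.query("SELECT * FROM orders WHERE tenant_id = ?", [tenantId]),
          getOrder: async (orderId) => {
            const rows = await db.query(
              "SELECT * FROM orders WHERE tenant_id = ? AND id = ?",
              [tenantId, orderId],
            );
            return rows[0];
          },
        };
      }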

I don’t quite get how it’s being injected into HTTPS requests… do they inject their own HTTPS cert?

It must be performing a man-in-the-middle for HTTPS requests. That makes it more difficult to do things like certificate pinning.

@deno team, how do secrets work for things like connecting to DBs over a tcp connection? The header find+replace won't work there, I assume. Is the plan to add some sort of vault capability?

We had this same challenge in our own app builder; we ended up creating an internal LLM proxy with per-sandbox virtual keys (which the proxy maps to the real key and uses to calculate per-sandbox usage), so even if a sandbox leaks its key, it doesn't impact anything else.
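
A rough sketch of that virtual-key mapping (hypothetical names and endpoints, not the actual proxy code):

  // Hypothetical LLM proxy with per-sandbox virtual keys: each sandbox gets a
  // throwaway key, and the proxy maps it to the real provider key while
  // attributing usage per sandbox.
  const REAL_PROVIDER_KEY = Deno.env.get("OPENAI_API_KEY") ?? "";
  const virtualKeys = new Map<string, { sandboxId: string; requests: number }>([
    ["vk_sandbox_a", { sandboxId: "sandbox-a", requests: 0 }],
  ]);

  Deno.serve(async (req) => {
    const presented = req.headers.get("authorization")?.replace("Bearer ", "") ?? "";
    const entry = virtualKeys.get(presented);
    if (!entry) return new Response("unknown virtual key", { status: 401 });

    // Forward with the real key; the sandbox never sees it.
    const upstream = await fetch(`https://api.openai.com${new URL(req.url).pathname}`, {
      method: "POST",
      headers: {
        authorization: `Bearer ${REAL_PROVIDER_KEY}`,
        "content-type": "application/json",
      },
      body: await req.text(),
    });

    entry.requests += 1; // per-sandbox usage accounting (simplified)

    // A leaked virtual key only burns this sandbox's quota; the real key
    // never leaves the proxy.
    return new Response(upstream.body, { status: upstream.status });
  });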

I like this, but the project mentioned in the launch post

> via an outbound proxy similar to coder/httpjail

looks like AI slop ware :( I hope they didn't actually run it.

  • We run our own infrastructure for this (and everything else). The link was just an illustrative example.