Comment by ctmnt
4 days ago
This suffers from all the usual flaws of env variable secrets. The big one being that any other process being run by the same user can see the secrets once “injected”. Meaning that the secrets aren’t protected from your LLM agent at all.
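For anyone who hasn't seen this bite: a minimal Python demo of the inheritance problem (names are made up; on Linux, a process run by the same user can also just read `/proc/<pid>/environ`):

```python
import os
import subprocess
import sys

# A secret "injected" into the environment, enveil-style:
os.environ["API_KEY"] = "hunter2"

# Any other process run as the same user -- including everything the
# LLM agent shells out to -- inherits it, no exploit required:
leaked = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['API_KEY'])"],
    capture_output=True, text=True,
).stdout.strip()
```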
So really all you’re doing is protecting against accidental file ingestion. Which can more easily be done via a variety of other methods. (None of which involve trusting random code that’s so fresh out of the oven its install instructions are hypothetical.)
There are other mismatches between your claims / aims and the reality. Some highlights: You’re not actually zeroizing the secrets. You call `std::process::exit()` which bypasses destructors. Your rotation doesn’t rotate the salt. There are a variety of weaknesses against brute forcing. `import` holds the whole plain text file in memory.
Again, none of these are problems in the context of just preventing accidental .env file ingestion. But then why go to all this trouble? And why make such grand claims?
Stick to established software and patterns, don’t roll your own. Also, don’t use .env if you care about security at all.
My favorite part: I love that “wrong password returns an error” is listed as a notable test. Thanks Claude! Good looking out.
This is amazing. I agree with your take except "You're not actually zeroizing the secrets"... I think it is actually calling `zeroize()` explicitly after use.
Can I get your review/roast on my approach with OrcaBot.com? DM me if I can incentivize you. Code is available:
https://github.com/Hyper-Int/OrcaBot
enveil = encrypt-at-rest, decrypt-into-env-vars and hope the process doesn't look.
Orcabot = secrets never enter the LLM's process at all. The broker is a separate process that acts as a credential-injecting reverse proxy. The LLM's SDK thinks it's talking to localhost (the broker adds the real auth header and forwards to the real API). The secret crosses a process boundary that the LLM cannot reach.
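The broker idea in miniature (a sketch of the pattern described above, not OrcaBot's actual code; all names and key values are placeholders):

```python
# The real credential lives only in the broker process. The LLM-facing
# process holds a worthless placeholder that the broker swaps out
# before forwarding to the real API.
REAL_KEY = "sk-real-key"          # broker-side only
PLACEHOLDER = "sk-placeholder"    # the only value the LLM ever sees

def inject_auth(headers: dict) -> dict:
    """Replace the placeholder bearer token with the real one."""
    out = dict(headers)
    if out.get("Authorization") == f"Bearer {PLACEHOLDER}":
        out["Authorization"] = f"Bearer {REAL_KEY}"
    return out

# The LLM's SDK sends this to localhost...
request_headers = {"Authorization": f"Bearer {PLACEHOLDER}"}
# ...and the broker forwards this upstream:
forwarded = inject_auth(request_headers)
```

Even if the agent exfiltrates its own environment, all it has is the placeholder.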
I think we're both right about zeroize. Added a reply to clarify. In short, yes, the key and password are getting zeroized, but not the actual secrets. Which seems like the thing that matters in this context, at least given the tool's stated aims.
OrcaBot: There's a lot there! Ambitious project. Cute name, who doesn't love orcas? I don't see anything screamingly bad, of the variety that would inspire me to write essays about random people's code.
Some thoughts:
- The line between dev mode and production is a bit thin and lightly enforced. Given the overall security approach, you could firm that up.
- The within-VM shared workspace undermines the isolated PTYs.
- If your rate-limiting middleware fails, you allow all requests through.
- `SECRETS_ENCRYPTION_KEY` is the one ring, and it doesn't have any versioning or rotation mechanism.
In general it seems like a good approach! But there are spots where one thing being misconfigured could blow the entire system open. I suggest taking a pass through it with that in mind. Good luck.
Thank you! Great feedback. Again agree with you on all points. Will take it onboard!
To be clear: `zeroize()` is called, but only on the key and password. Which is what the docs say, so I was being unfair when I lumped that under grand claims not being met. However! The actual secrets are never zeroized. They're loaded into plain `String` / `HashMap<String, String>`.
Again, not actually a problem in practice if all you're doing is keeping yourself from storing your secrets in plain text on your disk. But if that's all you care about, there are many better options available.
What is your recommended alternative to .env files?
In the context of traditional SaaS, using dynamic secrets loaded at runtime (KMS+Dynamo, etc.).
For agentic tools and pure agents, a proxy is the safest approach. The agent can even think it has a real API key, but said key is worthless outside of the proxy setting.
It surprises me how often I see some Dockerfile, Helm, Kubernetes, Ansible, etc. setup write .env files to disk in a production-like environment.
The OS, especially Linux (the most common host for production software), is perfectly capable of setting and providing env vars. Almost all common devops and older sysadmin tooling can set them. There's really no need to ever write these to disk.
I think this comes from developers who assume that a .env file, plus runtime logic in the app that reads it (dotenv libs), is required for this to work. I certainly see this misconception a lot with (junior) developers working on Windows.
- You don't need dotenv libraries searching for files and parsing them in your app's runtime. Just leave it to the OS to provide the env vars, and read those in your app.
- Yes, also on your development machine. Plenty of tools, from direnv to the bazillion "dotenv" runners, will do this for you. But even those aren't required; you could just set env vars in .bashrc, /etc/environment (don't put them there, though), etc.
- Yes, even on Windows there are plenty of options, even when developers refuse to or can't use WSL. Various tools, but in the end, just `set foo=bar`.
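To make the "skip dotenv entirely" point concrete, here's all the app-side code that's actually needed (variable name is a placeholder; the OS, systemd unit, Docker `-e` flag, or CI runner has already set it):

```python
import os

def get_database_url() -> str:
    """Read config straight from the process environment.

    No dotenv library, no file search, no parsing -- and fail fast
    if the deployment forgot to set the variable.
    """
    url = os.environ.get("DATABASE_URL")
    if url is None:
        raise RuntimeError("DATABASE_URL is not set")
    return url
```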
These are AWS-centric, right? What about simple, no-cloud setups with just docker compose, or even bare processes on a VPS?
Really depends on your threat model and use case. The problems with .env files: plain text on disk, no access control, no rotation mechanism, no audit trail, trivial to leak accidentally, secrets go into env variables (which are exposed and often leak). Which of those do you care about? What are you trying to prevent?
At the simplest level, keeping .env-ish files, use sops + age [1] or dotenvx [2] (or similar) to encrypt just the values. You keep the .env file approach, the actual secrets are encrypted, and now you can check the file in and track changes without leaking your secrets. You still have the env variable problems.
There are some options that'll use virtual files to get your secrets from a vault to your process's env variables, or you can read the secrets from a secret manager yourself into env variables, but that feels like more complexity without a lot more gain to me. YMMV.
You could use a regular password manager (your OS's keychain, 1Password and its ilk, etc) if you're just working on your own. Also in the more complexity without much gain category for me.
If you want to use a local file on disk, you could use a config file with locked down permissions, so at least it's not readable by anything that comes along. ssh style.
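The ssh-style version of that, sketched in Python (paths and contents are illustrative):

```python
import os
import stat
import tempfile

# Write the credential file, then lock it to owner-only read/write
# (0600), the way ssh insists on for private keys. A stray process
# running as another user can't read it.
path = os.path.join(tempfile.mkdtemp(), "credentials")
with open(path, "w") as f:
    f.write("api_key = hunter2\n")
os.chmod(path, 0o600)  # rw owner, nothing for group/other

mode = stat.S_IMODE(os.stat(path).st_mode)
```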
Better is to have your code (because we're talking about your code, I assume) read from secret managers itself. Whether that's Bitwarden, AWS / GCP / Azure (well, maybe not Azure), Hashicorp, or one of the many other enterprisey options. That way you get an audit trail and easy rotation, plus no env variables and no plain text at rest. You can still leak them, but you have fewer ways to do so.
Speaking of leaking accidentally, the two most common paths: logging output and Docker images. The first is self-explanatory, though don't forget about logging HTTP requests with auth headers you don't want exposed. The second is missed by a lot of people. If you inject secrets into your Dockerfile via `ARG` or `ENV`, they get baked into the image and are easy to get back out. Use `--mount=type=secret` etc. (Never use the old Docker base64-in-config secrets. That's just silly.)
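The Dockerfile point, concretely (secret id and command are illustrative):

```dockerfile
# BAD: the value is baked into image layers and recoverable via
# `docker history` or layer inspection.
ARG NPM_TOKEN
ENV NPM_TOKEN=$NPM_TOKEN

# BETTER: a BuildKit secret mount -- the file exists only during this
# RUN step and is never written into a layer.
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN="$(cat /run/secrets/npm_token)" npm ci
```

Build the second form with `docker build --secret id=npm_token,src=./token.txt .`.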
There are other permutations and in-between steps, these are just the big ones. Like all security stuff, the details really depend on your specific needs. It is easy to say, though, that plain text .env files injected into env variables are at the bad end of the spectrum. Passing the secrets in as plain text args on the command line is worse, so at least you're not doing that!
1: https://github.com/getsops/sops / https://github.com/FiloSottile/age
2: https://dotenvx.com
This is a great breakdown. Particularly the point about Docker ARG/ENV baking secrets into images — that catches so many teams.
On the "read from secret managers directly" option — that's the ideal but the friction is what kills adoption. Most small teams look at Vault's setup guide and go back to .env files. Doppler and Infisical lowered that bar but they're still priced for enterprise ($18/user/mo for Doppler's team plan).
I've been building secr (https://secr.dev) to try to hit the sweet spot: real encryption (AES-256-GCM, envelope encryption, KMS-wrapped keys) with a CLI that feels as simple as dotenv. `secr run -- npm start` and your app reads `process.env` like normal. Plus deployment sync, so you can `secr push --target render` instead of copy-pasting into dashboards.
The env variable leakage problem you mention is real and something I don't think any tool fully solves without the proxy approach hardsnow described. But removing the plaintext-file-on-disk vector and the sharing-over-Slack vector covers the majority of real-world leaks.