Comment by kstrauser
15 hours ago
I think this is wrong about what “sensitive” means here. AFAIK, all Vercel env vars are encrypted. The sensitive checkbox means that a developer looking at the env var can’t see what value is stored there. It’s a write-only value. Only the app can see it, via an env var (which obviously can’t be encrypted in such a way that the app can’t see it, otherwise it’d be worthless). If you don’t check that box, you can view the value in the project UI. That’s reasonable for most config values. Imagine “DEFAULT_TIME_ZONE” or such. There’s nothing gained from hiding it, and it’d be a pain in the ass come troubleshooting time.
So sensitive doesn’t mean encrypted. It means the UI doesn’t show the dev what value’s stored there after they’ve updated it. Not sensitive means it’s still visible. And again, I presume this is only a UI thing, and both kinds are stored encrypted in the backend.
I don’t work for Vercel, but I’ve used them a bit. I’m sure there are valid reasons to dislike them, but this specific bit looks like a strawman.
> Only the app can see it, via an env var (which obviously can’t be encrypted in such a way that the app can’t see it, otherwise it’d be worthless)
Yeah, I'm very confused. It's not possible to encrypt env vars that the program needs; even if it's encrypted at rest, it needs to be decrypted anyway before starting the program. Env vars are injected as plain text. This is just how this works, nothing to do with Vercel.
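A minimal sketch of the point above: a parent process "injects" a secret the way a deploy platform does, by setting it in the child's environment, and the child reads it back as plain text. `SECRET_TOKEN` is a made-up name for illustration; no decryption step exists anywhere, because the environment is just a string map.

```python
import os
import subprocess
import sys

# The parent injects the secret into the child's environment,
# the same way a platform injects configured env vars at deploy time.
child_env = {**os.environ, "SECRET_TOKEN": "hunter2"}

# The child sees the value as an ordinary plain-text string; there is
# no "encrypted env var" from the program's point of view.
out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['SECRET_TOKEN'])"],
    env=child_env,
    capture_output=True,
    text=True,
)
print(out.stdout.strip())  # the secret, in the clear
```

Whatever encryption-at-rest the platform does on the backend, this is what the running program ends up with.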
This situation could someday improve with fully homomorphic encryption (so the server operates on encrypted data without ever decrypting it), but that would have very high overhead for the entire program. It's not realistic (yet).
You always get people screaming 'it should have been encrypted!' when there's a leak, without understanding what encryption can and can't do in principle and in practice (it most certainly isn't a synonym for 'secure' or 'safe').
Encryption turns your data confidentiality problem into a key management problem.
Also, if you want to keep a secret secret forever: data that was encrypted but saved may be easily decrypted in the future. In reality, though, most secrets are much less useful X years from now.
Whenever someone says "But it should have been encrypted!" about things like configs on a server, I ask them how they'd implement that in practice.
PoC or GTFO.
I think you'll find it's a bit harder to do than you expect.
> Whenever someone says "But it should have been encrypted!" about things like configs on a server, I ask them how they'd implement that in practice.
Short practical answer: Use a USB HSM plugged into your server and acknowledge that it is an imperfect solution.
For configs, I used to setuid the executable so that it starts up as a user that can read the file, it reads the file into RAM in the first 5 lines in `main` then drops privs immediately to a user that can't read the file, and then continues as normal.
This was to ensure that if the application was compromised, the config could not be changed by the application itself, nor could it be read once the program was running.
If you wanted to keep it encrypted without leaking the key, you could do the same, except that the key would also be read at startup (or, preferably, get a data key from the USB HSM, and use that for decryption).
Of course, that moves the problem of "read the first key from disk" to "read the HSM pin from disk".
You can have your supervising program, like a K8s cluster, inject the correct keys into the pod as it's created, but that cluster itself needs a root key to decrypt those keys, and that has to come from somewhere too.
There is, at the end of the day, only one perfect solution: when the program starts up it waits for user input - either the decryption key or the HSM pin - that it uses as a root key to decrypt everything else.
There is no other way that isn't "store some root key, credential, token, etc on the computer".
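The read-then-drop-privileges pattern described above can be sketched roughly as follows. This is an illustrative Python version under assumed names (`config_path`, `unprivileged_uid`); a real deployment would use a setuid binary started as a privileged user that can read the file.

```python
import os

def read_config_then_drop_privs(config_path: str, unprivileged_uid: int) -> str:
    # 1. Read the secret config into memory while still privileged,
    #    as early as possible in main().
    with open(config_path) as f:
        config = f.read()

    # 2. Drop privileges immediately. For a root-started process,
    #    os.setuid is irreversible: once dropped, the process can no
    #    longer re-open (or modify) the file if its permissions
    #    exclude the new uid.
    if os.getuid() == 0:  # only meaningful when started privileged
        os.setuid(unprivileged_uid)

    # 3. Continue as normal, holding the secret only in RAM.
    return config
```

Run unprivileged, the drop is a no-op and the function just reads the file; the security property only kicks in when the process starts as a user that can read the config and drops to one that can't.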
I don't know how it works on Vercel, but on other platforms it usually means that the value will be redacted in logs as well.
Where I work we started using Vault, and you store the Vault key (as in lookup key) as a regular non-hidden env var. I think this is probably more solid.
Yeah, the Vault model, where you just refer to the secret’s path (where it is hopefully also dynamically generated and revoked after use), based on short-lived OIDC-style auth, is about the safest mechanism possible for this sort of secrets management. I’ve been trying to spread this pattern everywhere I’ve worked for a decade now. But it’s a lot of work to set up and maintain.
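A rough sketch of that pattern: only the secret's *path* lives in a plain, non-hidden env var, and the app resolves it at runtime. Here `fetch_from_vault` is a stand-in stub for a real client call (e.g. the hvac library against HashiCorp Vault), and `VAULT_SECRET_PATH` is a made-up variable name, so this only illustrates the shape of the design, not a real API.

```python
import os

def fetch_from_vault(path: str) -> str:
    # Stand-in for a real Vault read. In production this would
    # authenticate with short-lived OIDC-style credentials and fetch
    # the secret at `path`, which may itself be dynamically generated
    # and revoked after use.
    fake_backend = {"secret/data/myapp/db": "s3cr3t-password"}
    return fake_backend[path]

# The env var holds only a reference, which is not sensitive by
# itself; the actual secret never sits in the environment.
os.environ.setdefault("VAULT_SECRET_PATH", "secret/data/myapp/db")
db_password = fetch_from_vault(os.environ["VAULT_SECRET_PATH"])
```

The win is that leaking the environment (logs, crash dumps, a misconfigured UI) leaks only a pointer, and access to the real value can be audited and revoked centrally.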
This is also how other cloud providers do it, eg DigitalOcean.
But if they are readable to the “developer” then they are readable to anyone who gets access to the developer’s Vercel credentials. If Vercel provides a way to avoid that that didn’t get used, that’s the failure. Sure, you can quibble with the exact understanding of the author over whether they were “encrypted” or not. That’s not really the key factor here.
There are appropriate uses for both. Your database password should be write-only and not viewable later. Your time zone should be read-write for easy debugging when things go wrong. Vercel gives you both options. The user chose badly here, and IMO that’s not Vercel’s fault.