Show HN: Stasher – Burn-after-read secrets from the CLI, no server, no trust

5 days ago (github.com)

Stasher is a tiny CLI tool that lets you share encrypted secrets that burn after reading — no accounts, no logins, no servers to trust.

I built it because I just wanted to share a password. Not spin up infra. Not register for some "secure" web app. Not trust Slack threads. Just send a secret.

Secrets are encrypted client-side with AES-256-GCM. You get a `uuid:key` token to share. Once someone reads the secret, it's gone. If they don't read it within 10 minutes, it expires and is deleted.

Everything is verifiable. Every release is signed, SLSA-attested, SBOM-included, and logged in the Rekor transparency log. Every line of code is public.

There's also a browser-based companion: https://app.stasher.dev — works in a sandboxed popup using the same encrypted model. Share from the terminal, pick up in the browser.

No data stored unencrypted. No metadata. No logs. No surveillance.

---

GitHub (CLI): https://github.com/stasher-dev/stasher-cli
GitHub (App): https://github.com/stasher-dev/stasher-app
API (Cloudflare Worker): https://github.com/stasher-dev/stasher-api
CI/CD (Open): https://github.com/stasher-dev/stasher-ci
NPM: https://www.npmjs.com/package/stasher-cli
Website: https://stasher.dev
Browser App: https://app.stasher.dev (runs in sandbox from https://dev.stasher)

Built with Cloudflare Workers, KV, and Durable Objects. All code open, auditable, and signed.

Try it:

```bash
npx enstash "vault code is 1234#"
npx destash "uuid:base64key"
```

thanks for reading

You built it because you wanted to share passwords:

And your flow is: I encrypt my password; I upload the encrypted password to your server.

And I share the password for the encrypted password as plain text.

Why do I have to upload the encrypted password to your server, instead of just using Signal disappearing messages, or Telegram secret chat disappearing messages, to share the encrypted password there?

And I can use any other side channel to share the second password, like WhatsApp, or regular plain mail.

It feels to me that you made a two step process into a one step process but increased the risk by adding you in the middle.

Why would I offload my trust to you instead of doing the second step?

  • Your skepticism is valid. If your flow already includes a secure messaging tool (e.g. Signal), a GPG workflow or local encryption, or a team that uses shared password vaults, then, to be fair, Stasher might not be better.

    I built Stasher for me. I wanted an easy, CLI-first way to share one-time secrets without worrying about accounts, apps, or trust. If Signal or GPG works better for you, that's totally cool.

    Stasher exists to make casual, secure sharing simpler, not to replace tools you already trust.

    • Yes, valid, congratulations on shipping!

      It's just that the barrier to adopting a new tool (for other people) is:

      Convince my recipient to use this system instead of "Why not just send the password as we usually do on our secret chat."

      And then we spend 20 minutes talking about it, with me advocating for its unknown and unaccountable creator.

I'm sorry, but I would never use this because of two major dealbreakers (and I would encourage others to exercise serious caution as well):

1. Code is largely if not entirely written by AI

2. Author is completely anonymous, using a dedicated GitHub and HN account for this specific project

Both of these are really bad for security-sensitive software.

  • I’d also add the language to the mix. I know you can write good code with TS/JS, but the dependency surface is just so large, I’m not comfortable with security code written in it yet (maybe at some point). Add that the repositories were created in the past week, so we can’t see the actual dev practice (was it all vibe coded? What bugs were there?).

    I hadn’t considered your second point, but even the author's GH account has an AI picture. I have no idea who this person is or what online/HN reputation they have.

  • Thanks for raising these concerns — totally fair in the context of security tools.

    I’m not anonymous, just cautious. I’m a solo builder, and this is a focused identity for the project. In fact, that's why I implemented full supply chain transparency from day one: signed releases, SLSA attestations, SBOMs, and Rekor logs. You don't need to trust me; you can see the code for yourself.

    Ultimately, you're right — if you can't verify it, you shouldn't trust it.

    That’s the whole point of the system: zero trust and verifiable cryptographic guarantees.

    Appreciate the scrutiny.

    • A "focused identity" with no links to other identities is anonymous by definition.

      More importantly, this project is not "zero trust" and calling it such is borderline deceptive.

      I can verify the artifacts you're shipping contain the code in the repo (or I could just clone the repo myself), but I cannot automatically verify that your code is non-malicious and free of bugs. That is what I am trusting when using your software, and I have serious doubts about the "free of bugs" part for AI generated software.


    • Cryptography/security is a trust business. Without some kind of personal (or even project) history, I know nothing about you or the project. And if I can’t verify you, I can’t trust you. The rest doesn’t matter much to me.

      But maybe that’s just me.


I feel like I’d rather send “uuid:ciphertext” so the ciphertext never touches a server, but logically the security seems the same.

  • Hey. Only the ciphertext is stored on the server; the key never leaves your machine. The uuid:key format is just a pointer to the encrypted payload. Without the key, the server's stash is useless. Zero-knowledge by design.

I wish it was easier to run code in browser that you could know did not make any network connections, thinking mostly of the client creating secrets here.

  • What's preventing setting up a proxy (like mitmproxy, or the Burp Suite interceptor) in front of the browser? Pretty easy.

    • That requires a dedicated instance of your browser as (AFAIK) most browsers don't support per-tab proxy configuration. If I understood correctly, parent wants tabs to work normally but offline tabs (like the secret generator) to be airgapped.


    • Cumbersome but easy. I have no problem with it but I have a real problem teaching users about it.

>LLM generated

>Buzzwords

>Author's English (when not written by a LLM) sounds translated

Doesn't inspire confidence.

Use GPG, it's not difficult. For non-technical folks, use signal or disappearing messages. For slightly more secure comms with non-technical people, use a combination of rot13 / caesar / similar.

  • > Use GPG, it's not difficult.

    Heh, maybe not for you and me, but if you've never tried to coach a less-technical person through setting up GPG and then using it to encrypt/decrypt, it's an eye-opening experience.

  • >>Author's English (when not written by a LLM) sounds translated

    >Doesn't inspire confidence.

    I too have confidence only in projects made by those who English good. It's the reason why I'm estranged from my Swahili-speaking parents.

    • Low language fluency combined with a purported high social effect / professional pretense is a good indicator of fraud. ESL speakers often translate English from their native tongue (which carries over the structure of that tongue) instead of speaking it "natively". The author's English is pretty abysmal, and this doesn't bode well for his technical ability.

The commit history and messages do not inspire confidence. Everything seems generated by AI. Both facts show that you don't seem to know what you are doing, which is not a good sign for a security tool.

Wouldn't that command line end up in the .bash_history of the logged-in user?

You probably want to `unset HISTFILE` and then `set +o history` before running these commands in bash.

  • Great point: I’m planning to add a --stdin option explicitly for cases like this. Thanks for raising it. I'll add a note to the README in the meantime.

Don't pass secrets via CLI args. Not only do they end up in your shell history, but they can easily be grabbed just by inspecting the list of running processes.

And you've got all this "supply chain security" window dressing except nobody knows who you are and there's no community. So we have lots of records verifying that the published artifacts were authentically built by... someone... somehow.

This is AI slop, with a careless, checklisty, notion of what makes software secure.

The marketing language and actual design of the tool are also incoherent ("no server" and "no trust" both contradict how this thing actually works).

This post should probably be not just criticized, but flagged and removed.

Do I understand this correctly that the server here is only needed to make sure the secret is only read once?

  • Yes, you're understanding it correctly: the server (Cloudflare Worker + Durable Object + KV) in Stasher is only needed to enforce the burn-after-read behaviour.
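As a rough illustration of what that enforcement amounts to, here is an in-memory sketch of burn-after-read semantics. The real API uses a Durable Object and KV; the class and method names here are invented for illustration.

```javascript
// Burn-after-read store: first read deletes the entry, reads after the
// TTL (10 minutes in Stasher's case) return nothing.
class StashStore {
  constructor(ttlMs = 10 * 60 * 1000) {
    this.ttlMs = ttlMs;
    this.stashes = new Map(); // uuid -> { ciphertext, expiresAt }
  }

  put(uuid, ciphertext, now = Date.now()) {
    this.stashes.set(uuid, { ciphertext, expiresAt: now + this.ttlMs });
  }

  // First read wins: the entry is deleted before the ciphertext is returned,
  // so a second read (or an expired read) gets null.
  take(uuid, now = Date.now()) {
    const entry = this.stashes.get(uuid);
    this.stashes.delete(uuid);
    if (!entry || now > entry.expiresAt) return null;
    return entry.ciphertext;
  }
}
```

Note the store only ever holds ciphertext; burning it reveals nothing about the plaintext.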

What does "burn-after-read" mean? Just that it can't be retrieved a second time?

Why use this instead of GPG?

  • Zero setup, burn after read, and no key exchange required. GPG is ideal for persistent trust relationships (e.g., signing emails); Stasher is purpose-built for temporary ones. To me, GPG is overkill for sharing simple one-off secrets. Defo not trying to replace GPG, just a different use case.