Bitwarden CLI compromised in ongoing Checkmarx supply chain campaign

15 hours ago (socket.dev)

Anyone know of a better way to protect yourself than setting a min release age on npm/pnpm/yarn/bun/uv (and anything else that supports it)?

Setting min-release-age=7 in .npmrc (needs npm 11.10+) would have protected the 334 unlucky people who downloaded the malicious @bitwarden/cli 2026.4.0, published ~19+ hours ago (see https://news.ycombinator.com/item?id=47513932):

  ~/.npmrc
  min-release-age=7 # days

  ~/Library/Preferences/pnpm/rc
  minimum-release-age=10080 # minutes

  ~/.bunfig.toml
  [install]
  minimumReleaseAge = 604800 # seconds

  # not related to npm, but while at it...
  ~/.config/uv/uv.toml
  exclude-newer = "7 days"

p.s. shameless plug: I was looking for a simple tool that will check your settings / apply a fix, and was surprised I couldn't find one, so I released something (open source, free, MIT yada yada), since sometimes one-click fix convenience increases the chances people will actually use it. https://depsguard.com if anyone is interested.

EDIT: looks like someone else had a similar idea: https://cooldowns.dev

  • I like the idea of a cooldown. But my next question is: would this have been caught if no one updated? I know in practice not everyone would be on a cooldown, but presumably this compromise was only found out because a lot of people did update.

    • > presumably this compromise was only found out because a lot of people did update

      This was supposedly discovered by "Socket researchers", and the product they're selling is proactive scanning to detect/block malicious packages, so I'd assume this would've been discovered even if no regular users had updated.

      But I'd claim even for malware that's only discovered due to normal users updating, it'd generally be better to reduce the number of people affected with a slow roll-out (which should happen somewhat naturally if everyone sets, or doesn't set, their cool-down based on their own risk tolerance/threat model) rather than everyone jumping onto the malicious package at once and having way more people compromised than was necessary for discovery of the malware.

    • That assumes discovering a security bug is random and it could happen to anyone, so more shots on goal is better. But is that a good way to model it?

      It seems like if you were at all likely to be giving dependencies the extra scrutiny that discovers a problem, you'd probably know it? Most of the people who upgraded didn't help, they just got owned.

      A cooldown gives anyone who does investigate more time to do their work.

    • Cooldown sounds like a good idea ONLY IF these so-called security companies can catch malicious dependencies during the cooldown period. Are they actually doing that, or do individual researchers find the malware while these companies make the headlines?

    • It's a trade-off for sure. Maybe companies could have "honeypot" environments where they update everything, deploy their code, and try to monitor for sneaky behavior.

    • If I were in charge of a package manager I would be seriously looking into automated and semi-automated exploit detection so that people didn't have to YOLO new packages to find out if they are bad. The checking would itself become an attack vector, but you could mitigate that too. I'm just saying _something_ is possible.

  • > Anyone know of a better way to protect yourself than setting a min release age on npm/pnpm/yarn/bun/uv (and anything else that supports it)?

    Most of these attacks don't make it into the upstream source, so solutions[1] that build from source get you ~98% of the way there. If you can't get a from-source build and have to pull directly from the registries, you can reduce risk somewhat with a cooldown period.

    For the long tail of stuff that makes it into GitHub, you need to do some combination of heuristics on the commits/maintainers and AI-driven analysis of the code change itself. Typically run that and then flag for human review.

    [1] Here's the only one I know that builds everything from source: https://www.chainguard.dev/libraries

    (Disclaimer: I work there.)

  • > Anyone know of a better way to protect yourself than setting a min release age on npm/pnpm/yarn/bun/uv (and anything else that supports it)?

    With pnpm, you can also use trustPolicy: no-downgrade, which prevents installing packages whose trust level has decreased since older releases (e.g. if a release was published with the npm cli after a previous release was published with the github OIDC flow).

    Another one is to not run post-install scripts (which is the default with pnpm and configurable with npm).

    These would catch most of the compromised packages, as most of them are published outside of the normal release workflow with stolen credentials and are run from post-install scripts.
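    As a sketch of how that looks in config (key names as described in this comment; worth verifying against your pnpm version's docs before relying on them):

```yaml
# pnpm-workspace.yaml -- setting names as described above
trustPolicy: no-downgrade   # refuse releases published with weaker provenance than earlier ones
minimumReleaseAge: 10080    # optional cooldown on top, in minutes (7 days)
```

    On the npm side, `ignore-scripts=true` in .npmrc gets you the no-scripts behavior pnpm has by default, at the cost of manually rebuilding the few packages that genuinely need install scripts.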

  • Cooldowns are passing the buck. These are all caught with security scanning tools, and AI is probably going to be better at this than people going forward, so just turn on the cooldowns server-side. Package updates go into a "quarantine" queue until they are scanned. Only after scanning do they go live.

    • "Just" is doing a lot of work; most ecosystems are not set up or equipped to do this kind of server-side queuing in 2026. That's not to say that we shouldn't do this, but nobody has committed the value (in monetary and engineering terms) to realizing it. Perhaps someone should.

      By contrast, a client-side cooldown doesn't require very much ecosystem or index coordination.

    • The approach you outline is totally compatible with an additional one or two day time gate for the artifact mirrors that back prod builds. Deploy in locked-down non-prod environments with strong monitoring after the scans pass, wait a few days for prod, and publicly report whatever you find, and you're now "doing your part" in real-time while still accounting for the fallibility of your automated tools.

      There's risk there of a monoculture categorically missing some threats if everyone is using the same scanners. But I still think that approach is basically pro-social even if it involves a "cooldown".

      I agree. Even without Project Glasswing (which Microsoft is part of), with cheaper models and Microsoft's compute (Azure, the OpenAI collaboration), it makes no sense that private companies need to scan new package releases and find malware before npm does. I'm sure there's some reason for it (people rely on packages being immediately available on npm, and there's the real use case of patching a zero-day CVE quickly), but until this is fixed fundamentally, I'd say the default should be a cooldown (server-side or not) and you should have to opt in to get the current behavior. This might take years of deprecation though; if it were turned on now, a lot of things would break (e.g. every public CVE disclosure would also have to wait out that additional cooldown... and if Anthropic aren't lying, we're bound for a tsunami of patched CVEs soon...).

  • Regarding doing more than just a minimum release age: The tool I personally use is Aikido "safe-chain". It sets minimum release age, but also provides a wrapper for npm/uv/etc where upon trying to install anything it first checks each dependency for known or suspected vulnerabilities against an online commercial vulnerability database.

  • Stop using JavaScript. Or TypeScript, or whatever excuse they have for a fundamentally flawed language that should have been retired eons ago instead of endlessly patched. JavaScript and its ecosystem have always been a house of cards, and it's been proven time and again. I think this is the 3rd big attack in the last 30 days alone.

  • > ~/.config/uv/uv.toml
    > exclude-newer = "7 days"

    Note that if you get

       failed to parse year in date "7 days": failed to parse "7 da" as year (a four digit integer): invalid digit, expected 0-9 but got
    

    then comment out the exclude and run

      uv self update

  • Compartmentalize. I do development and anything finance / crypto related / sensitive on separate machines.

    If you're brave you can run whonix.

    The issue is developers who have publish access to popular packages - they really should be publishing and signing on a separate machine / environment.

    Same with not doing any personal work on corporate machines (and having strict corp policy - vercel were weak here).

  • I guess this is the case for new installs, but for existing dependencies can’t you simply pin them to a patch release, and point at the sha?
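    For registry packages, yes: drop the range operator and the lockfile's content hash covers the rest. A sketch (package name and version are made up):

```json
{
  "dependencies": {
    "some-dep": "1.4.3"
  }
}
```

    With an exact version there, the entry package-lock.json keeps for it (including its `integrity` sha512 hash) stays stable, and `npm ci` fails rather than silently resolving something new.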

  • I use a separate dev user account (on macOS) for package installations, VSCode extensions, coding agents and various other developer activities.

    I know it's far from watertight (and it's useless if you're working with bitwarden itself), but I hope it blocks the low hanging fruit sort of attacks.

    • Check your home folder permissions on macOS, last time I checked mine were world readable (until I changed them). I was very surprised by it, and only noticed when adding a new user account for my wife.

  • But how do you know which one is good? If foo package sends out an announcement that v1.4.3 was hacked, upgrade now to v1.4.4 and you're on v1.4.3, waiting a week seems like a bad idea. But if the hackers are the one sending the announcement, then you'd really want to wait the week!

    • malicious versions are recalled and removed when caught - so you don't need to update to the next version

    • An announcement isn't a quiet action. One would hope that the real maintainers would notice & take action.

  • Install tools using a package manager that performs builds as an unprivileged user account other than your own, sandboxes builds in a way that restricts network and filesystem access, and doesn't let packages run arbitrary pre/post-install hooks by default.

    Avoid software that tries to manage its own native (external, outside the language ecosystem) dependencies or otherwise needs pre/post-install hooks to build.

    If you do packaging work, try to build packages from source code fetched directly from source control rather than relying on release tarballs or other published release artifacts. These attacks are often more effective at hiding in release tarballs, NPM releases, Docker images, etc., than they are at hiding in Git history.

    Learn how your tools actually build. Build your own containers.

    Learn how your tools actually run. Write your own CI templates.

    My team at work doesn't have super extreme or perfect security practices, but we try to be reasonably responsible. Just doing the things I outlined above has spared me from multiple supply chain attacks against tools that I use in the past few weeks.

    Platform, DevEx, and AppSec teams are all positioned well to help with stuff like this so that it doesn't all fall on individual developers. They can:

      - write and distribute CI templates
      - run caches, proxies, and artifact repositories which might create room to
        - pull through packages on a delay
        - run automated scans on updates and flag packages for risks
        - maybe block other package sources to help prevent devs from shooting themselves in the foot with misconfiguration
      - set up shared infrastructure for CI runners that
        - use such caches/repos/proxies by default
        - sandbox the network for builds
        - help replace, containerize, or sandbox builds that currently only run on some aging Jenkins box on bare metal
      - provide docs
        - on build sandboxing tools/standards/guidelines
        - on build guidelines surrounding build tools and their behaviours (e.g., npm ci vs npm install, package version locking and pinning standards)
      - promote packaging tools for development environments and artifact builds, e.g.,
        - promote deterministic tools like Nix
        - run build servers that push to internal artifact caches to address trust assumptions in community software distributions
        - figure out when/whether/how to delegate to vendors who do these things
    

    I think there are a lot of things to do here. The hardest parts are probably organizational and social; coordination is hard and network effects are strong. But I also think that there are some basics that help a lot. And developers who serve other developers, whether they are formally security professionals or not, are generally well-positioned to make it easier to do the right thing than the sloppy thing over time.

  • The problem with cooldowns is that the more people use them, the less effective they become.

    • The hypothesis you're referring to is something like "if everyone uses a 7-day cooldown, then the malware just doesn't get discovered for 7 days?", right?

      An alternative hypothesis: what if 7-day cooldowns incentivize security scanners, researchers, and downstream packagers to race to uncover problems within a 7-day window after each release?

      Without some actual evidence, I'm not sure which of these is correct, but I'm pretty sure it's not productive to state either one of these as an accepted fact.

    • Well, luckily, those who find the malicious activity are usually companies who do this proactively (for the good of the community, and understandably also for marketing). There are several who seem to be trying to be the first to announce, and they usually succeed. IMHO it should be Microsoft (as owners of GitHub and npm) who takes the helm and spends the tokens to scan each new package for malicious code. It gets easier and easier to detect as models improve (though on the other hand, it also gets easier to create malware and to try to evade detection).

    • That was my first instinct as well but I'm not sure how true it really is.

      Many companies exist now whose main product is supply chain vetting and scanning (this article is from one such company). They are usually the ones writing up and sharing articles like this - so the community would more than likely hear about it even if nobody was actually using the package yet.

The issue was a compromised build pipeline that shipped a poisoned package.

But PSA: If something is critical to the business and you’re using npm, pin your dependencies. I’ve had this debate with other devs throughout the years and they usually point to the lockfile as assurance, but version ranges with a ^ mean that when the lockfile gets updated, you can pull in newer versions you didn’t explicitly choose.

If what you're building can put your company out of business it's worth the hassle.
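Two settings make pinning the default rather than a habit (standard npm config/commands):

```ini
# .npmrc -- record exact versions instead of ^ranges when adding deps
save-exact=true
```

And in CI, prefer `npm ci` over `npm install`: it installs exactly what the lockfile says and errors out instead of updating it.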

https://github.com/doy/rbw is a Rust alternative to the Bitwarden CLI. Although the Rust ecosystem is moving in NPM's direction (very large and very deep dependency trees), you still need to trust far fewer authors in your dependency tree than what is common for Javascript.

  • Well.. https://github.com/doy/rbw/blob/main/Cargo.toml#L16

    You're still pulling a lot of dependencies. At least they're pinned though.

    • > At least they're pinned though.

      Frustratingly, they're not by default though; you need to explicitly use `--locked` (or `--frozen`, which is an alias for `--locked --offline`) to avoid implicit updates. I've seen multiple teams not realize this and get confused about CI failures from it.

      The implicit update surface is somewhat limited by the fact that versions in Cargo.toml implicitly assume the `^` operator when they don't specify a different one, so "1.2.3" means ">=1.2.3, <2.0.0". For reasons that have never been clear to me, people also seem to really like leaving out the patch version and just putting things like "1.2", which likewise means that anything short of a major version bump will get pulled in.
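      Concretely, with Cargo's default requirement semantics (crate name and versions here are illustrative):

```toml
[dependencies]
# "1.2.3" is shorthand for "^1.2.3": allows >=1.2.3, <2.0.0
anyhow = "1.2.3"
# "1.2" means "^1.2": allows >=1.2.0, <2.0.0 (the loose style mentioned above)
# "=1.2.3" pins exactly 1.2.3; pair it with `cargo build --locked` in CI
```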

  • It's a bit ironic that everyone considers Rust as safer while completely ignoring the heavily increased risk of pulling in malware via dependencies.

  • Is there any downside to using the firefox builtin password manager?

    • Does it support autofill for other apps on mobile? I'd argue that putting passwords in your phone clipboard could itself be risky (although for someone who's extremely security conscious, maybe discouraging using apps isn't a downside)

  • This + vaultwarden is an awesome self-hostable rust version of bitwarden. We might as well close the loop!

  • I wonder if this is going to push more software to stacks like .Net where you can do most things with zero third-party dependencies.

    Or, conversely, encourage programming languages to increase the number of features in their standard libraries.

    • A few months ago I tried to build a .NET package on Linux, and the certificate revocation checks for the dependencies didn't complete even after several minutes. Eventually I found out about the option `NUGET_CERTIFICATE_REVOCATION_MODE=offline`, which managed to cause the build to complete in a sane amount of time.

      It's hard for me to take seriously any suggestion that .NET is a model for how ecosystems should approach dependency management based on that, but I guess having an abysmal experience whenever there are dependencies is one way to avoid risks. (I would imagine it's probably not this bad on Windows, or else nobody would use it, but personally I have no interest in developing on a stack that I can't expect to work reliably out of the box on Linux.)

  • That’s my concern too. Rust has the same dependency concerns, which is how hackers get into code. VaultWarden has the same Rust dependency concern. Ironically we’re entering an age where C/C++ seems to have everything figured out from a dependency standpoint

    • Now all they need to figure out is how to actually make the C/C++ code that isn't from dependencies secure and they'll be all set

I had a really bad experience with the bitwarden cli. I believe it was `bw list` that I ran, assuming it would list the names of all my passwords, but to my surprise, it listed everything, including passwords and current totp codes. That's not the worst of it though. For some reason, when I ssh'ed into one of my servers and opened tmux, where I keep a weechat irc client running, I noticed that the entire output of the bw command was accessible from within the weechat text input field history. I have no idea how this happened, but it was quite terrifying. The issue persisted across tmux and weechat sessions, and only a reboot of the server would solve the problem.

I promptly removed the bw cli programme after that, and I definitely won't be installing it again.

I use ghostty if it matters.

  • I love how the first comment is a complaint having nothing to do with the actual subject

    • Password managers are all about trust, the main link is about a compromise, so it's not surprising that the first comment is also about trust too, even if it's not directly about this particular compromise.

      I found the default bwcli clunky and unacceptable, and it's why I don't use it, even though I still have a BitWarden subscription.

    • Not to mention utter nonsense. There’s no possible way that BW CLI somehow injected command history into a remote server. That was 100% something the GP did, a bug in their terminal, or a config they have with ssh/tmux, not Bitwarden.

  • I thought the CLI would be efficient when I looked into using it, and then I figured out it is JavaScript

    • Exactly. That is the problem.

      There is a time and place where it makes sense, but a password manager CLI written in TypeScript importing hundreds of third-party packages is a direct red flag. And it's a frequent occurrence.

      We have seen it happen with Axios, one of the biggest supply chain attacks on the JavaScript / TypeScript ecosystem, and it makes no sense to build sensitive tools on that stack.

  • Wow. That's crazy. Is there an extension for bwcli in weechat? BTW I didn't even know BW had a CLI until now. I use KeePass locally.

    • It's crazy because it's not default bw behavior, or even any bw behavior... I don't use the cli, but I don't see any built-in capacity to copy bw output to the clipboard. (In the UNIX way, you'd normally pipe it to a clipboard utility if you wanted it copied, and then the security consequences are on you.)

      They probably caused it themselves, somehow, and then blamed bitwarden. Note in the original comment they aren't even entirely sure what the command was, and they weren't familiar with it or they wouldn't have been surprised by its output... so how can they be sure what else they did between that command and the weechat thing?

      If the terminal or tmux fed terminal history into weechat, that's also not bw's problem.

Never used the CLI, but I do use their browser plugin. Would be quite a mess if that got compromised. What can I do to prevent it? Run old --tried and tested-- versions?

Quite bizarre to think how much of my well-being depends on those secrets staying secret.

  • Integration points increase the risk of compromise. For that reason, I never use the desktop browser extensions for my password manager. When password managers were starting to become popular there was one that had security issues with the browser integration so I decided to just avoid those entirely. On iOS, I'm more comfortable with the integration so I use it, but I'm wary of it.

    • On iOS I feel I have less control over what's running than on Linux (don't get me started on Windows or Android), so that's the order in which I dare to use it. But against a supply chain attack on a distributed program I'll always be exposed: the only things I can do are stick to old versions and use trusted distribution channels.

    • In theory the browser integration shouldn’t leak anything beyond the credentials being used, even if compromised.

      When you use autofill, the native application will prompt to disclose credentials to the extension. At that point, only those credentials go over the wire. Others remain inaccessible to the extension.

  • We need cooldowns everywhere, by default. Development package managers, OS package managers, browser extensions. Even auto-updates in standalone apps should implement it. Give companies like Socket time to detect malicious updates. They're good at it, but it's pointless if everyone keeps downloading packages just minutes after they're published.

  • > What can I do to prevent it?

    My two most precious digital possessions - my email and my Bitwarden account - are protected by a Yubikey that's always on my person (and another in another geographical location). I highly recommend such a setup, and it's not that much effort (I just keep my Yubikey with my house keys)

    I got a bit scared reading the title, but I'm doing all I can to be reasonably secure without devolving into paranoia.

  • Use the desktop or web vault directly, don't use the browser plugin.

    • How are they clearly less susceptible to a supply chain attack?

      Maybe the web vault, but then we don't know when it's compromised (that's the whole idea); so we trust them not to have made a mess...

  • How to prevent it?

    tl;dr

    - https://cooldowns.dev

    - https://depsguard.com

    (disclaimer: I maintain the 2nd one; if I had known of the first, I wouldn't have released it, I just didn't find anything at the time. They do pretty much the same thing, mine is a bit of an overkill by using Rust...)

    • Do either of those work on browser extensions that I install as a user? I don't see anything relating to extensions in there.

  • You should use hunter2 as your password on all services.

    That password cannot be cracked because it will always display as ** for anyone else.

    My password is *****. See? It shows as asterisks so it's totally safe to share. Try it!

    ... Scnr •́ ‿ , •̀

> Russian locale kill switch: Exits silently if system locale begins with "ru", checking Intl.DateTimeFormat().resolvedOptions().locale and environment variables LC_ALL, LC_MESSAGES, LANGUAGE, and LANG

So bold and so cowardly at the same time...
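A minimal sketch of the kind of check the report describes, with made-up names (this is not the actual payload's code):

```javascript
// Hypothetical reconstruction of the locale kill switch described in the
// quoted report. Function and parameter names are illustrative only.
function hasRussianLocale(intlLocale, env) {
  // In the real payload, intlLocale would come from
  // Intl.DateTimeFormat().resolvedOptions().locale and env from process.env.
  const candidates = [intlLocale, env.LC_ALL, env.LC_MESSAGES, env.LANGUAGE, env.LANG];
  return candidates.some(
    (loc) => typeof loc === "string" && loc.toLowerCase().startsWith("ru")
  );
}

console.log(hasRussianLocale("en-US", { LANG: "ru_RU.UTF-8" })); // true
console.log(hasRussianLocale("en-US", { LANG: "en_US.UTF-8" })); // false
```

A payload that exits silently when a check like this returns true never shows up in any analysis run under a Russian locale, which is part of why it reads as either attribution or deliberate misdirection.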

  • The worst thing is that you can't even tell if that's "real" or just a false flag.

    • Does it matter? Lots of groups do such checks at startup at this point, because every news outlet that reports on it will suddenly believe the group to be Russian if you do, so it's a no-brainer to add today to misdirect even a little.

  • "Discretion is the better part of valor", "Never point it at your own feet", "Russian roulette is best enjoyed as a spectator", and many other sayings seem applicable.

  • That isn't a smoking gun. I think it was the Vault7 leaks which showed that the NSA and CIA deliberately leave trails like this to obfuscate which nation state did it. I'm sure other state actors do this as well, and it's not a particularly "crazy" technique.

  • ah yes, because everyone sets locale on their npm publish github CI job.

    obvious misdirection, but it does serve to make it very obvious it was a state actor.

The part that seems most important here is that npm install was enough.

Once the compromise point is preinstall, the usual "inspect after install" mindset breaks down. By then the payload has already had a chance to run.
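The lever that closes exactly that window is refusing lifecycle scripts at install time (a real npm config key, though it breaks the minority of packages that genuinely need a build step on install):

```ini
# .npmrc
ignore-scripts=true
```

The flag form, `npm ci --ignore-scripts`, works too; either way, a malicious preinstall never gets a chance to execute.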

That gets more interesting with agents / CI / ephemeral sandboxes, because short exposure windows are still enough when installs happen automatically and repeatedly.

Another thing I think is worth paying attention to: this payload did not just target secrets, it also targeted AI tooling config, and there is a real possibility that shell-profile tampering becomes a way to poison what the next coding assistant reads into context.

I work on AgentSH (https://www.agentsh.org), and we wrote up a longer take on that angle here:

https://www.canyonroad.ai/blog/the-install-was-the-attack/

  • Nobody inspects packages after install; your theory has been debunked multiple times. And caring about npm install running scripts is moot when you'll inevitably run the actual binary after install.

    And besides, you could always pull the package and inspect it before running install. Unless you really know the installer and deeply understand its guarantees (e.g., whether it's possible for an install to deploy files outside of node_modules), it's insane to even vaguely trust it to pull and unpack potentially malicious code.

KeePass users continue to live the stress-free life.

I've managed to avoid several security breaches in the last 5 years alone by using KeePass locally on my own infra.

  • I don't understand how this solves the issue in this case.

    Bitwarden vaults were not compromised, there was a problem in a tool you used to access the secrets.

    What makes it impossible for KeePass access tools to have these issues?

    • It's not impossible, but most KeePass tools are written in sane languages and built with sane tooling, and don't use trash like Javascript and npm. Of course I'm not considering browser extensions or exclusive web-clients, but the main KeePass client has a good autotype system, so you don't really need to use the browser extension.

      In any case, the fact that the official BitWarden client (which uses Electron btw) and even the CLI is written in Javascript/Typescript - should tell you everything you need to know about their coding expertise and security posture.

    • > I don't understand how this solves the issue in this case.

      I'd say since it is a local-only tool, you don't really need to update it constantly, provided you are a sane person who doesn't use a browser extension. It makes it easier to audit, and you're less at risk of having your tool compromised.

      It doesn't have to be KeePass though; it can be any local password management tool like pass[1], one of its GUIs, or simply a local encrypted file.

      [1] https://www.passwordstore.org/

    • >What makes it impossible for KeePass access tools to have these issues?

      the superiority of keepass users scares away the bad actors

  • I need my passwords to be accessible from my infrastructure and my phone. How do you achieve this with KeePass? I assumed it was not possible, but in fairness, I haven't really gone down that rabbit hole to investigate.

    • Keepass is just a single file, you can share it between devices however you want (google drive, onedrive, dropbox, nextcloud, syncthing, rsync, ftp, etc); as long as you can read and write to it, it just works. There are keepass clients for just about everything (keepassxc for desktops, keepass2android or keepassdx for android, keepassium for iphone).

    • Not OP, but you can use a public cloud with Cryptomator on top if you don't trust your password DB on a non-E2E cloud. Or you can just use your own cloud (but then there's no access from outside, unless you take the risk and open up your infra), and then any of the well-known clients on your phone. You can optionally sandbox them if possible, and then just be mindful of sync conflicts with the DB file, though I assume you, like most people, will be reading the DB rather than writing to it 99.9% of the time.

    • I use macOS and iOS for my home devices and Windows for work, and use Strongbox on the Apple side with KeePassXC on the Windows side and sync them using Dropbox.

    • Someone is about to hop on and tell you how they simply use Dropbox/GDrive to host their KeePass vault and how that's good enough for them (which should be KeePass's tagline), and on mobile they use a copy or some other manually derived and dependency-ridden setup. They will defend ad hoc over designed because their choice of ad hoc cloud is better than a service you use.

    • I mean, there are ways, e.g. if you run something like Tailscale and can always access your private network, but it is a hassle.

      Plus, now you're responsible for everything. Backups, auditing etc.

    • I use self-hosted Bitwarden (Vaultwarden) for this. It runs on my local network, and I have it installed on my phone etc. When I’m on my local network, everything works fine. When I’m not on my local network, the phone still has the credentials from the last time it was synced (i.e., last time it was used while the phone was on the home network). It’s a pretty painless way to keep things in sync without ever allowing Bitwarden to be accessible outside my home network.

    • In short, when I make a major password or credential change I do it from my laptop, consider that file on disk to be the "master" copy, and then manually sync the file on a periodic basis to my phone. I treat the file on the phone as read-only. Works fine so far.

      To date there have been zero instances when I needed to significantly change a password/service/login/credential solely from my phone and I was unable to access my laptop.

      Additionally the file gets synchronized to a workstation that sits in my home office accessible by personal VPN, where it can be accessed in a shell session with the keepass CLI: https://tracker.debian.org/pkg/kpcli

      You can use an extremely wide variety of your own choice of secure methods for how to get the file from the primary workstation (desktop/laptop) to your phone.

  • Which is great for Hacker News users that can maintain their own infra. But if we're talking "stress free", that's not an answer for the average user...

  • Ok, single file, blah, blah. Realistically how do you sync that and how do you resolve conflicts? What happens if two devices add a password while offline, then go online?

    • I actually was a Bitwarden user at first, but in reality I don't change emails/passwords all that often. It's not like I change those things every hour or every day, as with my work files/documents that need constant syncing to the drive. And the chance that I add/change passwords on two devices at around the same time is even smaller.

      So gradually I stopped feeling that I needed syncing that much, and switched to KeePass. I've made up my mind to only change the database from my computer and rclone push it to whatever cloud I like (I'm using Koofr for that, since it's friendly to rclone); on any other device I'll just rclone pull it when needed. If I change something on another device (like my phone), I'll just note it locally there and update the database later.

      But ofc if someone needs to change their data/password frequently then Bitwarden is clearly the better choice.
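
      The push/pull workflow above can be sketched with rclone (remote name and paths are assumptions; `koofr:` stands for whatever remote you set up with `rclone config`):

      ```shell
      # On the computer where the database is edited: push the
      # updated KeePass database to the configured cloud remote.
      rclone copy ~/passwords.kdbx koofr:keepass/

      # On any other device, before opening the database: pull
      # the latest copy back down.
      rclone copy koofr:keepass/passwords.kdbx ~/
      ```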

  • The only thing I can't figure out with KeePass is how to back it up in the cloud: if you encrypt your backup, where do you save that password, and then where do you save the password for the cloud provider?

    • You save the single password in your head. All other passwords go inside Keepass.

    • Same as Bitwarden? You just need to remember Keepass password, just like remember Bitwarden password.

  • > KeePass users continue to live the stress free live.

    https://cyberpress.org/hackers-exploit-keepass-password-mana...

    • This article is borderline malicious in how it skirts the facts.

      This wasn't a case where KeePass was compromised in any way, as far as I can tell. This appears to be a basic case of a threat actor distributing a trojanized version via malicious ads. If users made sure they are getting the correct version, they were never in danger. That's not to say that a supply chain attack couldn't affect KeePass, but this article doesn't say that it has.

    • That looks like you'd have to download and run a hacked installer that was never available from an official location. That is a much lower risk than a supply-chain attack, where anyone building bitwarden-cli from the official repo would be infected via the compromised dependency.

      Long-term KeePass users aren't going to be affected. If you mention software to others, make sure you send them a link to a known-safe download location instead of having them search for one (new users searching like that are more at risk of stumbling on a malicious copy of the official site hosting a hacked version).

    • This AI generated article is not about vulnerabilities in KeePass, rather about malicious KeePass clones.

    • That's an AI slop article. I'm not sure how someone creating their own installer and buying a few domains to distribute it is a mark against KeePass itself.

      > The beacon established command and control over HTTPS

To use a fitting turn of phrase, "Many such cases."

How many times will this happen before people realise that updating blind is a poor decision?

> The affected package version appears to be @bitwarden/cli 2026.4.0, and the malicious code was published in bw1.js, a file included in the package contents. The attack appears to have leveraged a compromised GitHub Action in Bitwarden’s CI/CD pipeline, consistent with the pattern seen across other affected repositories in this campaign.

This is precisely why I don't use BW CLI. Use pass or gopass for all your CLI tokens and sync them via a private git repo.

Keep the password manager as a separate desktop app and turn off auto update.
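
A sketch of the pass-plus-private-git setup described above (the GPG key ID and remote URL are placeholders; `pass git` just forwards its arguments to git inside the password store):

```shell
# One-time setup: initialize the store for your GPG key and turn
# it into a git repo pointed at a private remote (URL is made up).
pass init "YOUR-GPG-KEY-ID"
pass git init
pass git remote add origin git@example.com:you/password-store.git

# Day to day: add a token, then sync to the private repo
# (branch name may differ on your setup).
pass insert tokens/github
pass git push -u origin master
```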

  • A supply chain issue that hadn’t happened to BW CLI before is exactly why you use other CLIs that seem to be identically vulnerable to the same issues?

    • That's just not true.

      The original pass is just a single shell script. It's short, pretty easy to read and likely in part because it's so simple, it's also very stable. The only real dependencies are bash, gnupg and optionally git (history/replication). These are most likely already on your machine and whatever channel you're getting them from (ex: distribution package manager) should be much more resilient to supply chain vulnerabilities.

      It can also be used with a pgp smartcard (in my case a Yubikey) so all encryption/decryption happens on the smartcard. Every attempt to decrypt a credential requires a physical button press of the yubikey, making it pretty obvious if some malware is trying to dump the contents of the password store.

Does the CLI auto-update?

Edit: The CLI itself apparently does not, which will have limited the damage a bit, but if it's installed as a snap, it might. Incidents like this should hopefully cause a rollback of this dumb system of forcefully and frequently updating people's software without explicit consent.

Also the time range provided in https://community.bitwarden.com/t/bitwarden-statement-on-che... can help with knowing if you were at risk. I only used the CLI once in the morning yesterday (ET), so I might not have been affected?

  • I think you had to have installed the CLI during that time frame, then run the freshly installed CLI, to be vulnerable.

    Assuming you had it already installed, you would be safe.

I'm just hearing about this attack on Checkmarx.

We recently adopted it at work, and I find the thing to just produce garbage. I've never tuned out noise so quickly.

You have to appreciate the irony of a thing that's supposed to help protect you from vulnerabilities being one.

  • I think this is the real news. There seems to be an ongoing attack against Checkmarx.

    That thing is expensive as hell and used by lots of huge corps. I know at least one very large one in Mexico ... where the IT team is pretty useless.

    So, I don't doubt the possibility that in the near future we will hear about more hacks.

I am glad I consciously decided not to put 2FA keys in Bitwarden when I adopted it back in 2021, and to manage them with Aegis instead. It was a bit of a hassle to set up backups, but it's good to split your points of failure.

So how likely is it that these compromises will start affecting the non-CLI and non-open-source tools? For example, other password managers (in the form of GUIs or browser extensions).

I recently had to disable their Chrome extension because it made the browser grind to a halt (spammed mojo IPC messages to the main thread according to a profiler). I wasn't the only one affected, going by the recent extension reviews. I wonder if it's related.

  • > CLI builds were affected [...]

    > Bitwarden’s Chrome extension, MCP server, and other legitimate distributions have not been affected yet.

Somehow that's good, because the rest of the Bitwarden apps will benefit from the increased tightness of their tooling and CI/CD.

Their website is also incredibly bad. I am not paying for it so it might be better for paying users.

It is mind boggling how an app that just lists a bunch of items can be so bloated.

I've dramatically decreased my reliance on third-party packages and tools in my workflow. I switched from Bitwarden to Apple Passwords a few months ago, despite its worse feature set (though the impetus was Bitwarden crashing on login on my new iPad).

I've also been preferring to roll things on my own in my side projects rather than pulling a package. I'll still use big, standalone libraries, but no more third-party shims over an API, I'll just vibe code the shim myself. If I'm going to be using vibe code either way, better it be mine than someone else's.

  • Why not stick to simple/heavily vetted password managers (like keepassx)? is there some advanced feature you use?

    • I hope you're not using KeePassX; it's been unmaintained for years. KeePassXC doesn't cover iOS, which means I'd need to use a third-party app there, so I'd be trusting multiple vendors instead of one.

      Aside from passwords, I store passkeys, secure notes, and MFA tokens.

I was literally thinking about installing the cli a few days ago to ease the use in a few places. Now I'm glad I didn't.

This will continue to happen more and more, until legislation is passed to require a software building code.

That's why I don't use any third-party password managers. You have to trust them not to fuck up security, updates, backups, etc. etc.

I wrote my own password generator. It's stateless, which has the advantage that I never have to back up or sync any data between devices. It just lets you enter a very long, secure master password, a service name, and a username, then runs an scrypt hash on this with good-enough parameters to make brute-force attacks infeasible.
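
Not the commenter's actual tool, but the scheme described can be sketched in a few lines of Python; the scrypt parameters and output encoding below are illustrative assumptions:

```python
import base64
import hashlib

def derive_password(master: str, service: str, username: str,
                    length: int = 24) -> str:
    """Deterministically derive a per-site password from a master
    password; nothing is stored, so there is no state to sync."""
    # Salting with service + username makes each derived password unique.
    salt = f"{service}\x00{username}".encode()
    # scrypt is memory-hard, which slows down brute-force attacks
    # against the master password.
    key = hashlib.scrypt(master.encode(), salt=salt,
                         n=2**14, r=8, p=1, dklen=32)
    # Encode the raw bytes into a printable password of the desired length.
    return base64.b85encode(key).decode()[:length]
```

The same inputs always produce the same password, so there is nothing to back up; the trade-off is that rotating one site's password requires an extra input (e.g. a counter) folded into the salt.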

For anything important, I also use 2FA.

> Checkmarx is an information security company specializing in software application security testing and risk management for software supply chains.

The irony! The security "solution" is so often the weak link.

Remember how the White House published that document on memory-safe languages? I think it’s time they go one step further and ban new development in JavaScript. Horrible language, horrible ecosystem, and horrible vulns.

  • Supply chain attacks aren't exclusive to JS just like malware isn't exclusive to Windows, it's just that JS/Windows is more popular and widespread. Kill JS and you will get supply chain attacks on the next most popular language with package managers. Kill Windows and you will get a flood of Linux/MacOS malware.

    • Maybe language based package managers aren't great. Also, npm has design decisions that make it especially prone to supply chain attacks iirc

If I run the compromised CLI, do they get all my passwords?

From my understanding, the Checkmarx attack could have been prevented by the Asfaload project I'm working on. See https://github.com/asfaload/asfaload

It is:

- open source

- accountless (keys are identity)

- using a public git backend making it easily auditable

- easy to self host, meaning you can easily deploy it internally

- multisig, meaning even if a GitHub account is breached, malevolent artifacts can be detected

- validating a download transparently to the user, which only requires the download URL, contrary to sigstore

How the hell are most people supposed to balance the risk of not updating software against the risk of updating software?

  • It's a hard decision. I would say that over the last few months, a cooldown by default would have prevented more attacks than it would have caused by delaying the upgrade to a version fixing an immediate RCE, zero-click, EPSS 100%, CVSS 10.0, KEV-listed zero-day CVE. But now that the Mythos 90-day disclosure window gets closer, I don't know what tsunami of urgent patches is on its way... it's not an easy problem to solve.

    I lean toward cooldown by default, bypassing it when an actually reachable, exploitable zero-day CVE is released.

I mean, what's the future now? Everyone just vibecoding their own private tools that no "foreign government" has access to? It honestly feels like everything is slowly starting to collapse.

Also, didn't Microsoft (the owner of GitHub) get access to Claude Mythos in order to "seCuRe cRitiCal SoftWaRe InfRasTructUre FoR teh AI eRa"? How's securing GitHub Actions going for them?

> THE MOST TRUSTED PASSWORD MANAGER

> Defend against hackers and data breaches

> Fix at-risk passwords and stay safe online with Bitwarden, the best password manager for securely managing and sharing sensitive information.

Yep, literally from their website at this moment... and the link to their "statement" [0] is nowhere on the front page.

Oh wait, there is a top banner..."Take insights to action: Bitwarden Access Intelligence now available Learn more >" nope.

[0]: https://community.bitwarden.com/t/bitwarden-statement-on-che...

Once again, it is in the NPM ecosystem. OneCLI [0] does not save you either. Happens less with languages that have better standard libraries such as Go.

If you see any package that has hundreds of libraries, that increases the risk of a supply chain attack.

A password manager does not need a CLI tool.

[0] https://news.ycombinator.com/item?id=47585838

  • I guess anyone/anything using a non-graphical interface should just not use a password manager for some reason?

    Not to mention that a graphical application is just as vulnerable to supply chain attacks.

  • It seems like we need better standard libraries, but standard libraries turn into tarpits. I sort of like the way Python's stdlib works.

  • > A password manager does not need a CLI tool.

    That's a wild statement. The CLI is just another UI.

    The problem in this case is JS and the NPM ecosystem. Go would be an improvement, but complexity is the enemy of security. Something like (pass)age is my preference for storing sensitive data.