
Comment by necovek

16 hours ago

I wonder how common setups are where an internal person has access to the TLS private key of the certificate, or to network equipment that all traffic passes through, yet cannot access the inputs required for client-side hashing/encryption?

This seems mostly to prevent accidental logging, and is thus a matter of defense in depth: it stops malicious actors from exploiting leaked logs later, but an actively malicious IT person would not be deterred.

> This seems to mostly prevent accidental logging

Yes, and that's not uncommon, IME. There's generally a lot of logging that's at least potentially available; it gets turned on, and the logs get shared when there's a problem that needs to be fixed (especially when it needs to be fixed quickly, which is usual).

This is going to make more sense for "enterprise"-type deployments, where there's a significant distinction between the people who might have access to request logs at times, and the people who can push code to production.

Yes, limited protection against insiders is good defense in depth, but it's not the primary purpose, which is to protect end users' accounts on other services in the event that you are breached.

  • My question still stands: how do you disallow cleartext password extraction if you are breached, assuming all your IT infrastructure and code is now accessible to an attacker?

    I am talking about not logging them ever, using internal TLS and strong hashing in general, and wondering what exact value is added on top with client side hashing.

    • There are substantial differences between database access, snooping the logs, internal (no TLS) wiretap, and full MITM of the frontend.

      Hashing client side minimizes the risk of any blast radius exceeding the bounds of your own service. There's obviously no way to prevent an adversary who achieves full MITM from gradually harvesting credentials over time. The only solution there is to use keys instead of passwords.
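To make the idea concrete, here is a minimal sketch of client-side pre-hashing followed by server-side re-hashing. This is an illustration, not anyone's production scheme: the function names are hypothetical, PBKDF2 stands in for whatever KDF a real deployment would pick, and the iteration counts are arbitrary. The point is that request logs, internal wiretaps, and the database only ever see derived values, never the cleartext password.

```python
import hashlib
import hmac
import os

def client_prehash(password: str, username: str, site: str = "example.com") -> str:
    # Runs on the client. The salt is deterministic (site + username) so the
    # client can always reproduce it; its job is only to make the pre-hash
    # unique per site and per user, so it can't be replayed elsewhere.
    salt = f"{site}:{username}".encode()
    derived = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return derived.hex()

def server_store(prehash: str) -> tuple[bytes, bytes]:
    # Runs on the server. The pre-hash is now effectively the password, so it
    # must still be hashed again with a random per-user salt before storage.
    salt = os.urandom(16)
    stored = hashlib.pbkdf2_hmac("sha256", prehash.encode(), salt, 100_000)
    return salt, stored

def server_verify(prehash: str, salt: bytes, stored: bytes) -> bool:
    # Constant-time comparison of the recomputed hash against the stored one.
    candidate = hashlib.pbkdf2_hmac("sha256", prehash.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)
```

Note the limitation discussed above still holds: an attacker with full MITM of the frontend can serve JavaScript that exfiltrates the password before `client_prehash` runs, which is why keys (or passkeys) are the only real answer at that trust level.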
