Comment by Barrin92

9 hours ago

as long as client side encryption has been audited, which to my understanding is the case, it doesn't matter. That is literally the point of encryption, communication across adversarial channels. Unless you think Facebook has broken the laws of mathematics it's impossible for them to decrypt the content of messages without the users private keys.

Well, the thing is, the key exfiltration code would probably reside outside the TCB. It's not particularly hard to have some function grab the signing keys and send them to the server. Then you can impersonate the user in a MITM attack. That exfiltration is one-time and it's quite hard to recover from.
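To make the concern concrete, here is a minimal Python sketch of what such an exfiltration could look like. Everything here is hypothetical: the "identity key" is just random bytes standing in for a real long-term signing key, and `exfiltrate()` is an invented name, not anything from WhatsApp's actual code.

```python
import base64
import json
import os

# Hypothetical compromised client. Random bytes stand in for a real
# Ed25519 identity/signing key generated at registration.
identity_private_key = os.urandom(32)

def exfiltrate(key: bytes) -> str:
    # A one-time leak: the key rides along inside an ordinary-looking
    # telemetry payload, so nothing about the E2EE wire format changes.
    payload = {
        "event": "app_start",                    # innocuous event name
        "diag": base64.b64encode(key).decode(),  # key hidden in a blob
    }
    return json.dumps(payload)

# With the long-term signing key, the server can publish key material
# that verifies as the victim, letting a MITM relay re-encrypt sessions.
print(exfiltrate(identity_private_key))
```

The point is that this payload is indistinguishable from routine analytics traffic, which is why auditing only the encryption layer doesn't rule it out.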

I'd much rather not put blind faith in WhatsApp doing the right thing, and instead just use Signal, where I can verify for myself that its key management is doing only what it should.

Speculating over the correctness of the E2EE implementation isn't productive anyway; the metadata leak we know Meta takes full advantage of is reason enough to stick to proper platforms like Signal.

  • > That exfiltration is one-time and it's quite hard to recover from.

    Not quite true with Signal's double ratchet though, right? Because keys are routinely getting rolled, you have to continuously exfiltrate the new keys.

    • No, I said signing keys. If you're doing MITM all the time, because there's no alternative path to route the ciphertexts through, you get to generate all those double-ratchet keys yourself. And then you have a separate ratchet for the other peer in the opposite direction.

      Last time I checked, WhatsApp features no fingerprint change warnings by default, so users will not even notice if you MITM them. The attack I described is for situations where the two users have enabled the non-blocking key change warnings and try to compare the fingerprints.

      Not saying this attack happens, by any means. Just that it is theoretically possible and leaves the smallest trail. Which is why it helps that on Signal you can verify that it's not exfiltrating your identity keys.
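For what it's worth, the fingerprint comparison being discussed can be sketched like this. This is a simplified illustration, not Signal's actual safety-number algorithm (which iterates a hash over both parties' identity keys and identifiers many times); the key values are made up.

```python
import hashlib

def fingerprint(identity_public_key: bytes) -> str:
    # Simplified: hash the public identity key and render a short,
    # human-comparable digest in five-character groups.
    digest = hashlib.sha256(identity_public_key).hexdigest()
    return " ".join(digest[i:i + 5] for i in range(0, 30, 5))

alice_real_key = b"A" * 32   # hypothetical: Alice's true identity key
mitm_key = b"M" * 32         # hypothetical: key the MITM substituted

# What Bob's app derives vs. what Alice reads out of hers:
print(fingerprint(mitm_key))        # Bob is shown the attacker's key
print(fingerprint(alice_real_key))  # Alice is shown her own key
```

The two strings differ, so an out-of-band comparison exposes the MITM, but only if the app surfaces the key change and the users actually compare.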

  • Not that I trust Facebook or anything but wouldn’t a motivated investigator be able to find this key exfiltration “function” or code by now? Unless there is some remote code execution flow going on.

    • WhatsApp performs dynamic code loading from memory, GrapheneOS detects it when you open the app, and blocking this causes the app to crash during startup. So we know that static analysis of the APK is not giving us the whole picture of what actually executes.

      This DCL could be fetching some forward_to_NSA() function from a server and registering it to be called on every outgoing message. It would be trivial to hide in tcpdump output. The best approach would be tracing with Frida and watching syscalls to isolate what is actually being loaded, but it is also trivial for apps to detect they are being debugged and conditionally avoid loading the incriminating code. Such code would only run in environments where the interested parties are sure there is no chance of detection, which is enough of the endpoints that even if you personally can trip the anti-tracing checks without falling foul of whatever attestation Meta likely has going on, everyone you text will be participating unknowingly in the dragnet anyway.

    • >Not that I trust Facebook or anything but wouldn’t a motivated investigator be able to find this key exfiltration “function” or code by now?

      Yeah, I'd imagine it would have been found by now. Then again, who knows when they'd add it, or whether some future update removes it; Google isn't scanning every line of every version. I prefer to eliminate this kind of 5D guesswork categorically, and just use FOSS messaging apps.
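The dynamic-loading concern above can be shown in miniature. The Python analog below just exec()s a string "fetched" at runtime (an Android app would dlopen() a blob or use InMemoryDexClassLoader); the function name and the fetched snippet are invented for illustration. The point is that the source you audit statically never contains the offending function.

```python
# Hypothetical payload: in reality this would arrive over the network,
# so no static analysis of the shipped app ever sees it.
fetched_from_server = """
def forward_copy(plaintext, outbox):
    # name echoes the hypothetical forward_to_NSA() above
    outbox.append(plaintext)
"""

# Dynamically compile and load the code received as data.
loaded = {}
exec(compile(fetched_from_server, "<remote>", "exec"), loaded)

outbox = []
loaded["forward_copy"]("hello", outbox)
print(outbox)  # the audited source never defined forward_copy
```

This is why the GrapheneOS observation matters: if code is loaded from memory at runtime, only dynamic tracing (with all its anti-debugging caveats) can tell you what actually executed.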

The issue is what the client app does with the information after it is decrypted. As Snowden remarked after he released his trove, encryption works; it's not like the NSA or anyone else has some super secret decoder ring. The problem is that endpoint security is borderline atrocious and an obvious Achilles' heel - the information has to be decrypted in order to display it to the end user, so that's a much easier attack vector than trying to break the encryption itself.
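A toy sketch of that point, with XOR standing in for a real cipher (nothing here is any app's actual code, and `display_message`/`captured` are invented names): the cryptography can be flawless, yet the moment the client decrypts a message to render it, it can also copy the plaintext anywhere it likes.

```python
KEY = b"\x42" * 16  # shared toy key; XOR is a stand-in, not real crypto
captured = []       # stands in for "a server somewhere"

def xor(data: bytes) -> bytes:
    # Toy symmetric cipher: XOR with a repeating key (self-inverse).
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(data))

def display_message(ciphertext: bytes) -> str:
    plaintext = xor(ciphertext)   # legitimate decryption for display
    captured.append(plaintext)    # ...but a compromised client keeps a copy
    return plaintext.decode()     # user sees the message as normal

ct = xor(b"meet at noon")
print(display_message(ct))  # user sees: meet at noon
print(captured)             # so does whoever controls the endpoint
```

No amount of auditing the cipher detects the second line of `display_message`; only auditing the client itself does.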

So the point other commenters are making is that you can verify all you want that the encryption is robust and secure, but that doesn't mean the app can't just send a copy of the info to a server somewhere after it has been decoded.