Comment by __MatrixMan__
2 years ago
Delivering a message to a client which is known to be less secure than the sender expected it to be is unforgivable.
Refusing to deliver is inconvenient.
> Delivering a message to a client which is known to be less secure than the sender expected it to be is unforgivable.
That is inconsistent with the threat model of a messaging system!
Inherently, a messaging system will deliver a plaintext copy of the message to the recipient(s). Wouldn't be much of a messaging system otherwise.
Once you've sent something and it has been delivered in plaintext to the recipient, the information-disclosure risk is completely out of your control (and out of the control of the application in use). The recipient is free to leak it however they wish.
If you don't trust the recipient to keep it private, don't send it.
> That is inconsistent with the threat model of a messaging system!
I disagree; the worst thing a messaging system that aims to be "private" can do is to actually not be private. Sending to a known-insecure client is a violation of, like, the one thing Signal claims to do.
> If you don't trust the recipient to keep it private, don't send it.
My threat model is some combination of "third party actors who I don't trust" and "second parties who I trust but who are non-experts"[1]. I would like Signal to protect me from the first (by not delivering things to known-insecure clients that can be middlemanned or otherwise discovered) and the second, by having privacy-respecting and mistake-preventing defaults. Things like disappearing messages and such. Keeping my trusted-but-nonexpert peers from making mistakes that can harm either of us in the future is a key part of my threat model.
For example, disappearing messages prevent me from being harmed by my friend, who I trust to discuss things with, not having a lockscreen password and getting warranted by the police. An outdated or third-party client that lets you keep them forever, even if well intentioned, can break that aspect of the threat model. And yes, a peer who is actually nefarious can still do that, but that's not my threat model. I think my friends aren't privacy experts; I don't think they're feds.
[1]: This is, for example, why I think PGP is not a good tool. Even if I do everything right, a well-meaning peer using a PGP application can unintentionally leak my plaintext when they don't mean to, because of the tool's sharp edges.
But you don't know, at the time of sending, which version of the client will show up to retrieve it. Otherwise both clients would need to be connected at the same time before you were allowed to send.
Just curious, since I'm not really active in this space, but wouldn't the threat model of most concern be that an external actor breaks (maybe an outdated version of) the app or protocol? This would leak data without you or the recipient being any the wiser. It seems like that's the threat the app-expiry policy is intended to address.
You could update the protocol version if and when a protocol weakness is discovered and then stop talking the previous protocol version after a transition period.
No need to continuously expire apps in the absence of a protocol breach.
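The scheme described above amounts to the server tracking a minimum protocol version and only raising it, after a grace period, when a weakness is actually found. A minimal sketch (all names and dates hypothetical, not Signal's actual API):

```python
from datetime import datetime, timezone

MIN_PROTOCOL_VERSION = 2  # current floor, raised only after a breach
DEPRECATED_VERSION = 1    # being phased out
# Hypothetical end of the transition period for the deprecated version.
SUNSET = datetime(2024, 1, 1, tzinfo=timezone.utc)

def accept_connection(client_version: int, now: datetime) -> bool:
    """Accept current clients always; accept deprecated ones until sunset."""
    if client_version >= MIN_PROTOCOL_VERSION:
        return True
    # Old-but-not-broken clients keep working during the transition window.
    if client_version == DEPRECATED_VERSION and now < SUNSET:
        return True
    return False
```

Under this policy a client never expires merely for being old; it expires only when the protocol version it speaks has been sunset.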

If the app has to be updated on a 90 day schedule, then it's likely that most of those updates aren't making anything more secure. So it's not "known" that someone running last quarter's version is less secure than the sender expects.
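The 90-day schedule being discussed implies a check roughly like the following: each release carries its build date, and the client refuses to run once that date is too far in the past. This is a hypothetical sketch of that kind of mechanism, not Signal's actual implementation:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical: baked into the binary at release time.
BUILD_DATE = datetime(2024, 3, 1, tzinfo=timezone.utc)
EXPIRY_WINDOW = timedelta(days=90)

def build_expired(now: datetime) -> bool:
    """True once the running build is older than the expiry window."""
    return now - BUILD_DATE > EXPIRY_WINDOW
```

Note that the check says nothing about whether any intervening release fixed a security issue; it expires the old build unconditionally, which is exactly the tradeoff the comment above is pointing at.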
I think this is the tradeoff that Signal makes versus the messenger most similar to it, WhatsApp. Though of course everyone in a group chat must pick one or the other, so it's not much of a free choice. (My friend group in the Bay Area is entirely on Signal, for example, though I also have a WhatsApp account.)