Comment by jcalvinowens
1 day ago
> Even linux could do that if they were compelled to.
An open source project absolutely cannot do that without your consent if you build your client from the source. That's my point.
This is a wildly unrealistic viewpoint. It assumes you know the language the client is written in, have total knowledge of the entire codebase, and can easily spot any security issue or backdoor in software you didn't write yourself (and even in software you did).
It also completely disregards the history of supply-chain incidents like the XZ Utils backdoor, the infected npm packages of the month, and CVEs that sat undiscovered in Linux (a project with thousands of contributors) for over a decade.
You're conflating two orthogonal threat models here.
Threat model A: I want to be secure against a government agency in my country using the ordinary judicial process to order engineers employed in my country to make technical modifications to products I use in order to spy on me specifically. This is predicated on the (untrue in my personal case) idea that my life would be endangered if the government obtained my data.
Threat model B: I want to be secure against all nation state actors in the world who might ever try to surreptitiously backdoor any open source project that has ever existed.
I'm talking about threat model A. You're describing threat model B, and I don't disagree with you that fighting that is more or less futile.
Many open source projects are controlled by people who do not live in the US and are not US citizens. Someone in the US is completely immune to threat model A when they use those open source projects and build them directly from the source.
Wait, I'm sorry, do you build Linux from source and review all code changes?
You missed the important part:
> For this threat model
We're talking about a hypothetical scenario where a state actor getting the information encrypted by the E2E encryption puts your life or freedom in danger.
If that's you, yes, you absolutely shouldn't trust US corporations, and you should absolutely be auditing the source code. I seriously doubt that's you though, and it's certainly not me.
The sub-title of the original Forbes article (linked in the first paragraph of TFA):
> But companies like Apple and Meta set up their systems so such a privacy violation isn’t possible.
...is completely and utterly false. The journalist swallowed the marketing whole.
Okay, so yes, I grant your point that people whose threat model includes governments should be auditing source code.
I also grant that many things are possible (where the journalist says "isn't possible").
However, it remains true that Microsoft appears to store this data in a way that lets it be retrieved through "simple" warrants and legal process, whereas Apple stores the encryption keys in a manner that would require code changes to retrieve.
These are fundamentally different in a legal framework, and while that doesn't make Apple the most perfect amazing company ever, it shames Microsoft for not putting in the technical work to erect these basic barriers to data retrieval.