Comment by flossposse

3 months ago

Green (the author) makes an important point:

> a technical guarantee is different from a user promise. [...] End-to-end encrypted messaging systems are intended to deliver data securely. They don’t dictate what happens to it next.

Then Green seems to immediately forget the point they just made, and proceeds to talk about PCC as if it were something other than just another technical guarantee. PCC only increases confidence that the software running on the server is the software Apple intended to be there. It gives me no guarantees about where else my data might be transferred from there, or whether Apple will use it only for purposes I'm okay with. PCC makes Apple less vulnerable to hacks, but it doesn't make them any more transparent or accountable. In fact, to the extent that some hackers hack for pro-social purposes like exposing corporate abuse, increased security also serves as a better shield against accountability.

Of course, I'm not suggesting that we should do away with security to achieve transparency. I am, however, suggesting that transparency, more so than security, is the major unaddressed problem here. I'd even go so far as to say that the woeful state of security is enabled in no small part by the lack of transparency. If we want AI to serve society, then we must reverse the extreme information imbalance we currently inhabit, wherein every detail of each person's life is exposed to the service provider while the service provider remains a complete black box to the user. You want good corporate actors? Don't let them operate invisibly. You want ethical tech? Don't let it operate invisibly.

(Edit: formatting)