
Comment by commandersaki

16 days ago

The post also said that the phoning home now isn't an alarm signal that Apple could have subverted the Photos app by passing along a hash of the photo (presumably sensitive data). My contention is that Apple could do that with virtually any app that talks to the mothership; it is not unique to Photos.

Which is why I point out the dangers of accepting this behavior as normal. I'm assuming you mean they could siphon the hashes of my photos through some other channel (e.g. even when calling the mothership to check for updates), but this is not entirely true: were I to take a million photos, that traffic would increase proportionally, which would be suspicious.
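The traffic-analysis argument above can be sketched roughly as follows. The payload and request sizes here are made-up placeholders, not Apple's actual numbers; the point is only that per-photo exfiltration scales linearly with photo count, while a routine update check does not:

```python
HASH_SIZE = 32           # e.g. a SHA-256 digest per photo (assumed payload)
UPDATE_CHECK_SIZE = 512  # hypothetical size of one "check for updates" request

def observed_traffic(num_photos: int) -> int:
    """Bytes a passive network observer would see if every photo's hash
    piggybacked on the app's normal phone-home traffic."""
    return UPDATE_CHECK_SIZE + num_photos * HASH_SIZE

# Taking 100 photos vs. a million: the volume grows linearly with the
# number of photos, so bulk exfiltration through a "normal" channel
# shows up as a suspicious spike in traffic.
print(observed_traffic(100))        # 3712 bytes
print(observed_traffic(1_000_000))  # 32000512 bytes
```

Once per-photo traffic is accepted as normal, that linear signal is exactly what you lose the ability to check.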

If you accept that every photo captured will send traffic to the mothership, as in the story here, then that is no longer something you can check either.

In any case, as others have mentioned, no one cares. In fact, I could argue that the scenario I'm forecasting is exactly what has already happened: the Photos app suddenly started sending opaque blobs for every photo captured. A paranoid guy noticed this traffic and asked Apple about it. Apple replied with a flimsy justification, but users then went to ridiculous extremes to justify that this is not Apple spying on them but a new super-secret magic sauce that cannot possibly be used to exfiltrate their data, despite the fact that Apple has provided exactly zero verifiable assurances about it (and in fact has no way to do so). And the paranoid guy will no longer be able to notice extra per-photo traffic in the future.
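To illustrate why "opaque blobs" offer no verifiable assurance: to a passive network observer, a privacy-preserving payload and a straight exfiltrated photo hash can be byte-for-byte indistinguishable. A minimal sketch, where random bytes stand in for whatever encrypted payload Apple actually sends:

```python
import hashlib
import os

def exfiltrated_hash(photo_bytes: bytes) -> bytes:
    # A plain SHA-256 of the photo: sensitive, linkable data.
    return hashlib.sha256(photo_bytes).digest()

def opaque_blob() -> bytes:
    # Stand-in for the "super-secret magic sauce" payload: to anyone
    # on the wire it is just 32 random-looking bytes.
    return os.urandom(32)

photo = b"\xff\xd8\xff\xe0 fake jpeg bytes"
a = exfiltrated_hash(photo)
b = opaque_blob()

# Same length, both high-entropy; an observer cannot tell which one
# carries the user's data, so "trust us" is the only assurance left.
assert len(a) == len(b) == 32
```

This is the sense in which the assurances are unverifiable from the outside: whatever the payload contains, the wire image looks the same.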

  • I don't understand these conspiracies. Why would Apple put so much thought and effort into implementing security and privacy measures, going so far as to participate in the CFRG, submit RFCs, publish papers, technical articles, etc., only to maliciously subvert it all? If and when they do, they WILL get caught out, and they will lose something valuable that they hold: goodwill. This is a good case to apply Occam's razor.

    • They _do_ get caught (e.g. this, CSAM, etc.). People have ridiculously short memory spans, and in the meantime Apple gets to benefit from "privacy first" advertisements even though the actual privacy improvements, if any, are unclear.

      One example of this effect is how, during the CSAM scandal, some people were under the wrong impression that Apple was the first to do on-device image classification. In fact, they were among the last to do it; even Samsung (not well known for privacy) was already doing it locally. But this didn't prevent Apple from running full-page advertisements claiming as much.

      Or Apple selling Secure Boot, remote attestation, etc. as technologies for "user" privacy, when 20 years ago Microsoft, of all companies, tried the same thing (remember Palladium?) and was correctly and universally panned for it. What makes Apple so different? They're even more likely than MS to subvert these technologies in a "tie-users-to-my-hardware" way.

      Whenever Apple has the opportunity to take simple, risk-free, actually private solutions (such as, well, allowing you to _skip their servers altogether_), they often take the complicated, trivially bypassable approach, and claim it is for user-friendliness. This is intentional: a complicated approach lets you claim "sorry, implementation error!" whenever there is an issue, and avoid the appearance of maliciousness.
