
Comment by matthewdgreen

4 years ago

I never said the device presages full-device content scanning. All I’ve said (including in this NYT op-ed [0]) is that it enables full-device scanning. Apple’s decision to condition scanning on a toggle switch is a policy decision, not a technical restriction as it was in the past with server-side scanning: server-side scanning cannot scan data you don’t upload, nor can it scan E2EE files. Most people agree that Apple will likely enable E2EE for iCloud in the reasonably near future, so this isn’t some radical hypothetical — and the new system is manifestly different from server-side scanning in such a regime.

Regarding which content governments want Apple to scan for, we already have some idea of that. The original open letter from US AG William Barr and peers in 2018 [1] that started this debate (and more than arguably led to Apple’s announcement of this system) does not only reference CSAM. It also references terrorist content and “foreign adversaries’ attempts to undermine democratic values and institutions.” A number of providers already scan for “extremist content” [2]. I can’t prove statements about Apple’s future intentions, but I can point you to the working systems operating today as evidence that such applications exist and are in use. Governments have asked for these, will continue to ask for them, and Apple has already partially capitulated by building this client-side CSAM system. That should be an important data point, but you have to be open to considering such evidence as an indication of risk rather than rejecting it outright and demanding proof of the worst future outcomes.

Apple has also made an opinionated decision not only to scan shared photos, but to scan users’ entire photo libraries, including photos they have never shared. This isn’t entirely without precedent, but it’s a specific deployment decision that is inconsistent with existing deployments at other providers such as Dropbox [3], where scanning is (allegedly, according to scanning advocates) done not on upload but on sharing. Law enforcement and advocates have consistently asked for broader scanning access, including access to unshared files. Apple’s deployment responds to that request in a way that their existing detection systems (and many industry-standard systems) did not. Apple could easily have restricted their scans to shared albums and photos as a means to block distribution of CSAM: they did not. This is yet another difference.

I’m not sure how to respond to your requests for certainty and proof around future actions that might be taken by a secretive company. This demand for an unobtainable standard of evidence seems like an excellent way to “win” an argument on HN, but it is not an effective or reasonable standard to apply to an unprecedented new system that will instantly affect ~1 billion customers of the most popular device manufacturer in the world. There is context here that you are missing, and I think proposing a more reasonable standard of evidence would be more convincing than demanding unobtainable proof of Apple’s future intentions.

[0] https://www.google.com/amp/s/www.nytimes.com/2021/08/11/opin...

[1] https://www.justice.gov/opa/press-release/file/1207081/downl...

[2] https://www.google.com/amp/s/amp.theguardian.com/technology/...

[3] see p.8: https://www.europarl.europa.eu/RegData/etudes/BRIE/2020/6593...

> Apple’s decision to condition scanning on a toggle switch is a policy decision and not a technical restriction as it was in the past with server-side scanning.

This is not a meaningful distinction. Many security and privacy protections in iOS are equivalently "policy" decisions: letting iCloud Backups be turned off; not sending a copy of your device passcode to Apple's servers; not MITMing iMessage, which has no key transparency; not using the existing Photos intelligence to detect terrorist content; and so on. In technical terms, there are many paths to full-device scanning, and some of those paths were well trodden even a month ago (iCloud Backup and Spotlight, for starters, and Photos intelligence as a direct comparison).

Making this claim also requires showing that the likelihood of Apple making one of many undesirable policy decisions has changed.

> Regarding which content governments want Apple to scan for, we already have some idea of that.

I asked about what Apple will scan for, not what governments want them to scan for. Again, I see a pattern in your argument where you state what governments want but don't explain how that desideratum translates into what Apple builds. The latter is the entire source of ambiguity for me.

> Apple’s deployment responds to that request in a way that their existing detection systems (and industry standard systems) did not.

I could see that. If that's what Apple had built, would you have a different take on the system? It seems not: most of the risks you care about are unchanged, since you are operating in a world of "policy decision" equivalence classes.

> I’m not sure how to respond to your requests for certainty and proof around future actions that might be taken by a secretive company.

You seem to misunderstand what I said. I'm asking for an estimate of your certainty, not absolute certainty. Furthermore, I'm asking you to give examples of what information would change your mind for the same reason you repeatedly call me stubborn and accuse me of bad faith: without that, I have no idea whether we are engaged in a discussion or a shouting match.