Comment by mrshadowgoose
1 month ago
Is this just a smokescreen around slowly sneaking CSAM scanning back in after the pushback last time? The "default on" behavior is suspect.
[1] https://www.wired.com/story/apple-photo-scanning-csam-commun...
My thoughts exactly: "we've got this crafty image fingerprinting, the CSAM detection use proved too controversial to roll out, but let's get the core flows into something that sounds useful for users, so the code stays alive, improving, & ready for future expansion."
Whether such fingerprinting can reliably be limited to public "landmarks" is an interesting question, dependent on unclear implementation details.
Even if the user-visible search is limited to 'landmarks', does the process pre-create (even if only on-device) fingerprints of many other things as well? If so, it suddenly becomes possible for briefly-active non-persistent malware to instantly find images of interest without the wider access & additional processing it'd otherwise take.
> let's get the core flows into something that sounds useful for users
is it even that?
I don't see the benefit of this whatsoever
The search feature is useful at times, and while local processing is good enough to find (some of the) photos I've taken that match a search term like "table", it can't currently find a photo from a search term like "specific neighbourhood in my city" or "name of specific mountain I climbed years ago" - so if processing on their servers allows them to do that, it would be genuinely beneficial.
But not beneficial enough to make up for the loss of privacy, so I've disabled it without finding out how useful or not the functionality is.
Yup, this is their way of injecting the "phone home" element via an innocuous rationale, "location matching". The global index will of course also match against other markers they deem worthy of matching, even if they don't return that to the user.
But wouldn't the homomorphic encryption prevent Apple's servers from knowing if there was a match or not?
The server must know what it's matching at some point, to be able to generate a response:
> The server identifies the relevant shard based on the index in the client query and uses HE to compute the embedding similarity in this encrypted space. The encrypted scores and set of corresponding metadata (such as landmark names) for candidate landmarks are then returned to the client.
Even with the server supposedly not knowing the identity of the client, the response could simply include extra metadata like some flag that then triggers an instant send of that photo to Apple's (or law enforcement's) servers unencrypted. Who knows?
[0] https://machinelearning.apple.com/research/homomorphic-encry...
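For anyone wondering what that round trip might look like concretely, here's a minimal sketch of the flow described in the quoted passage, using the open-source TenSEAL CKKS library as a stand-in for Apple's homomorphic-encryption stack. The embeddings, landmark names, and parameters are all made up for illustration; this is not Apple's implementation.

    import tenseal as ts

    # Client-side setup: CKKS context. In a real deployment the server would
    # only ever receive a public copy of this context, never the secret key.
    context = ts.context(ts.SCHEME_TYPE.CKKS,
                         poly_modulus_degree=8192,
                         coeff_mod_bit_sizes=[60, 40, 40, 60])
    context.global_scale = 2 ** 40
    context.generate_galois_keys()

    # Client: compute a photo embedding on-device (hypothetical values here),
    # encrypt it, and send only the ciphertext plus a coarse shard index.
    photo_embedding = [0.12, -0.48, 0.33, 0.91]
    encrypted_query = ts.ckks_vector(context, photo_embedding)

    # Server: holds plaintext embeddings for the candidate landmarks in the
    # selected shard and computes dot-product similarity directly on the
    # ciphertext, so it never sees the photo embedding or the scores.
    landmark_shard = {
        "Golden Gate Bridge": [0.10, -0.50, 0.30, 0.90],
        "Eiffel Tower":       [-0.70, 0.20, 0.55, -0.10],
    }
    encrypted_scores = {name: encrypted_query.dot(vec)
                        for name, vec in landmark_shard.items()}

    # Client: only the device holds the secret key, so only it can decrypt
    # the similarity scores and decide which landmark (if any) matched.
    for name, enc_score in encrypted_scores.items():
        print(name, round(enc_score.decrypt()[0], 3))

The metadata (landmark names here) returned alongside the encrypted scores is exactly the part the parent worries about: the client decides what to do with it, but the protocol itself doesn't prevent extra fields from being tacked on.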
Not if you need to access it from multiple devices (otherwise, what's the point of this feature?)
In that case, the source of the common key shared by "the same account" becomes the threat,
and now you have to trust... a megacorporation with a walled-garden ecosystem... not to access its own servers in your place?
Honestly, why the hell would Apple bother with such a contrived and Machiavellian strategy to spy on their users?
They literally own the code to iOS. If they wanted to covertly track their customers, they could just have their devices phone home with whatever data they wanted to collect. Realistically there would be no way to know if this was actually happening, because modern devices emit so much encrypted data anyway that it wouldn't be hard to hide something nefarious in all the noise.
Tim Cook isn't some Bond villain, sitting in a giant chair, stroking a white cat, plotting to take over the world by lulling everyone into a false sense of privacy (I mean, Zuckerberg already did that). Apple is just a giant corporation that wants to make money, and is pretty damn open about that fact. They clearly think they can make more money by doubling down on privacy, but that doesn't work if you don't actually provide the privacy, because ultimately people are really crap at keeping secrets, especially when a media group would happily pay for a story, even at Apple.
Yeah, that sorta already exists. If you've ever done remote customer support, they can send a query to remotely view your screen -- a query which you have to accept or deny. There's really zero reason there couldn't be a similar feature, but without asking you, and without putting a big red bar at the top of your screen that says "Apple Support is viewing your screen". Wish I had a screenshot of that, but I can't seem to find one online unfortunately.
Exactly like how Microsoft "backed off" Recall. Uuuuuntil they shoved it back in and made it undeleteable.
By removing it from the market, making enormous technical tweaks based on user feedback, and then putting it back on the market.
Yes, my thoughts as well. I guess the tech was so expensive that they needed to test/run it to prove it's private? I mean, couldn't the model that finds landmarks in your photos run locally as well? OK, I'm not 100% sure here.
I assume that the model couldn’t run locally for some reason. Probably either uses too much power or needs too much memory.
No, it is not. Whatever their other failings, Apple doesn’t think that way.
The cynical reason: consider that you can’t plan engineering features of this scale without written documentation, which will always surface in court.
The prima facie reason: Apple genuinely wants to provide useful features requiring server participation.
This is incredibly naive and curiously defensive.
If this were a feature on its own, it would not be popular.
Citing national security or some other danger will justify its existence.
Apple alone does not control and dictate what goes in; once you reach their level of size and wealth, exceeding even some developed countries, you ultimately cannot be the master of your own destiny purely as a profit-oriented corporation.
e.g. Meta, Microsoft, Google
Very likely yes. Why else would they add a feature that incurs costs for them as an update, at no cost to the users (and not even make a fuss about it)?
It is obvious they are monetizing this feature somehow. Could be as innocuous as them training their AI dataset, or feeding into their growing ad business (locations and other things identified in the photos), or collaboration with law enforcement for various purposes (such as notifying the CCP about people's Winnie-the-Pooh memes), or a lot more ominous things.
> Very likely yes. Why else would they add a feature that incurs costs for them as an update, at no cost to the users (and not even make a fuss about it)?
Erm, you're aware of the whole Apple Intelligence thing, right? An entire product that costs Apple money, provided at "no cost" to the user (if you had an iPhone 15). Also, every feature in an OS update has a cost associated with it, and iOS updates have cost money for the best part of a decade now.
Has it occurred to you that the reason Apple includes new features in their updates is to give customers more reasons to buy more iPhones? Just because features are provided at "no cost" at the point of consumption doesn't mean Apple won't make money in the long run, and selling user data isn't the only way to monetise these features. Companies were giving out "freebies" for centuries before the internet existed, and before the possibility of large-scale data collection and trading was even imaginable.
Of course, but those would be features in the product code deployed on users' devices (a one-time investment), not a service with ongoing operating costs associated with each call. They wouldn't just give this out for free (especially to older iPhones); it makes no business sense. If you're not paying for a product, you are the product!
That whole incident was so misinformed.
CSAM scanning takes place in the cloud with all the major players. It only has hashes for the worst of the worst stuff out there.
What Apple (and others) do is allow the file to be scanned unencrypted on the server.
The feature Apple wanted to add would scan the files on the device and flag anything that got a match.
The flagged file could then be decrypted on the server and checked by a human. Everything else was encrypted in a way that it could not be looked at.
If you had iCloud disabled it could do nothing.
The intent was to protect data, children and reduce the amount of processing done on the server end to analyse everything.
Everyone lost their mind yet it was clearly laid out in the papers Apple released on it.
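As a rough illustration of that flow (not Apple's NeuralHash, which is a neural perceptual hash with its own threshold scheme), here's what on-device matching against a known-hash list with a human-review threshold can look like, using the open-source imagehash/Pillow libraries; the hash list, distance cutoff, and threshold are invented for the example:

    from PIL import Image
    import imagehash

    KNOWN_BAD_HASHES = set()   # would be populated from the vendor-supplied list,
                               # e.g. via imagehash.hex_to_hash("...")
    MATCH_DISTANCE = 4         # max Hamming distance still counted as a match
    REVIEW_THRESHOLD = 3       # hits required before anything is surfaced for review

    def scan_photo(path, hit_count):
        """Scan one photo before upload; return the updated hit count."""
        photo_hash = imagehash.phash(Image.open(path))
        # Hamming distance between perceptual hashes: small distance = likely match.
        if any(photo_hash - bad <= MATCH_DISTANCE for bad in KNOWN_BAD_HASHES):
            hit_count += 1
        # Nothing is flagged until the threshold is crossed; only then would the
        # matched items become decryptable for human review on the server.
        if hit_count >= REVIEW_THRESHOLD:
            print("threshold reached: flag account for human review")
        return hit_count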
Apple sells their products in oppressive regimes which force them to implement region-specific features. E.g. China has its own iCloud, presumably so it can be easily snooped on.
If they were to add this anti-CSAM feature, it is not unreasonable to think that Apple would be forced to add non-CSAM stuff to the database in those countries, e.g. anything against a local dictatorship, etc. Adding the feature would only catch the low-hanging CSAM fruit, at a great cost to privacy and probably human lives. If it were going to stop CSAM once and for all, it could possibly be justified, but that's not the case.
If China can force Apple to do that stuff, then it can do that regardless of whether or not they add this feature.
Apple and others already scan people's pictures/videos for this stuff, so your argument applies just as much to the status quo.
Apple's proposal would have meant your data was better protected, as even they would not have been able to decrypt it.
"It only has hashes for the worst of the worst stuff out there." [citation needed]
I know someone whose MS account was permabanned because they had photos of their own kid in the bathtub. I mean, I guess the person could have been lying, but I doubt they would even have been talking about it if the truth was less innocuous.
Sure, and they do that because Microsoft's CSAM detection product (which other providers like Google supposedly use) operates by having unencrypted data access to your files in the cloud.
What Apple wanted to do is do those operations using homomorphic encryption and threshold key release so that the data was checked while still encrypted, and only after having a certain number of high likelihood matches would the possibility exist to see the encrypted data.
So the optimistic perspective was that it was a solid win against the current state of the industry (cloud accounts storing information unencrypted so that CSAM products can analyze data), while the pessimistic perspective was that your phone was now acting as a snitch on your behavior (slippery slope etc.)
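The "threshold key release" part is the interesting bit. A toy way to picture it: each high-likelihood match releases one share of a decryption key, and the key only becomes reconstructable once enough shares exist. Below is plain Shamir secret sharing over a prime field as an illustration; Apple's actual construction and parameters differ.

    import random

    PRIME = 2 ** 127 - 1   # a Mersenne prime, large enough for a toy key

    def split_secret(secret, threshold, num_shares):
        """Split `secret` into shares; any `threshold` of them recover it."""
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
        def f(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, num_shares + 1)]

    def recover_secret(shares):
        """Lagrange interpolation at x = 0 reconstructs the secret."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
        return secret

    # Example: the key is split so that 3 matches are needed to reconstruct it.
    key = 123456789
    shares = split_secret(key, threshold=3, num_shares=10)
    print(recover_secret(shares[:3]) == key)   # True: 3 shares recover the key
    print(recover_secret(shares[:2]) == key)   # False: 2 shares reveal nothing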
> [citation needed]
It is actually detailed in Apple's paper. Also:
https://www.interpol.int/en/Crimes/Crimes-against-children/I...
It works by generating a hash of known material. Those hashes are shared with other companies so they can find that material without having to see the horrific stuff. The chance of a hash collision was also detailed in the paper, and it is so low as to be practically non-existent. Even if a collision occurs, a human still reviews the material, and it normally takes a couple of hits to trigger an audit (again, according to Apple's paper on it).
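To make the sharing point concrete: a perceptual hash is just a short digest, so companies can exchange hex strings and compare against them without ever exchanging the images themselves. A quick sketch with the open-source imagehash library (a blank test image stands in for real content):

    from PIL import Image
    import imagehash

    # Company A computes a perceptual hash of known material and shares only
    # the hex digest, never the image itself.
    image = Image.new("RGB", (64, 64), "white")   # placeholder test image
    shared_digest = str(imagehash.phash(image))

    # Company B reconstructs the hash from the hex string and compares its own
    # uploads against it by Hamming distance.
    known_hash = imagehash.hex_to_hash(shared_digest)
    candidate_hash = imagehash.phash(image)
    print(candidate_hash - known_hash)   # 0 here: identical perceptual hash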
> I know someone whose MS account was permabanned because they had photos of their own kid in the bathtub
So you ask me for a citation and then give me anecdotal evidence?
Even if that happened it has nothing to do with CSAM.
I can't believe how uninformed, angry, and yet still willing to argue people were over this. The whole point was a very reasonable compromise between a legal requirement to scan photos and keeping photos end-to-end encrypted for the user. You can say the scanning requirement is wrong; there are plenty of arguments for that. But Apple went so far above and beyond to try to keep photo content private and provide E2E encryption while still trying to follow the spirit of the law. No other big tech company even bothers, and somehow Apple is the outrage target.
> a legal requirement to scan photos
Can you provide documentation demonstrating this requirement in the United States? It is widely understood that no such requirement exists.
There's no need to compromise with any requirement, this was entirely voluntary on Apple's part. That's why people were upset.
> I can't believe how uninformed
Oh the irony.
> a legal requirement to scan photos
There is absolutely no such legal requirement. If there were one it would constitute an unlawful search.
The reason the provider scanning is lawful at all is because the provider has inspected material voluntarily handed over to them, and through their own lawful access to the customer material has independently and without the direction of the government discovered what they believe to be unlawful material.
The cryptographic functionality in Apple's system was not there to protect the user's privacy; it instead protected Apple and their data sources from accountability by concealing the fingerprints that would cause users' private data to be exposed.
There isn’t a law that requires them to proactively scan photos. That is why they could turn the feature back off.
> What the feature Apple wanted to add was scan the files on the device and flag anything that gets a match.
This is not the revelation you think it is. Critics understood this perfectly.
People simply did not want their devices scanning their content against some opaque uninspectable government-controlled list that might send you to jail in the case of a match.
More generally, people usually want their devices working for their personal interests only, and not some opaque government purpose.
From my understanding, it didn't scan all of the files on the device, just the files that were being uploaded to Apple's iCloud. It was set up to scan the photos on the device because the files were encrypted before they were sent to the cloud, so Apple couldn't access the contents but still wanted to make sure that their cloud wasn't storing anything that matched the hashes of known bad content.
If you never uploaded those files to the cloud, the scanning wouldn't catch any files that are only local.
> People simply did not want their devices scanning their content against some opaque uninspectable government-controlled list that might send you to jail in the case of a match.
Again I feel like many people just didn't read/understand the paper.
As it stands now, all your files/videos are scanned by all the major cloud companies.
Even if you get a hit against the database, the hash doesn't put you in jail. The illegal material does, and a human reviews it before making a case.
That technology of perceptual hashes could have failed in numerous ways, ruining the lives of law-abiding users along the way.
The chance of a hash colliding is near 0%. The hashes are for some of the worst content out there; it's not trying to detect anything else.
Even so, a human is in the loop to review whatever gets a hit, which is exactly what happens now.
Yes, this is better than uploading the entire photo. Just as virus scanning can be done entirely on-device, can the flagging be local? If homomorphic encryption allows similarity matching, it does not seem entirely private. Can people be matched?
> The intent was to protect data, children and reduce the amount of processing done on the server end to analyse everything.
If it’s for the children, then giving up our civil liberties is a small price to pay. I’d also like to give up liberties in the name of “terrorism”.
When we willingly give up our rights out of fear, these evil people have won.
> If it’s for the children, then giving up our civil liberties is a small price to pay.
All your pictures and videos are currently scanned. What civil liberty did their approach change in that?
> Everyone lost their mind yet it was clearly laid out in the papers Apple released on it.
And people working with CSAM and databases of CSAM have said it was a very bad idea.
Citation needed, as the latest news suggests the opposite.