To quote another tweet from Matthew Green, the author of the Twitter thread (https://twitter.com/matthew_d_green/status/14231103447303495...):
> Regardless of what Apple’s long term plans are, they’ve sent a very clear signal. In their (very influential) opinion, it is safe to build systems that scan users’ phones for prohibited content.
> That’s the message they’re sending to governments, competing services, China, you.
Is it? That’s just something the tweets have read into Apple’s announcement.
The message could equally well be ‘We won’t become an easy political target by ignoring a problem most people care about, like child porn, but we are going to build a point solution to that problem, so the public doesn’t force us to bow to government surveillance requests.’
It’s easy to nod along with an anti-Apple slogan, but we need to consider what would happen if they didn’t do this.
If Apple thought this kind of dragnet was a losing political fight, that tells me they've become too weak to stand up to unreasonable government demands. Where is the company that won the public-opinion battle over unlocking a mass shooter's phone?
False positives, poisoned hash sets, engineered collisions, etc. And what happens when you come up positive - does the local sheriff just get a warrant and SWAT you at that point? Is a hash match by itself prosecutable? Is it enough to get your teeth kicked in, or to get you informally labeled a pedo by your local police? On the flip side, since it's running on the client, could actual pedophiles use it to mutate their images until they evade the hashing algorithm?
False positives are clearly astronomically unlikely. Not a real issue.
Engineered collisions seem unlikely too - not impossible, but unless there is a straight-up cryptographic defect in the hash algorithm, it's hard to see how they could be produced at any scale.
At Apple's scale, a one-in-a-million issue will ruin the lives of roughly 2,000 people. A false positive here is not a mild inconvenience: it means police raiding their house, potentially damaging it, seizing all of their technology for months while it is analyzed, and leaving these people highly stressed while they try to put their lives back together.
This isn't some web-tech startup where a mistake means someone's t-shirt got sent to the wrong address. People's lives will quite literally be ruined over mistakes here.
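A quick back-of-envelope makes the parent's arithmetic explicit; the device count and failure rate below are illustrative assumptions, not figures Apple has published:

```python
# Back-of-envelope for the parent's numbers. Both figures below are
# assumptions for illustration, not anything Apple has published.
ACTIVE_DEVICES = 2_000_000_000   # rough order of magnitude for Apple's install base
FALSE_POSITIVE_RATE = 1e-6       # hypothetical "once in a million" per-user rate

expected_false_accusations = ACTIVE_DEVICES * FALSE_POSITIVE_RATE
print(expected_false_accusations)  # -> 2000.0 people
```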
If this were the kind of hash where flipping one bit of the input completely scrambles the output, the bad guys would just flip one bit of the input to evade it. Obviously a PhotoDNA-type hash is going to be easier to cause a collision with, because it averages out a ton of the input data. According to Wikipedia, the classic way to do it is to convert the image to monochrome, divide it into a grid, and average the shade of each cell. If they're doing that, you could probably just pass in that intermediate grid and it would "hash" to the same result as the original picture, with no porn present.
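For concreteness, here is a minimal sketch of that grid-averaging scheme (the classic "average hash"); it is a toy in the PhotoDNA family, not Apple's actual algorithm:

```python
# Minimal "average hash" sketch of the monochrome-grid scheme described
# above. A toy in the PhotoDNA family, NOT Apple's algorithm.
from PIL import Image  # pip install Pillow

def average_hash(path: str, grid: int = 8) -> int:
    """Downscale to a grid x grid greyscale image, then emit one bit per
    cell: 1 if the cell is brighter than the overall mean, else 0."""
    img = Image.open(path).convert("L").resize((grid, grid))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Perceptual hashes match on *small* bit distance, not equality."""
    return bin(a ^ b).count("1")

# Because everything is averaged, flipping one pixel barely moves the
# hash -- and an upscaled copy of the grey grid itself "hashes" to the
# same value as the original photo, as the parent comment notes.
```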
Why do you think that? There are plenty of whitepapers on fooling NNs with tiny, carefully crafted pixel perturbations, so that the picture is not meaningfully changed for a person but the computer labels it very differently. Do note that these are not cryptographic hashes, because they have to recognize the picture even when it's compressed differently, cropped a bit, etc.
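To illustrate the idea (against a toy model, not any real perceptual hash): with a linear scorer the gradient is just the weight vector, so a gradient-sign perturbation shows how tiny, structured changes move an output far more than random noise does.

```python
# Toy FGSM-style illustration: a tiny, structured perturbation moves a
# model's output far more than random noise. A stand-in linear "scorer"
# replaces a real NN / perceptual hash; purely illustrative.
import numpy as np

rng = np.random.default_rng(42)
w = rng.normal(size=10_000)          # stand-in for model weights
x = rng.normal(size=10_000)          # stand-in for image pixels

eps = 0.02                           # imperceptible per-pixel budget
x_adv = x - eps * np.sign(w)         # step against the gradient (= w here)
x_rand = x + eps * rng.choice([-1.0, 1.0], size=10_000)  # same-size random noise

print(w @ x)       # original score
print(w @ x_adv)   # drops by eps * sum(|w|), roughly 160: a huge, directed shift
print(w @ x_rand)  # barely moves: random changes mostly cancel out
```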
Perceptual hashes aren't cryptographic. The Twitter thread has examples of engineered collisions for a simpler perceptual hash.
A country can collect a list of people sharing any content they put on a hash list.
Like gay porn, a 'save Khashoggi' meme, or a photo from a documentary about missing Uighurs.
It's hard to imagine how this could be misused, right?
That seems like a real problem, and of course it could be misused; however, nothing revealed so far actually tells us whether it is possible: e.g., how the hashes are computed, where they come from, and what happens when a positive match is detected.
Until we have a clear understanding of these things, the rest is just speculation.
The hashes will of course be provided by local governments, who have the ultimate authority (because they can forbid Apple from selling there, and Tim Cook never says no to money).