Comment by fingerlocks
4 years ago
You can’t do a byte-for-byte hash on images, because a slight resize or minor edit completely changes the hash without meaningfully changing the image.
But image hashes can be “perceptual”: visually similar images produce similar hashes, and the hash only changes as much as the image does. This is how reverse image search works, and why it works so well.
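For anyone curious what a basic perceptual hash looks like, here is a minimal sketch of the classic "average hash" in Python using Pillow. This is not Apple's NeuralHash (that's a neural-network embedding), just the simplest illustration of the idea: similar-looking images get hashes that differ in only a few bits, where a cryptographic hash would change completely. The file names are hypothetical.

    from PIL import Image

    def average_hash(path: str, hash_size: int = 8) -> int:
        # Shrink to a tiny greyscale thumbnail so resizes and minor edits wash out.
        img = Image.open(path).convert("L").resize((hash_size, hash_size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        # One bit per pixel: brighter than the thumbnail's mean or not.
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (p > avg)
        return bits

    def hamming(a: int, b: int) -> int:
        # Count of differing bits; a small distance means perceptually similar images.
        return bin(a ^ b).count("1")

    # A resized copy should land within a few bits of the original,
    # whereas a SHA-256 of the file bytes would be completely different:
    # hamming(average_hash("cat.jpg"), average_hash("cat_resized.jpg"))

As I understand it, NeuralHash swaps the shrink-and-threshold step for a neural-network embedding so that crops and filters also land nearby, but the underlying match-by-similar-hash idea is the same.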
Sure, I get how it works, but I feel like false positives are inevitable with this approach. That wouldn't necessarily be an issue under normal police circumstances, where there's a warrant and a real person reviewing things, but it feels really dangerous here. As I mentioned, accusations along these lines have a habit of sticking regardless of reality - indeed, irrational FUD around the Big Three (terrorism, paedophilia and organised crime) is the only reason Apple are getting a pass on this.
There is also a threshold number of flagged pictures that must be reached before an account is actually classified as a "positive" match.
The claim is that, at that threshold, the chance of an account being a false positive is one in a trillion.
> Apple says this process is more privacy mindful than scanning files in the cloud as NeuralHash only searches for known and not new child abuse imagery. Apple said that there is a one in one trillion chance of a false positive.
https://techcrunch.com/2021/08/05/apple-icloud-photos-scanni...
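To put rough numbers on why the threshold matters: the account-level false-positive rate falls off a cliff as the threshold rises. The per-image false-positive rate (1e-6) and threshold (30) below are made-up illustrative values, not Apple's published parameters.

    from math import exp, lgamma, log, log1p

    def log_binom_pmf(n: int, k: int, p: float) -> float:
        # log P(exactly k successes in n independent Bernoulli(p) trials)
        return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                + k * log(p) + (n - k) * log1p(-p))

    def p_account_flagged(n_photos: int, per_image_fpr: float, threshold: int) -> float:
        # P(at least `threshold` per-image false positives), summed in log
        # space so the huge binomial coefficients don't overflow a float.
        return sum(exp(log_binom_pmf(n_photos, k, per_image_fpr))
                   for k in range(threshold, n_photos + 1))

    print(p_account_flagged(10_000, 1e-6, 1))   # ~1e-2: one bad match would flag many users
    print(p_account_flagged(10_000, 1e-6, 30))  # ~4e-93: the threshold does the heavy lifting

Of course, this arithmetic assumes per-image false positives are independent, which is the assumption baked into any headline one-in-a-trillion figure.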