Comment by dwaite
1 month ago
Only in a very broad sense, in that both use HE (homomorphic encryption) to prevent the server from seeing what happened.
The CSAM tech detected matches against particular photos catalogued by law enforcement, and only produced external evidence of a match once a threshold was crossed (e.g. enough positive matches allowed a private key to be reconstructed). It was not meant to do topical matches (e.g. an arbitrary child in a bathtub), and it had protections to make it significantly harder to manufacture false positives, e.g. a kitten photo with noise manipulated into it so that it met the threshold to match some known image in the dataset.
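The "enough positive matches reconstructed a private key" mechanism can be sketched with threshold secret sharing. This is a toy Shamir-style sketch of the general idea, not Apple's actual construction: each positive match releases one share, and the decryption key only becomes recoverable once `t` shares exist.

```python
# Toy (t, n) threshold secret sharing, Shamir-style. Illustrative stand-in
# for the threshold mechanism described above -- NOT production crypto.
import random

P = 2**127 - 1  # prime modulus for the finite field


def make_shares(secret, t, n):
    # Random polynomial of degree t-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]


def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret


key = 123456789
shares = make_shares(key, t=3, n=10)  # one share per positive match
assert reconstruct(shares[:3]) == key  # 3 matches: key recovered
```

With fewer than `t` shares, the polynomial is underdetermined and every candidate secret remains equally likely, which is why isolated matches reveal nothing to the server.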
This gives a statistical likelihood of matching a cropped image of a landmark-like object against known landmarks, based on sets of photos of each landmark (like "this is probably the Eiffel Tower"), and that likelihood is visible only to the phone. There's also significantly less risk around abuse (someone making a kitten photo come up as "The Great Wall of China").
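The "only the phone can see the score" property is what homomorphic encryption provides. As a toy illustration (using textbook Paillier with insecure demo primes, which is a simpler additively homomorphic scheme than whatever Apple actually deploys), a server can compute a similarity score on an encrypted feature vector without ever being able to decrypt it:

```python
# Toy Paillier encryption: the server computes a dot product on ciphertexts;
# only the holder of the private key (the phone) can read the result.
# Demo-sized primes -- NOT secure, purely illustrative.
import math
import random

p, q = 1000003, 1000033            # small demo primes
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
g = n + 1
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)


def enc(m):
    # Phone side: encrypt a plaintext integer m < n.
    r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2


def dec(c):
    # Phone side: decrypt with the private key (lam, mu).
    return (pow(c, lam, n2) - 1) // n * mu % n


client_vec = [3, 1, 4]             # phone's secret image features
server_vec = [2, 7, 1]             # server's landmark reference features
enc_vec = [enc(v) for v in client_vec]

# Server side: Enc(a)*Enc(b) = Enc(a+b) and Enc(a)^k = Enc(k*a),
# so this accumulates Enc(dot(client_vec, server_vec)) blindly.
enc_score = 1
for c, w in zip(enc_vec, server_vec):
    enc_score = enc_score * pow(c, w, n2) % n2

assert dec(enc_score) == 3 * 2 + 1 * 7 + 4 * 1  # only the phone sees 17
```

The server only ever handles ciphertexts, so the match likelihood never exists in the clear on its side.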