Comment by pornel
4 years ago
I see it as a huge risk too.
If the algorithm and the blocklists leaked, it would not only be possible to develop tools that reliably modify CSAM to avoid detection, but also to generate new innocent-looking images that are caught by the filter. That could be used to overwhelm law enforcement with false positives, and it could also be weaponized for swatting.
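As a toy sketch of that risk (using a trivial average-hash stand-in, not Apple's actual NeuralHash, and a hypothetical "leaked" hash): if both the hash function and the blocklist were available offline, even a naive perturbation search could find unrelated inputs that collide with a listed entry:

    import numpy as np

    def toy_phash(img):
        # Average-hash style perceptual hash of a 3x3 grayscale patch (9 bits).
        # Stand-in for a real perceptual hash; NOT Apple's algorithm.
        bits = (img > img.mean()).astype(int)
        return int("".join(map(str, bits)), 2)

    rng = np.random.default_rng(0)

    # Pretend this is one leaked blocklist entry: the hash of some target image.
    leaked_hash = toy_phash(rng.random(9))

    # Start from an unrelated "innocent" image and nudge pixels until it collides.
    candidate = rng.random(9)
    steps = 0
    while toy_phash(candidate) != leaked_hash:
        candidate[rng.integers(9)] = rng.random()
        steps += 1
    print(f"collision after {steps} random perturbations")

A real attack on a neural perceptual hash would use gradient-based optimization rather than random flips, but the point is the same: offline access to both the hash function and the list makes collision search cheap.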
Fortunately, it seems that the matching is split between the client and the server, so extracting the database from the device will not easily enable generation of matching images.
https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...
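Roughly why that split helps (a minimal sketch, assuming a keyed blinding step on the server; Apple's paper actually uses an elliptic-curve PSI construction, not HMAC):

    import hmac, hashlib, secrets

    # Server-side setup: blind each blocklist hash with a key that never
    # leaves the server. Toy stand-in for the blinding in Apple's protocol.
    server_key = secrets.token_bytes(32)

    def blind(h: bytes) -> bytes:
        return hmac.new(server_key, h, hashlib.sha256).digest()

    # Hypothetical blocklist entries, already reduced to perceptual hashes.
    blocklist = [b"hash-of-known-image-1", b"hash-of-known-image-2"]
    on_device_db = {blind(h) for h in blocklist}

    # An attacker who extracts on_device_db from the phone cannot test whether
    # a candidate image matches, because computing blind() requires server_key.

So possession of the on-device database alone shouldn't be enough to run the collision search locally; the final match decision still depends on a server-side secret.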