Only in the very broad sense that they use HE to prevent the server from seeing what happened.
The CSAM tech detected matches against particular photos captured by law enforcement, and provided external evidence of the match (e.g. enough positive matches reconstructed a private key). It was not meant to do topical matches (e.g. an arbitrary child in a bathtub), and it had protections to make it significantly harder to manufacture false positives, e.g. manipulating noise in a kitten photo so it crossed the threshold and matched some known image in the dataset.
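The "enough positive matches reconstructed a private key" part is essentially t-of-n threshold secret sharing: each positive-match voucher carries a share of a per-account key, and the server can only recover that key (and decrypt anything) once the threshold is crossed. A minimal sketch of the threshold idea, not Apple's actual construction; the helper names, threshold, and counts below are made up for illustration:

    # Toy t-of-n Shamir secret sharing over a prime field (Python).
    import random

    PRIME = 2**127 - 1  # a Mersenne prime; fine for a toy field

    def make_shares(secret, t, n):
        # Degree-(t-1) polynomial with the secret as the constant term;
        # shares are points on it, so any t of them pin it down.
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
        def f(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the constant term.
        secret = 0
        for xi, yi in shares:
            num, den = 1, 1
            for xj, _ in shares:
                if xj == xi:
                    continue
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    account_key = random.randrange(PRIME)            # stand-in for a per-account key
    shares = make_shares(account_key, t=30, n=1000)  # one share per match voucher
    assert reconstruct(random.sample(shares, 30)) == account_key
    # Fewer than t shares interpolate to essentially random garbage,
    # so a single match (or a handful) gives the server nothing.

The point is that no individual match is visible to the server; the key only comes into existence once enough matches have accumulated.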
This gives a statistical likelihood that a cropped image of a landmark-like object matches a known landmark, based on sets of photos of each landmark (like "this is probably the Eiffel Tower"), and only the phone can see that likelihood. There's also significantly less risk around abuse prevention (someone making a kitten photo come up as 'The Great Wall of China').
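To show the shape of that lookup: the phone encrypts its embedding, the server does the similarity math directly on ciphertexts against its plaintext landmark database, and the resulting score can only be decrypted on the phone. A toy sketch using Paillier (additively homomorphic) rather than the lattice-based scheme a production system would use; the embeddings and numbers are invented and far too small to be secure:

    # Toy homomorphic similarity lookup (Paillier). The server never sees
    # the query or the score; only the phone holds the private key.
    import math, random

    p, q = 104729, 1299709            # toy primes (the 10,000th and 100,000th)
    n, n2, g = p * q, (p * q) ** 2, p * q + 1
    lam = math.lcm(p - 1, q - 1)
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

    def encrypt(m):                   # phone-side: needs only the public key
        while True:
            r = random.randrange(1, n)
            if math.gcd(r, n) == 1:
                return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):                   # phone-side: needs the private key
        return ((pow(c, lam, n2) - 1) // n) * mu % n

    # Phone: encrypt a quantized image embedding component-wise.
    query = [3, 1, 4, 1, 5]
    enc_query = [encrypt(v) for v in query]

    # Server: encrypted dot product against a plaintext landmark embedding.
    # Multiplying ciphertexts adds plaintexts; exponentiation scales them.
    landmark = [2, 7, 1, 8, 2]
    enc_score = 1                     # a valid encryption of 0
    for c, w in zip(enc_query, landmark):
        enc_score = enc_score * pow(c, w, n2) % n2

    # Phone decrypts the similarity score; the server never learns it.
    assert decrypt(enc_score) == sum(a * b for a, b in zip(query, landmark))

The design point is that the server does useful work (scoring against its database) while the match likelihood stays readable only on the device.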
As pointed out in a sibling comment, the result set is also encrypted, so matches with abuse images, even if there are some in Apple's POI database, can't be used to implement the scheme as you suggest.