Comment by Closi
4 years ago
> While the difference between innocent images and something explicit is easy for a human to identify, I’m not sure I’d trust AI to understand that nuance.
In this case it isn’t the AI that understands the nuance: the authorities identify the exact pictures they want to track, and this tool then identifies which phones/accounts hold those photos (or presumably took them). If ‘AI’ is used here at all, it is to detect whether one photo contains all or part of another, not to judge whether the photo itself is abusive (a rough sketch of this kind of fingerprint matching is below).
Although there is a legitimate slippery slope argument to be had here.
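To make the mechanism concrete, here is a minimal sketch of one classic perceptual fingerprint, a difference hash ("dHash"). This illustrates the general technique only, not Apple's actual algorithm (which reportedly uses a neural-network-based hash); the hash size and match threshold are assumptions for the example.

```python
# Minimal perceptual-hash sketch (dHash). Illustrative only: NOT Apple's
# actual algorithm; hash size and threshold are assumed for the example.
from PIL import Image

def dhash(img: Image.Image, hash_size: int = 8) -> int:
    # Shrink to (hash_size + 1) x hash_size grayscale so only coarse
    # structure survives; recompression or small edits barely change it.
    small = img.convert("L").resize((hash_size + 1, hash_size), Image.LANCZOS)
    px = list(small.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = px[row * (hash_size + 1) + col]
            right = px[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (left > right)  # one bit per gradient sign
    return bits

def matches(a: int, b: int, threshold: int = 5) -> bool:
    # Fingerprints are compared by Hamming distance, so near-duplicates
    # (resized, recompressed, lightly cropped copies) still match.
    return bin(a ^ b).count("1") <= threshold

# A scanner would then check each photo against the fingerprint database:
#   flagged = any(matches(dhash(Image.open(path)), fp) for fp in database)
```

Note that the matching itself is mechanical: the device only learns whether a photo's hash is close to an entry in the list, and outsiders see only opaque hashes, which is exactly where the auditability questions below come in.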
Is there some way of verifying that the fingerprints in this database will never match sensitive documents on their way from a whistleblower to journalists, or anything else that isn't strictly illegal? How will this tech be repurposed over time once it's in place?
You seem to be suggesting that the AI will go directly from scanning your photos for incriminating fingerprints to reporting you to journalists.
I have to assume humans are involved at some point before journalists are notified. The false positive will be cleared up and no reputations sullied (except perhaps the reputation of using AI to scan for digital fingerprints).
>The false positive will be cleared up and no reputations sullied...
This is dangerously naive. The US justice system alone will hound people on goosed-up charges, push them to accept a plea deal, and have them sign a bogus confession; parallel construction is a well-documented practice. Additionally, if you can't audit the database (I'd bet very few people can, including your senator), how do you know a hash of something that isn't CP hasn't been inserted into it? This entire system is ripe for government overreach, and it's worse than usual because there will be no public evidence when it's abused.
It's the other way around. If the database of fingerprints is unauditable, and especially if it varies from country to country, then it would be very easy to add fingerprints for classified documents, for photos documenting known war crimes, or even just for copyrighted material to close the so-called analog hole.
Documents could also be engineered to trigger false positives, making it difficult or impossible for a corporate whistleblower to photograph incriminating evidence to deliver to the authorities.
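For simple perceptual hashes this is not hypothetical. Reusing dhash() and matches() from the sketch in the comment above, here is a toy demonstration (my own construction, not a real attack) that two images differing in essentially every pixel can share a fingerprint, because fine detail averages away when the image is shrunk. Colliding a production neural hash is harder, but the same intuition is why such attacks are considered plausible.

```python
# Toy collision: two images that differ in (almost) every pixel yet share
# the same dHash, because high-frequency noise averages away at 9x8.
# Assumes dhash() and matches() from the sketch in the comment above.
import random
from PIL import Image

def ramp_with_noise(seed: int, size: int = 256, amplitude: int = 20) -> Image.Image:
    # A strong left-to-right brightness ramp fixes every gradient sign;
    # the per-pixel noise rides on top and vanishes under downscaling.
    rng = random.Random(seed)
    img = Image.new("L", (size, size))
    img.putdata([
        max(0, min(255, x + rng.randint(-amplitude, amplitude)))
        for _ in range(size) for x in range(size)
    ])
    return img

a = ramp_with_noise(seed=1)
b = ramp_with_noise(seed=2)  # a completely different noise pattern
print(matches(dhash(a), dhash(b)))  # True: same fingerprint, different pixels
```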
So, if the rumors are true and every iPhone will check every photo against an opaque database of perceptual fingerprints, what safeguards exist (beyond "trust us" from the database keepers) to prevent the feature from being abused to suppress evidence and control the flow of information? And which organizations or governments will control the contents of the database? As always: who watches the watchers?