Comment by drvdevd

4 years ago

Since you worked on an actual contract catching these sorts of people, you are perhaps in a unique position to answer the question: will this sort of blanket surveillance technique, in general but also on iOS specifically, actually help catch them?

I have direct knowledge of examples where individuals were arrested and convicted of sharing CP online, and they were identified because a previous employer of mine ran PhotoDNA analysis on all user-uploaded images. So yeah, this type of thing can catch bad people. I’m still not convinced Apple doing this is a good thing, especially on private media content without a warrant, even though the technology can help catch criminals.
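
For context on how that kind of pipeline tends to work: PhotoDNA itself is proprietary, so the sketch below substitutes a simple difference hash (dHash) purely to illustrate the matching step. KNOWN_HASHES and the 10-bit distance threshold are made-up stand-ins for the vetted hash database and tuning a real provider would use.

    from PIL import Image

    def dhash(path: str, size: int = 8) -> int:
        """64-bit difference hash: brightness comparisons of adjacent pixels."""
        img = Image.open(path).convert("L").resize((size + 1, size))
        px = list(img.getdata())
        bits = 0
        for row in range(size):
            for col in range(size):
                left = px[row * (size + 1) + col]
                right = px[row * (size + 1) + col + 1]
                bits = (bits << 1) | int(left > right)
        return bits

    def hamming(a: int, b: int) -> int:
        return bin(a ^ b).count("1")

    # Hypothetical database of hashes of previously reviewed, vetted images.
    KNOWN_HASHES: set[int] = set()

    def flag_upload(path: str, max_distance: int = 10) -> bool:
        """Queue an upload for human review if its hash is near a known hash."""
        h = dhash(path)
        return any(hamming(h, known) <= max_distance for known in KNOWN_HASHES)

The design consequence worth noting: a system like this only finds near-duplicates of images already vetted into the database; it does not judge novel photos.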

  • Now I’m afraid: I have two young children, both under 5 years old. I have occasionally taken pictures of them naked, showing some bumps on the skin or a mosquito bite, and sent them to my wife over WhatsApp so we could decide whether we need to take them to a doctor. Do I have to fear now that I will be marked as distributing CP?

    • It’s not just you. I have pictures of my kids playing in the bath. No genitals are in shot; it’s just kids innocently playing with bubbles. The photos aren’t even shared, but they’d still get scanned by this tool.

      This kind of thing isn’t even unusual either. I know my parents have pictures of myself and my siblings playing in the bath (obviously taken on film rather than digital photography) and I know friends have pictures of their kids too.

      While the difference between innocent images and something explicit is easy for a human to identify, I’m not sure I’d trust AI to understand that nuance.

    • I don't have knowledge of how Apple is using this, but based on what I know about how it's used at Google, this would flag previously reviewed images. That wouldn't include your family photos; the matches are generally hash-type matches of images circulating online, and the images would need to depict actual abuse of a child to be CSAM.

    • You would only be flagged if the photos (or rather their hashes) were added to the database as part of some investigation, right? So you would only have to fear for your criminal record in the event that an actual criminal got hold of your (indecent in their hands!) photographs. In which eventuality you might be glad, relatively speaking, that they'd been discovered and arrested etc., assuming your good name could be cleared.

      Just playing devil's advocate; my gut (and, I think, even my considered) reaction is in alignment with just about the whole tech industry's: it's overreach (if they're not public images).

    • That is not how Apple's feature works; there is no way for it to flag those images. (A sketch of the threshold mechanism in Apple's published design follows after these replies.)

    • Legally you are a producer of CP and strictly liable.

      Your intent when producing the image is irrelevant.
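
    To make the earlier reply concrete, here is a toy model of the thresholding step in Apple's published design. This is a minimal sketch only: the real protocol uses on-device NeuralHash matching, private set intersection, and threshold secret sharing, and every upload carries a voucher so the server cannot even count matches below the threshold. The class and names here are illustrative, not Apple's API.

        from dataclasses import dataclass, field

        # Apple's published design put the review threshold on the order of
        # 30 matched known images; the exact figure here is illustrative.
        THRESHOLD = 30

        @dataclass
        class SafetyVoucherStore:
            """Toy stand-in for the server-side voucher accumulator."""
            matched_vouchers: list[bytes] = field(default_factory=list)

            def submit(self, matched_known_hash: bool, payload: bytes) -> None:
                # Toy simplification: non-matching vouchers are dropped here.
                # In the real protocol the server receives a voucher for every
                # photo and cryptographically cannot tell which ones matched.
                if matched_known_hash:
                    self.matched_vouchers.append(payload)

            def reviewable(self) -> list[bytes]:
                # Below the threshold nothing is decryptable, so an isolated
                # false positive can never reach a human reviewer.
                if len(self.matched_vouchers) >= THRESHOLD:
                    return self.matched_vouchers
                return []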

  • Look at all the recent findings that have come to light regarding ShotSpotter abuse by law enforcement. [1] These systems, along with other image- and object-recognition projects, are rife with false positives, bias, and garbage-in-garbage-out. They should in no way be considered trustworthy for criminal accusations, let alone arrests.

    As mentioned in the Twitter thread, how do image hashing and recognition tools such as PhotoDNA handle adversarial attacks? [2][3] (A toy illustration follows after the link below.)

    [1] https://towardsdatascience.com/black-box-attacks-on-perceptu...
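
    On the evasion side of that question, the toy below reuses the dHash stand-in from the sketch further up. Published attacks on PhotoDNA-style and NeuralHash-style hashes use gradient methods; the brute-force pixel tweaking here is only meant to show how little an image has to change before a distance-threshold match quietly fails. The 10-bit threshold is hypothetical.

        from PIL import Image

        def dhash(img: Image.Image, size: int = 8) -> int:
            """64-bit difference hash: brightness comparisons of adjacent pixels."""
            g = img.convert("L").resize((size + 1, size))
            px = list(g.getdata())
            bits = 0
            for r in range(size):
                for c in range(size):
                    bits = (bits << 1) | int(px[r * (size + 1) + c] > px[r * (size + 1) + c + 1])
            return bits

        def hamming(a: int, b: int) -> int:
            return bin(a ^ b).count("1")

        def evade(path: str, threshold: int = 10) -> Image.Image:
            """Cumulatively brighten pixels until the hash drifts past the threshold.

            Deliberately naive and slow: it rehashes after every single-pixel edit.
            """
            img = Image.open(path).convert("RGB")
            original = dhash(img)
            for x in range(img.width):
                for y in range(img.height):
                    r, g, b = img.getpixel((x, y))
                    img.putpixel((x, y), (min(r + 40, 255), g, b))
                    if hamming(dhash(img), original) > threshold:
                        return img  # no longer matches the original's hash
            return img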

Just as being banned from one social media platform for bad behavior pushes people to a different social media platform, this might very well push exactly the wrong sort of people from iOS to Android.

If Android then implements something similar, those users have the option to simply run different software, since Android lets you run whatever you want so long as you sign the waiver.

"You're using Android?! What do you have to hide?" -- Apple ad in 2030, possibly

I'm the person you're responding to, and I think so? My contract was on data that wasn't surveilled; it was willingly supplied in bad faith: fake names, etc. And there was cause / outside evidence to look into it. I can't really go into more detail than that, but it wasn't for an intelligence agency. It was for another party that wanted to hand something over to the police after they found out what was happening.

  • I see. I was responding to you, yes. And in this case I was more curious about your opinion - based on your previous knowledge - on the viability of Apple’s technology here, rather than the specific details of your work.

    In my (uninformed) opinion, this looks like more of a bad-faith move on Apple's part that will maybe catch some bad actors but will be a net harmful result for Apple's users and society, as expressed in the Twitter thread.

    Others who responded here, though, also seem to think it’ll be a viable technique.