
Comment by jsjohnst

4 years ago

I have direct knowledge of cases where individuals were arrested and convicted of sharing CP online; they were identified because a previous employer of mine used PhotoDNA analysis on all user-uploaded images. So yeah, this type of thing can catch bad people. I’m still not convinced Apple doing this is a good thing, especially on private media content without a warrant, even though the technology can help catch criminals.

Now I’m afraid. I have two young children under 5 years old. I have occasionally taken pictures of them naked, showing some bumps on the skin or a mosquito bite, and sent them to my wife over WhatsApp to look at and decide whether we need to take them to a doctor. Do I have to fear now that I will be marked as distributing CP?

  • It’s not just you. I have pictures of my kids playing in the bath. No genitals are in shot and it’s just kids innocently playing with bubbles. The photos aren’t even shared but they’d still get scanned by this tool.

    This kind of thing isn’t even unusual either. I know my parents have pictures of myself and my siblings playing in the bath (obviously taken on film rather than digital photography) and I know friends have pictures of their kids too.

    While the difference between innocent images and something explicit is easy for a human to identify, I’m not sure I’d trust AI to understand that nuance.

    • > No genitals are in shot

      That you even have to consider sexual interpretations of your BABY'S GENITALS is an affront to me. I have pictures of my baby completely naked, because it is, and I stress this, A BABY. They play naked all the time, it's completely normal.


    • > While the difference between innocent images and something explicit is easy for a human to identify, I’m not sure I’d trust AI to understand that nuance.

      In this case it’s not AI that’s understanding the nuance; it’s the authorities who identify the exact pictures they want to track, and then this tool lets them identify which phones/accounts have those photos (or, presumably, took them). If ‘AI’ is used here, it is to detect whether one photo contains all or part of another photo, rather than to determine whether the photo is abusive or not. A rough sketch of that kind of hash matching is shown below.

      Although there is a legitimate slippery slope argument to be had here.

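      A minimal sketch of what that matching could look like, using a simple difference hash (dHash) rather than Apple’s NeuralHash or Microsoft’s PhotoDNA (whose internals aren’t public), and with made-up hash values standing in for the known-image list:

      ```python
      # Illustrative only: hash the upload, then compare against hashes of
      # previously identified images; a photo is flagged only if it is a
      # near-duplicate of one of those specific images.
      from PIL import Image

      def dhash(path, size=8):
          """Return a 64-bit difference hash of the image at `path`."""
          img = Image.open(path).convert("L").resize((size + 1, size))
          pixels = list(img.getdata())
          bits = 0
          for row in range(size):
              for col in range(size):
                  left = pixels[row * (size + 1) + col]
                  right = pixels[row * (size + 1) + col + 1]
                  bits = (bits << 1) | (1 if left > right else 0)
          return bits

      def hamming(a, b):
          return bin(a ^ b).count("1")

      # Hypothetical hashes of specific, previously identified images.
      KNOWN_HASHES = {0x8F3C21A055D10B7E}

      def is_flagged(path, threshold=5):
          h = dhash(path)
          return any(hamming(h, known) <= threshold for known in KNOWN_HASHES)
      ```

      A novel family photo would only trip something like this if its hash happened to land within the threshold of a known image’s hash, which is the false-positive concern raised elsewhere in the thread.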

    • > While the difference between innocent images and something explicit is easy for a human to identify, I’m not sure I’d trust AI to understand that nuance.

      I recall a story several years ago where someone was getting film developed at a local drugstore, and the employee reported them for CP because of bath photos. This was definitely a thing with normal, everyday humans before computers.

  • I don't have knowledge of how Apple is using this, but based on what I know about how it's used at Google this would be flagging previously reviewed images. That wouldn't include your family photos, but are generally hash-type matches of images circulating online. The images would need to depict actual abuse of a child to be CSAM.

  • You would only be flagged if the photos (or rather their hashes) were added as part of some investigation, right? So you only have to fear for your criminal record in the event that an actual criminal gets hold of your photographs, which are indecent only in their hands. In that eventuality you might even be somewhat glad (relative to them never being found, though they were still leaked) that the criminal had been discovered and arrested, assuming your good name could be cleared.

    Just playing devil’s advocate; my gut reaction (and, I think, even my considered one) is in alignment with surely just about the whole tech industry: it’s overreach (if they’re not public images).

  • That is not how Apple’s feature works; there is no way for it to flag those images.

  • Legally you are a producer of CP and strictly liable.

    Your intent when producing the image is irrelevant.

Look at all the recent findings that have come to light regarding ShotSpotter law-enforcement abuse. [1] These systems, along with other image- and object-recognition projects, are rife with false positives, bias, and garbage-in, garbage-out. They should in no way be considered trustworthy for criminal accusations, let alone arrests.

As mentioned in the Twitter thread, how do image-hashing and recognition tools such as PhotoDNA handle adversarial attacks? [2][3]
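As a toy illustration of what such an attack targets (assuming Python with Pillow and NumPy, a trivial average hash rather than PhotoDNA, and a hypothetical local file flagged.jpg): an attacker wants either to perturb a flagged image until its hash falls outside the match threshold (evasion), or to craft an innocent-looking image whose hash collides with a flagged one.

```python
# Illustrative only: compares a perceptual hash of an image with the hash
# of a slightly perturbed copy. A robust hash barely moves under small
# random perturbations, which is why published attacks use optimization
# rather than noise. "flagged.jpg" is a hypothetical placeholder file.
import numpy as np
from PIL import Image

def ahash(img, size=8):
    """64-bit average hash: 1 where a pixel is brighter than the mean."""
    small = np.asarray(img.convert("L").resize((size, size)), dtype=float)
    return (small > small.mean()).flatten()

def hamming(a, b):
    return int(np.count_nonzero(a != b))

original = Image.open("flagged.jpg").convert("RGB")
noise = np.random.randint(-3, 4, (original.height, original.width, 3))
perturbed = Image.fromarray(
    np.clip(np.asarray(original, dtype=int) + noise, 0, 255).astype(np.uint8)
)

print("Hamming distance after perturbation:",
      hamming(ahash(original), ahash(perturbed)))
```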

[1] https://towardsdatascience.com/black-box-attacks-on-perceptu...