Comment by disiplus

4 years ago

Now I'm afraid. I have two young children under 5 years old. I have occasionally taken pictures of them naked, showing some bumps on the skin or a mosquito bite, and sent them to my wife over WhatsApp so we could look and decide whether we needed to take them to a doctor. Do I have to fear now that I will be marked as distributing CP?

It’s not just you. I have pictures of my kids playing in the bath. No genitals are in shot and it’s just kids innocently playing with bubbles. The photos aren’t even shared but they’d still get scanned by this tool.

This kind of thing isn’t even unusual either. I know my parents have pictures of myself and my siblings playing in the bath (obviously taken on film rather than digital photography) and I know friends have pictures of their kids too.

While the difference between innocent images and something explicit is easy for a human to identify, I’m not sure I’d trust AI to understand that nuance.

  • > No genitals are in shot

    That you even have to consider sexual interpretations of your BABY'S GENITALS is an affront to me. I have pictures of my baby completely naked, because it is, and I stress this, A BABY. They play naked all the time, it's completely normal.

    • Yeah that’s a fair point. The only reason I was careful was just in case those photos got leaked and taken out of context. Which is a bloody depressing thing to consider when innocently taking pictures of your own family :(

      2 replies →

    • Don't immediately take affront, take the best possible interpretation of the parent comment. This is about automatic scanning of people's photo libraries in the context of searching for child pornography, presumably through some kind of ML. It seems to me that the concern of the commenter is that if there are photos of their child's genitals that they'll be questioned about creating child pornography, not that they're squeamish about photographing their child's genitals. This happened in 1995 in the UK: https://www.independent.co.uk/news/julia-somerville-defends-...

      1 reply →

    • Indeed, I'm guessing this must be some cultural shift that was successfully implanted in some cultures because I too find the idea completely bonkers.

      2 replies →

  • > While the difference between innocent images and something explicit is easy for a human to identify, I’m not sure I’d trust AI to understand that nuance.

    In this case it’s not AI that’s understanding the nuance. The authorities identify the exact pictures they want to track, and this tool then lets them identify which phones/accounts have that photo (or presumably took it). If ‘AI’ is used here, it is to detect whether one photo contains all or part of another photo, rather than to determine whether the photo is abusive or not.

    Although there is a legitimate slippery slope argument to be had here.

    • Is there some way of verifying that the fingerprints in this database will never match sensitive documents on their way from a whistleblower to journalists, or anything else that isn't strictly illegal? How will this tech be repurposed over time once it's in place?

      3 replies →

  • > While the difference between innocent images and something explicit is easy for a human to identify, I’m not sure I’d trust AI to understand that nuance.

    I recall a story from several years ago where someone was getting film developed at a local drugstore, and an employee reported them for CP because of bath photos. This was definitely a thing before computers, with normal everyday humans doing the reporting.

I don't have knowledge of how Apple is using this, but based on what I know about how it's used at Google, this would flag previously reviewed images. That wouldn't include your family photos; the matches are generally hash-type matches of images already circulating online. The images would need to depict actual abuse of a child to be CSAM.

You would only be flagged if the photos (or rather their hashes) were added as part of some investigation, right? So you would only have to fear for your criminal record if an actual criminal got hold of your photographs, which are indecent only in their hands. In which eventuality you might be glad (relative to the photos having leaked with nobody caught) that they'd been discovered and arrested, assuming your good name could be cleared.

Just playing devil's advocate; my gut (and, I think, even my considered) reaction is in line with just about the whole tech industry's: it's overreach (if they're not public images).

That is not how Apple's feature works; there is no way for it to flag those images.

Legally you are a producer of CP and strictly liable.

Your intent when producing the image is irrelevant.