Comment by switch007
4 years ago
> It's interesting how many users in this thread are instinctively siding with the offenders in this, and not the victims.
That is infantile. Painting people who advocate for privacy as siding with offenders is highly insulting.
This is a situation where different people's privacy is in conflict. What's infantile is claiming sole ownership of privacy advocacy while so-whating the worst privacy violation imaginable, from the victims' perspective.
That's an interesting point. However, I'm not sure victim privacy is the reason for CSAM regulations. Rather, it's reducing the creation of CSAM by discouraging its exchange. For example, suppose that instead of deleting and reporting the images, Apple were to detect them and alter them with deepfake techniques so that the victim is no longer identifiable. That would protect the victim's privacy but wouldn't reduce the creation or exchange of the material. The fact that such a proposal is ridiculous suggests that privacy isn't the reason for regulation, and that reducing creation and exchange is.
There is an utterly perverse incentive to consider as well.
If the median shelf-life of abuse material is shortened, in that a given item can no longer be forwarded, viewed, or stored, what does that imply in a world where demand remains relatively stable? It implies that new material will be created to replace what is removed.
I despise the abusers for what they do, and the ecosystem they enable. But I also remember first having this argument more than ten years ago. If you, as a member of law enforcement or a child wellbeing charity, only flag the awful content and do nothing else about it, you are, in my mind, guilty of criminal negligence. The ability to add an entry to a database amounts to nothing more than saying, "at least nobody else will see that in the future". That does NOTHING to prevent the creation of more such material, and thus implicitly endorses the ongoing abuse and crimes against children.
Every one of these images and videos is a piece of evidence. Of a horrifying crime committed against a child or children.
The two sides proposed by your argument are logically valid opposites only if you can logically or mathematically guarantee that this technology will only ever be used to detect photos depicting blatant and obvious sexual abuse. Since you cannot, the entire argument is void. I'm not siding with abusers; I simply want arbitrary spies to stay the hell away from my computers.