Comment by danaris
3 months ago
If it's AI-generated, it is fundamentally not CSAM.
The reason we shifted to the terminology "CSAM", away from "child pornography", is specifically to indicate that it is Child Sexual Abuse Material: that is, an actual child was sexually abused to make it.
You can call it child porn if you really want, but do not call something that never involved the abuse of a real, living, flesh-and-blood child "CSAM". (Or "CSEM"—"Exploitation" rather than "Abuse"—which is used in some circles.) This includes drawings, CG animations, written descriptions, and videos where such acts are simulated with a consenting (or, tbh, non-consenting—it can be horrific, illegal, and unquestionably sexual assault without being CSAM) adult, as well as anything AI-generated.
These kinds of distinctions in terminology are important, and yes I will die on this hill.
This is a rather US-centric perspective. Under US law, there is a legal distinction between child pornography and child obscenity (see e.g. 18 U.S.C. § 1466A, “obscene visual representations of the sexual abuse of children”). The first is clearly CSAM; whether the second is, is open to dispute. But in Canada, the UK (and many other European countries), Australia, and New Zealand, that legal distinction doesn’t exist: both categories are subsumed under child pornography (or equivalent terms; Australian law now prefers the phrase “child abuse material”), and the authorities in those jurisdictions aren’t going to say some child pornography is CSAM and the rest isn’t. They are going to say it is all CSAM.
You are speaking of legality, not terminology.
The terminology is universal, as it is simply for talking about What People Are Doing, not What The Law Says.
Many people will—and do, and this is why I'm taking pains to point it out—confuse and conflate CSAM and child pornography, and also the terminology and the law. That doesn't change anything about what I've said.
Fundamentally, there are two basic reasons we outlaw or otherwise vilify these things:
1) Because the creation of CSAM involves the actual sexual abuse of actual children, which causes actual harm.
2) Because we think that child pornography is icky.
Only the former has a basis in fundamental and universal principles. The latter is, effectively, attempting to police a thoughtcrime. Lots of places do attempt to police thoughtcrime, in various different ways (though they rarely think of it as such); that does not change the fact that this is what they are doing.
> The terminology is universal
Is it? The US Department of Homeland Security defines "CSAM" as including generative AI images: https://www.dhs.gov/sites/default/files/2024-04/24_0408_k2p_...
So does the FBI: https://www.ic3.gov/PSA/2024/PSA240329
You want to define "CSAM" more narrowly, so as to exclude those images.
I'm not aware of any "official" definition, but arguably something hosted on a US federal government website is "more official" than the opinion of an HN commenter.
I think the one case where I'd disagree is when it's a depiction of an actual person - say, someone creates pornography (be it AI-generated, drawn, CG-animated, etc.) depicting a person who actually exists in the real world, and not just some invented character. That's certainly a case where it'd cross into actual CSAM/CSEM, because despite the child not physically being abused/exploited in the way depicted in the work, such a defamatory use of the child's likeness would constitute psychological abuse/exploitation.
That would only apply if the child is exposed to it, either directly or indirectly—which, if it's distributed publicly, is a possibility, though far from a certainty.
I would also say that there's enough difference between being sexually abused, in person, and having someone make a fake image of that, that it's at least questionable to apply the term.
I would further note that part of the reason to use the term CSAM is to emphasize that there is an actual child in actual danger that may need help.
> That would only apply if the child is exposed to it
Not just the child, but anyone associated with the child. Classmates sharing it around school and gossiping about it, overbearing parents punishing the child for something the child didn't even do, predators identifying the child and seeking to turn the fictional images into reality... there are a lot of plausible angles for a fictional representation of a real person to produce tangible psychological or even physical harm, just by the mere existence of that representation.
It's in a similar vein to so-called "revenge porn". Nobody was harmed in the creation of it (assuming that the persons in it consented to being in it), and yet the dissemination of it has clear negative impacts on those who did not consent to said dissemination.
That all being to say:
> I would further note that part of the reason to use the term CSAM is to emphasize that there is an actual child in actual danger that may need help.
Creating pornographic works depicting a child who actually exists in the real world does indeed put that actual child in actual danger. That's why it'd be appropriate to call such works "CSAM".
This is where my technical knowledge of genAI breaks down, but wouldn't an image generator be unable to produce such imagery unless honest-to-god CSAM were used in the training of it?
It's like the early demo for DALL-E where you could get "an armchair in the shape of an avocado", which presumably wasn't in the training set, but enough was in it to generalize the "armchair" and "avocado" concepts and combine them.
It's possible for the model to take disparate concepts and put them together. E.g., you can train a LoRA to teach Stable Diffusion what a cowboy hat is, then ask for Dracula in a cowboy hat. That combination probably doesn't exist in its training data, but it will give it to you just fine. I'm not about to try, but I would assume the same would apply for child pornography.
Not at all. If it's trained with images of children, and images of pornography, it should be pretty easy for it to combine the two.