Comment by jsheard
16 hours ago
It is known that the LAION dataset underpinning foundation models like Stable Diffusion contained at least a few thousand instances of real-life CSAM at one point. I think you would be hard-pressed to prove that any model trained on internet scrapes definitively wasn't trained on any CSAM whatsoever.
https://www.theverge.com/2023/12/20/24009418/generative-ai-i...
> I think you would be hard-pressed to prove that any model trained on internet scrapes definitively wasn't trained on any CSAM whatsoever.
I'd be hard-pressed to prove that you definitely hadn't killed anybody ever.
Legally, if it's asserted that these images are criminal because they are the product of an LLM trained on sources that contained CSAM, then the requirement would be to prove that assertion.
With text and speech you could prompt the model to exactly reproduce a Sarah Silverman monologue and assert that this proves her content was used in the training set, etc.
Here the defense would ask the prosecution to demonstrate how to extract a copy of original CSAM.
But your point is well taken: it's likely that most image generation programs of this nature have been fed at least one image that was borderline jailbait, and at least one that was well below the line.
Framing it that way is essentially a get-out-of-jail-free card: anyone caught with CSAM can claim it was AI-generated by a "clean" model, and how would the prosecution ever be able to prove that it wasn't?
I get where you are coming from but it doesn't seem actionable in any way that doesn't effectively legalize CSAM possession, so I think courts will have no choice but to put the burden of proof on the accused. If you play with fire then you'd better have the receipts.
This seems like a long way of saying “guilty until proven innocent”.
> Legally, if it's asserted that these images are criminal because they are the product of an LLM trained on sources that contained CSAM, then the requirement would be to prove that assertion.
Legally, possession of CSAM is against the law because there is an assumption that possession proves contribution to market demand, with the understanding that demand incentivizes production of supply, meaning that where there is demand, children will be harmed again to produce more content to satisfy it. In other words, the intent is to stop future harm. This is why people have been prosecuted for things like suggestive cartoons that have no real-life events behind them. It is not illegal on the grounds of past events. The actual abuse is illegal on its own standing.
The provenance of the imagery is irrelevant. What you would need to prove is that your desire to have such imagery won't stimulate you or others to create new content with real people. If you could somehow prove that LLM content will satisfy all future thirst, problem solved! That would be world-changing.
I'm somewhat sympathetic to that argument. However, it doesn't stop there.
By the same logic, violent video games prove contribution to market demand for FPS-style depictions of mass shootings or carjackings, so can/should we ban Call of Duty and Grand Theft Auto now?
(Note that the "market demand" argument is subtly different from the argument that the games directly cause people to become more violent, either in general or by encouraging specific copycat violence. Studies on [lack of] direct violence causation are weak and disputed.)
Then all image generation models should be considered inherently harmful, no?
But this is the dream for the supposed protectors of children. You see, just because child porn production stops doesn't mean those children disappear. Usually, of course, they go into youth services (in practice most don't even make it to the front door, and run away to resume the abuse, but let's ignore that). That is how the situation of those children changes when CSAM is prosecuted: from the situation they were in to whatever situation exists in youth services. In other words, youth services is the upper limit on how much the police or anyone else CAN help those children.
So you'd think they would make youth services a good place for a child to be, right? After all, if that situation were only marginally better than child prostitution, there would be no point in finding CSAM. Or at least, the point would not be to protect children, since that is simply not what they're doing.
So how is youth services doing these days? Well... NOT good. Children regularly run away from youth services to start doing child porn (i.e. living off an OnlyFans account). There's a Netflix series on the subject ("Young and Locked Up") which eventually, reluctantly shows the real problem, the outcome (i.e. prison or street poverty).
In other words, your argument doesn't really apply, since the goal is not to improve children's well-being. If that were the goal, these programs would do entirely different things.
Goals differ. There are people who go into government with the express purpose of "moralizing" and arresting people for offenses. Obviously, to them it's the arresting part that's important, not how serious the offense was, and CERTAINLY not whether their actions actually help people. And then there are people who simply want a well-paying long-term job where they don't accomplish much. Ironically, these are much less damaging, but they still seek to justify their own existence.
Both groups really, really, really want ALL image generation models to be considered inherently harmful, as you say.
I think you'd be hard-pressed to prove that a few thousand images (out of over 5 billion in the case of that particular data set) had any meaningful effect on the final model capabilities.