Comment by jsheard
5 hours ago
It is known that the LAION dataset underpinning foundation models like Stable Diffusion contained at least a few thousand instances of real-life CSAM at one point. I think you would be hard-pressed to prove that any model trained on internet scrapes definitively wasn't trained on any CSAM whatsoever.
https://www.theverge.com/2023/12/20/24009418/generative-ai-i...
> I think you would be hard-pressed to prove that any model trained on internet scrapes definitively wasn't trained on any CSAM whatsoever.
I'd be hard-pressed to prove that you definitely hadn't killed anybody ever.
Legally, if it's asserted that these images are criminal because they are the product of a model trained on sources that contained CSAM, then the requirement would be to prove that assertion.
With text and speech, you could prompt the model to reproduce a Sarah Silverman monologue verbatim and assert that this proves her content was used in the training set, etc.
Here the defense would ask the prosecution to demonstrate how to extract a copy of original CSAM.
But your point is well taken: it's likely that most image generation programs of this nature have been fed at least one image that was borderline jailbait, and likely at least one that was well below the line.
> Legally, if it's asserted that these images are criminal because they are the product of a model trained on sources that contained CSAM, then the requirement would be to prove that assertion.
Legally, possession of CSAM is against the law because there is an assumption that possession proves contribution to market demand, with an understanding that demand incentivizes production of supply, meaning that as long as demand exists, children will be harmed again to produce more content to satisfy it. In other words, the intent is to stop future harm. This is why people have been prosecuted for things like suggestive cartoons that have no real-life events behind them. It is not illegal on the grounds of past events; the actual abuse is illegal on its own standing.
The provenance of the imagery is irrelevant. What you would need to prove is that your desire to have such imagery won't drive you or others to create new content involving real people. If you could somehow prove that AI-generated content will satisfy all future demand, problem solved! That would be world-changing.
I'm somewhat sympathetic to that argument. However, it doesn't stop there.
By the same logic, violent video games contribute to market demand for FPS-style depictions of mass shootings or carjackings, so can/should we ban Call of Duty and Grand Theft Auto now?
(Note that the "market demand" argument is subtly different from the argument that the games directly cause people to become more violent, either in general or by encouraging specific copycat violence. Studies on [lack of] direct violence causation are weak and disputed.)
Framing it in that way is essentially a get-out-of-jail-free card: anyone caught with CSAM can claim it was generated by a "clean" model, and how would the prosecution ever be able to prove that it wasn't?
I get where you are coming from but it doesn't seem actionable in any way that doesn't effectively legalize CSAM possession, so I think courts will have no choice but to put the burden of proof on the accused. If you play with fire then you'd better have the receipts.
This seems like a long way of saying “guilty until proven innocent”.
Then all image generation models should be considered inherently harmful, no?
I think you'd be hard-pressed to prove that a few thousand images, out of over 5 billion in the case of that particular data set (well under a thousandth of a percent), had any meaningful effect on the final model's capabilities.