Comment by Hizonner

4 hours ago

The word "revictimizing" seems like an even bigger stretch. Assuming the output images don't actually look like them personally (and they won't), how exactly are they more victimized than anybody else in the training data? Those other people's likenesses are also "being used to generate AI CSAM images into perpetuity"... in a sort of attenuated way that's hard to even find if you're not desperately trying to come up with something.

The cold fact is that people want to outlaw this stuff because they find it icky. Since they know it's not socially acceptable (quite yet) to say that, they tend to cast about wildly until they find something to say that sort of sounds like somebody is actually harmed. They don't think critically about it once they land on a "justification". You're not supposed to think critically about it either.