A lot of people now struggle to detect which images are AI generated, and end up inferring reality from illusions.
To an extent this was already the case with many other things, including material that was expressly labelled as fiction. But recall the old line about fooling all of the people some of the time and some of the people all of the time: it is now easier to fool more people all of the time, and to fool all of the people an increasing fraction of the time.
This isn't limited to fake pics of kids, but kids are weak and struggle to defend themselves, and in this context the tools faking them seem to me likely to increase rates of harm against them.
How about adding a caption saying "the parts of this picture marked with a red outline have been generated by AI"?
Is that caption going to be written as text in the image itself? People will just collect/share the image without a caption that was part of the webpage/PDF the image was originally embedded in.
Dunno. https://www.explainxkcd.com/wiki/index.php/598:_Porn
in this context the tools faking them seems to me likely to increase rates of harm against them.
Why does it seem this way to you?
The history of age of consent laws (including Pitcairn Island); the observed results of sexualised deepfakes made in classrooms by other students; and the observation that, according to sex therapists, "fetishisation" is the development of a sexual response, and its conversion into a requirement, over the course of repeated exposure, rather than any innate tendency a person is born with.
I'm on your team here, no one is being hurt.
Now, the implications of letting people generate pictures of children... Do I need to say more? Even then, I'm not sure of my opinion on this. No one is getting hurt by the generation of the images, but they "might could maybe possibly" cause people to act on things in real life.
When I was a teenager I used to make this argument for the legalization of drugs: it wasn't the drugs that caused people to steal and murder, it was the human.
Now that I'm older, I can imagine consequences of a few bad apples pointing to AI as the starting point.
Yes, why are violent movies allowed if this is the argument?