Comment by chabes

10 hours ago

Obtaining the consent of portrayed parties is impossible.

What is the solution there?

Shouldn’t it be possible for AI to detect that a request is asking it to portray a real person? That seems like an almost trivial task for a good model. I’m sure something will slip through every now and then, but I bet one could make it very close to 100% effective.

  • Consider the difference between "Generate an image of Emma Watson", "Generate an image of Hermione", and "Generate an image of a female Hogwarts witch and student". We're getting less and less specific, but those are all likely to get you an image of Emma Watson.

    Your filter has to pick out that, while they did not ask for a specific person, the practical result is likely to be the same. That's going to be tough to get near perfect.

  • I can see how it'd be trivial to block known celebrities, but how do you handle everyone else?

    • I mean a realistic take is to simply not use source images containing people at all.

      AIs have been able to invent fictional people for longer than they've been able to modify existing images.

  • AI development has become an excuse for ignoring consent. Of course it's possible to filter out requests. But culturally with X, it's not remotely likely, unless compelled by regulation with teeth.
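The escalating difficulty described in the thread can be sketched with a toy filter. A plain name blocklist catches direct requests, an alias map catches character names that predictably resolve to a real actor, but neither catches a purely descriptive prompt. All names, mappings, and function names here are illustrative examples, not a real moderation system:

```python
# Toy prompt filter showing why a plain name blocklist is not enough.
# BLOCKED_NAMES and ALIASES are illustrative stand-ins for a real list.

BLOCKED_NAMES = {"emma watson"}

# Indirect references that predictably resolve to a real person.
ALIASES = {
    "hermione": "emma watson",
    "hermione granger": "emma watson",
}

def naive_filter(prompt: str) -> bool:
    """Block only prompts that name a real person directly."""
    p = prompt.lower()
    return any(name in p for name in BLOCKED_NAMES)

def alias_aware_filter(prompt: str) -> bool:
    """Also block indirect references via a character-to-person alias map."""
    p = prompt.lower()
    return naive_filter(prompt) or any(alias in p for alias in ALIASES)

print(naive_filter("Generate an image of Emma Watson"))     # True
print(naive_filter("Generate an image of Hermione"))        # False: slips through
print(alias_aware_filter("Generate an image of Hermione"))  # True
```

Even the alias-aware version passes "a female Hogwarts witch and student" straight through, since nothing in the prompt names a person or a character; closing that gap requires reasoning about what the model's output will *look like*, which is exactly why near-100% filtering is hard.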

You can just forbid using existing images as a source and require that subjects be described purely by text.