Comment by shevy-java

> What customers are clamoring for that feature? If the law cares, the law has tools to inquire.

How can they distinguish real people being exploited from AI models autogenerating everything?

I mean, right now this is possible, largely because a lot of AI videos still have shortcomings. But imagine five years from now ...

> How can they distinguish real people being exploited from AI models autogenerating everything?

Watermarking by compliant models doesn't help much here, because (1) models without watermarking exist and will continue to be developed (especially if the absence of a watermark is treated as a sign of authenticity), so you cannot rely on AI fakery being watermarked, and (2) AI models can be used for video-to-video generation without changing much of the source, so you can't rely on something accurately watermarked as "AI-generated" not being based on actual exploitation.

Now, if the watermarking includes provenance information, and you require certain types of content to be watermarked not just as AI-generated via a known watermarking system, but by a registered AI provider with regulated input-data safety guardrails and/or retention requirements, and be traceable to a registered user, and...

Well, then it does something when it is present, largely by creating a new content-gatekeeping cartel.
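
For concreteness, here is a minimal sketch of what such a provenance-bearing watermark payload could look like as a signed manifest (Python with the `cryptography` library; every field name and the registry scheme are hypothetical, invented for illustration, not taken from any real standard):

```python
# Minimal sketch of a signed provenance manifest under an assumed
# provider-registry scheme. All field names below are hypothetical.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# A registered provider would hold a long-lived signing key; the registry
# (or platforms) would hold the matching public key.
provider_key = Ed25519PrivateKey.generate()

manifest = {
    "provider_id": "example-registered-provider",  # hypothetical registry ID
    "user_id": "user-1234",                        # traceable registered user
    "model": "example-video-model-v2",             # hypothetical model name
    "source": "text-prompt",                       # vs. "video-to-video"
    "created": "2025-01-01T00:00:00Z",
}

# Canonicalize and sign the manifest so any tampering invalidates it.
payload = json.dumps(manifest, sort_keys=True).encode()
signature = provider_key.sign(payload)

# A verifier checks the signature against the provider's registered public
# key before trusting any of the provenance claims.
try:
    provider_key.public_key().verify(signature, payload)
    print("manifest verified:", manifest["provider_id"])
except InvalidSignature:
    print("manifest rejected")
```

The gatekeeping falls out of the registry: only manifests signed by an enrolled provider verify, which is exactly what locks out unregistered models.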

> How can they distinguish real people being exploited from AI models autogenerating everything?

The people who care don't consume content that even plausibly looks like real people being exploited. They wouldn't consume it even if you pinky-promised that the exploited-looking people are not real. Even if you digitally signed that promise.

The people who don't care don't care.