You can trivially enforce that at the AI provider level, which covers 99% of the problem the law is designed to address.
Of course it doesn't cover the issue of foreign state psyop operations but the fact that enforcing laws against organized crime and adversary state actors is hard isn't specific to AI.
Are you not aware of open-weights models and local generation? I think the vast majority of deepfake content is being genned in basements on RTX cards, not on public providers. People already have all this content, and have archives of it, and can run it airgapped. Cat is out of bag.
You don't have to prove anything? You just have to mark the outputs of your slop generator appropriately. "Proving" one way or another is their problem when it comes to enforcement.
How would you prove that something was generated by AI yet did not include a watermark?
You generate it with that particular AI and look for the watermark :/
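For anyone curious what "look for the watermark" means in practice: statistical watermarking schemes (e.g. the green-list approach) bias generation toward a pseudo-random subset of tokens, and detection just measures how often that bias shows up. A toy sketch, with an invented `green_list` partition standing in for a real scheme:

```python
import hashlib

def green_list(prev_token: str, vocab: list[str]) -> set[str]:
    # Toy stand-in for a watermark key: deterministically pick roughly
    # half the vocabulary as "green" based on the previous token.
    out = set()
    for w in vocab:
        h = hashlib.sha256((prev_token + "|" + w).encode()).digest()
        if h[0] % 2 == 0:
            out.add(w)
    return out

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    # Detection: count how often each token lands in the green list
    # keyed on its predecessor. Watermarked text scores well above 0.5;
    # ordinary text hovers around 0.5.
    hits = 0
    for prev, cur in zip(tokens, tokens[1:]):
        if cur in green_list(prev, vocab):
            hits += 1
    return hits / max(len(tokens) - 1, 1)
```

The catch the thread is circling: this only works if the generator cooperated at generation time, which a local open-weights model has no reason to do.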
I was talking about enforcement, not the user...