
Comment by hellojesus

2 years ago

How do you expect to regulate this and prove generative models were used? What stops a company from purchasing art from a third party that supplies a photo generated from a prompt, where that company isn't US-based?

> How do you expect to regulate this and prove generative models were used?

Disseminating or creating copies of content derived from generative models without attribution would open that actor up to some form of liability. There's no need for onerous regulation here.

The burden of proof should probably lie upon whatever party would initiate legal action. I am not a lawyer, and won't speculate further on how that looks. The broad existing (and severely flawed!) example of copyright legislation seems instructive.

All I'll opine is that the main goal here isn't really to prevent Jonny Internet from firing up llama to create a reddit bot. It's to incentivize large commercial and political interests to disclose their usage of generative AI. Similar to current copyright law, the fear of legal action should be sufficient to keep these parties compliant if the law is crafted properly.

> What stops a company from purchasing art from a third party that supplies a photo generated from a prompt, where that company isn't US-based?

Not really sure why the origin of the company (or companies) in question is relevant here. If they distribute generative content without attribution, they should be liable. Same as if said "third party" gave them copyright-violating content.

EDIT: I'll take this as an opportunity to say that the devil is in the details and some really crappy legislation could arise here. But I'm not convinced by the "It's not possible!" and "Where's the line!?" objections. This clearly is doable, and we have similar legal frameworks in place already. My only additional note is that I'd much prefer we focus on problems and questions like this, instead of the legislative capture path we are currently barrelling down.

  • > It's to incentivize large commercial and political interests to disclose their usage of generative AI.

    You would be okay allowing small businesses an exemption from this regulation but not large businesses? Fine. As a large business I'll have a mini subsidiary operate the models and exempt myself from the regulation.

    I still fail to see what benefit this holds. Why do you care if something is generative? We already have laws against libel and against false advertising.

    • > You would be okay allowing small businesses an exemption from this regulation but not large businesses?

      That's not what I said. Small businesses are not exempt from copyright laws either. They typically don't need to dedicate the same resources to compliance as large entities though, and this feels fair to me.

      > I still fail to see what benefit this holds.

      I have found recent arguments by Harari (and others) that generative AI is particularly problematic for discourse and democracy to be persuasive [1][2]. Generative content has the potential, long-term, to be as disruptive as the printing press. Step changes in technological capabilities require high levels of scrutiny, and often new legislative regimes.

      EDIT: It is no coincidence that I see parallels in the current debate over generative AI in education. These tools are fine to use, but their use must be disclosed so the work can be understood in context. I desire the ability to filter the content I consume on "generated by AI". The value of that, to me, is self-evident.

      1. https://www.economist.com/by-invitation/2023/04/28/yuval-noa...
      2. https://www.nytimes.com/2023/03/24/opinion/yuval-harari-ai-c...

  • This is a ridiculous proposal, and obviously not doable. Such a law can't be written in a way that complies with First Amendment protections and the vagueness doctrine.

    It's a silly thing to want anyway. What matters is whether the content is legal or not; the tool used is irrelevant. Centuries ago some authoritarians raised similar concerns over printing presses.

    And copyright is an entirely separate issue.

    • > Such a law can't be written in a way that complies with First Amendment protections and the vagueness doctrine.

      I disagree. What is vague about "generative content must be disclosed"?

      What are the First Amendment issues? Attribution clearly can be required for some forms of speech; it's why every political ad on TV carries an attribution blurb.

      > It's a silly thing to want anyway. What matters is whether the content is legal or not; the tool used is irrelevant.

      Again, I disagree. The line between tools and actors will only blur further in the future without action.

      > Centuries ago some authoritarians raised similar concerns over printing presses.

      I'm pretty clearly not advocating for a "smash the presses" approach here.

      > And copyright is an entirely separate issue.

      It is related, and a model worth considering, as it arose out of the last technical breakthrough in this area (the printing press and the mass copying of the written word).

  • > The burden of proof should probably lie upon whatever party would initiate legal action. I am not a lawyer, and won't speculate further on how that looks.

    You're proposing a law. How does it work?

    Who even initiates the proceeding? For copyright this is generally the owner of the copyrighted work alleged to be infringed. For AI-generated works that isn't any specific party, so it would presumably be the government.

    But how is the government, or anyone, supposed to prove this? The reason you want it to be labeled is for the cases where you can't tell. If you could tell, you wouldn't need the label, and anyone who wants to avoid labeling could get away with it precisely in the cases where it's hard to prove, which are the only cases where the label would have any value.

    • > Who even initiates the proceeding? For copyright this is generally the owner of the copyrighted work alleged to be infringed. For AI-generated works that isn't any specific party, so it would presumably be the government.

      This is the most obvious problem, yes. Consumer protection agencies seem like the natural candidates to initiate proceedings. I have already admitted I am not a lawyer, but this really does not seem like an intractable problem to me.

      > The reason you want it to be labeled is for the cases where you can't tell.

      This is actually _not_ the most important use case, to me. This functionality seems most useful in the near future when we will be inundated with generative content. In that future, the ability to filter actual human content from the sea of AI blather, or to have specific spaces that are human-only, seems quite valuable.

      > But how is the government, or anyone, supposed to prove this?

      Consumer protection agencies have broad investigative powers. If corporations or organizations are spamming out generative content without attribution, it doesn't seem particularly difficult to detect, prove, and sanction that.

      This kind of regulatory regime that falls more heavily on large (and financially resourceful) actors seems far preferable to the "register and thoroughly test advanced models" (aka regulatory capture) approach that is currently being rolled out.
