Comment by AnthonyMouse

2 years ago

> The burden of proof should probably lie upon whatever party would initiate legal action. I am not a lawyer, and won't speculate further on how that looks.

You're proposing a law. How does it work?

Who even initiates the proceeding? For copyright this is generally the owner of the copyrighted work alleged to be infringed. For AI-generated works that isn't any specific party, so it would presumably be the government.

But how is the government, or anyone, supposed to prove this? The reason you want it labeled is for the cases where you can't tell. If you could tell, you wouldn't need the label, and anyone who wants to avoid labeling could get away with it only in the cases where it's hard to prove, which are the only cases where the label would be of any value.

> Who even initiates the proceeding? For copyright this is generally the owner of the copyrighted work alleged to be infringed. For AI-generated works that isn't any specific party, so it would presumably be the government.

This is the most obvious problem, yes. Consumer protection agencies seem like the most obvious candidate. I have already admitted I am not a lawyer, but this really does not seem like an intractable problem to me.

> The reason you want it to be labeled is for the cases where you can't tell.

This is actually _not_ the most important use case, to me. This functionality seems most useful in the near future when we will be inundated with generative content. In that future, the ability to filter actual human content from the sea of AI blather, or to have specific spaces that are human-only, seems quite valuable.

> But how is the government, or anyone, supposed to prove this?

Consumer protection agencies have broad investigative powers. If corporations or organizations are spamming out generative content without attribution it doesn't seem particularly difficult to detect, prove, and sanction that.

This kind of regulatory regime that falls more heavily on large (and financially resourceful) actors seems far preferable to the "register and thoroughly test advanced models" (aka regulatory capture) approach that is currently being rolled out.

  • > This functionality seems most useful in the near future when we will be inundated with generative content. In that future, the ability to filter actual human content from the sea of AI blather, or to have specific spaces that are human-only, seems quite valuable.

    But then why do you need any new laws at all? We already have laws against false advertising and breach of contract. If you want to declare that a space is exclusively human-generated content, what stops you from doing this under the existing laws?

    > Consumer protection agencies have broad investigative powers. If corporations or organizations are spamming out generative content without attribution it doesn't seem particularly difficult to detect, prove, and sanction that.

    Companies already do this with human foreign workers in countries with cheap labor. The domestic company would show an invoice from a foreign contractor that may even employ some number of human workers, even if the bulk of the content is machine-generated. To prove it you would need some way of distinguishing machine-generated content, which, if you had it, would make the law irrelevant.

    > This kind of regulatory regime that falls more heavily on large (and financially resourceful) actors seems far preferable to the "register and thoroughly test advanced models" (aka regulatory capture) approach that is currently being rolled out.

    Doing nothing can be better than doing either of two things that are both worse than nothing.

    • > But then why do you need any new laws at all? We already have laws against false advertising and breach of contract.

      My preference would be for generative content to be disclosed as such. I am aware of no law that does this.

      Why did we pass the FFDCA for disclosures of what's in our food? Because the natural path that competition would lead us down would require no such disclosure, so false advertising laws would provide no protection. We (politically) decided it was in the public interest for such things to be known.

      It seems inevitable to me that without some sort of affirmative disclosure, generative AI will follow the same path. It'll just get mixed into everything we consume online, with no way for us to avoid it.

      > Companies already do this with human foreign workers in countries with cheap labor. The domestic company would show an invoice from a foreign contractor that may even employ some number of human workers, even if the bulk of the content is machine-generated.

      You are saying here that some companies would break the law and attempt various reputation-laundering schemes to circumvent it. That does seem likely; I am not as convinced as you that it would work well.

      > Doing nothing can be better than doing either of two things that are both worse than nothing.

      Agreed. However, I am not optimistic that doing nothing will be considered acceptable by the general public, especially once the effects of generative AI are felt in force.


> You're proposing a law. How does it work?

The same way law works today, or do you think this is the first time the law has had to deal with fuzziness?

  • "Someone else will figure that out" isn't a valid response when the question is whether something is any good, because to know if it's any good you need to know what it actually does. Retreating into "nothing is ever perfect" is just an excuse for doing something worse instead of something better because no one can be bothered, and it's how we get so many terrible laws.

    • You have so profoundly misinterpreted my comment that I question whether you actually read it.

      One of the best descriptions I've seen on HN is this.

      Too many technical people think of the law as executable code: if you can find a gap in it, you can get away with things on a technicality. That's not how the law works (spirit vs. letter).

      In truth, lots of things in the world aren't perfectly defined and the law deals with them just fine. One such example is the reasonable person standard.

      > As a legal fiction,[3] the "reasonable person" is not an average person or a typical person, leading to great difficulties in applying the concept in some criminal cases, especially in regard to the partial defence of provocation.[7] The standard also holds that each person owes a duty to behave as a reasonable person would under the same or similar circumstances.[8][9] While the specific circumstances of each case will require varying kinds of conduct and degrees of care, the reasonable person standard undergoes no variation itself.[10][11] The "reasonable person" construct can be found applied in many areas of the law. The standard performs a crucial role in determining negligence in both criminal law—that is, criminal negligence—and tort law.

      > The standard is also used in contract law,[12] to determine contractual intent, or (when there is a duty of care) whether there has been a breach of the standard of care. The intent of a party can be determined by examining the understanding of a reasonable person, after consideration is given to all relevant circumstances of the case including the negotiations, any practices the parties have established between themselves, usages and any subsequent conduct of the parties.[13]

      > The standard does not exist independently of other circumstances within a case that could affect an individual's judgement.

      Pay close attention to this part:

      > or (when there is a duty of care) whether there has been a breach of the standard of care.

      One could argue that because standard of care cannot ever be perfectly defined it cannot be regulated via law. One would be wrong, just as one would be wrong attempting to make that argument for why AI shouldn't be regulated.
