Comment by imiric

1 day ago

> The output of artists has copyright.

Copyright is a very messy and divisive topic. How exactly can an artist claim ownership of a thought or an image? It is often difficult to ascertain whether a piece of art infringes on the copyright of another. There are grey areas like "fair use", which complicate this further. In many cases copyright is also abused by holders to censor art that they don't like for a myriad of unrelated reasons. And there's the argument that copyright stunts innovation. There are entire art movements and music genres that wouldn't exist if copyright was strictly enforced on art.

> Artists shape the space in which they’re generating output.

Art created by humans is not entirely original. Artists are inspired by each other, they follow trends and movements, and often tiptoe the line between copyright infringement and inspiration. Groundbreaking artists are rare, and if we consider that machines can create a practically infinite number of permutations based on their source data, it's not unthinkable that they could also create art that humans consider unique and novel, if nothing else because we're not able to trace the output to all of its source inputs. Then again, those human groundbreaking artists are also inspired by others in ways we often can't perceive. Art is never created in a vacuum. "Good artists copy; great artists steal", etc.

So I guess my point is: it doesn't make sense to apply copyright to art, but there's nothing stopping us from doing the same for machine-generated art, if we wanted to make our laws even more insane. And machine-generated art can also set trends and shape the space they're generated in.

The thing is that technology advances far more rapidly than laws do. AI is raising many questions that we'll have to answer eventually, but it will take a long time to get there. And on that path it's worth rethinking traditional laws like copyright, and considering whether we can implement a new framework that's fair towards creators without the drawbacks of the current system.

Ambiguities are not a good argument against laws that still have positive outcomes.

There are very few laws that are not giant ambiguities. Where is the line between murder, self-defense and accident? There are no lines in reality.

(A law about spectrum use, or registered real estate borders, etc. can be clear. But a large amount of law isn’t.)

Something must change regarding copyright and AI model training.

But it doesn’t have to be the law, it could be technological. Perhaps some of both, but I wouldn’t rule out a technical way to avoid the implicit or explicit incorporation of copyrighted material into models yet.

  • > There are very few laws that are not giant ambiguities. Where is the line between murder, self-defense and accident? There are no lines in reality.

    These things are very well and precisely defined in just about every jurisdiction. The "ambiguities" arise from ascertaining the facts of the matter, and whether a given set of facts fits within a specific set of rules.

    > Something must change regarding copyright and AI model training.

    Yes, but this problem is not specific to AI, it is the question of what constitutes a derivative, and that is a rather subjective matter in the light of the good ol' axiom of "nothing is new under the sun".

    • > These things are very well and precisely defined in just about every jurisdiction.

      Yes, we have lots of wording attempting to be precise. And legal uses of terms are certainly more precise by definition and precedent than normal language.

      But ambiguities about facts are only half of it. Even when all the facts appear to be clear, human juries have to use their subjective human judgement to match what the law says, which may be clear in theory but is often fuzzy at the borders, against those facts. And reasonable people often differ on how the two line up in borderline cases.

      We resolve both types of ambiguities case-by-case by having a jury decide, which is not going to be consistent from jury to jury but it is the best system we have. Attorneys vetting prospective jurors are very much aware that the law comes down to humans interpreting human language and concepts, none of which are truly precise, unless we are talking about objective measures (like frequency band use).

      ---

      > it is the question of what constitutes a derivative

      Yes, the legal side can adapt.

      And the technical side can adapt too.

      The problem isn't that material was trained on, but that the resulting model facilitates reproducing individual works (or close variations), and repurposing individuals' unique styles.

      I.e. they violate fair use by using what they learn in a way that devalues others' creative efforts. Being exposed to copyrighted works available to the public is not the violation. (Even though the way training currently happens does produce models that violate fair use.)

      We need models that one way or another, stay within fair use once trained. Either by not training on copyrighted material, or by training on copyrighted material in a way that doesn't create models that facilitate specific reproduction and repurposing of creative works and styles.

      This has already been solved for simple data problems, where memorization of particular samples can be precluded by adding noise to a dataset. Important generalities are learned, but specific samples don't leave their mark.

      Obviously something more sophisticated would need to be done to preclude memorization of rich creative works and styles, but a lot of people are motivated to solve this problem.
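      The noise idea above can be sketched for simple numeric data. This is a hedged illustration of the general technique, not any particular library's API; the function name and parameters here are made up for the example. Adding independent Gaussian noise to every feature roughly preserves aggregate statistics (means, correlations) while blurring the exact values of any individual sample, which is the intuition behind differentially private data release:

      ```python
      import random

      def add_noise(dataset, sigma=0.1, seed=0):
          """Return a copy of `dataset` (a list of numeric feature vectors)
          with Gaussian noise added to every feature.

          Aggregate statistics are largely preserved, but no individual
          sample survives with its exact original values, making verbatim
          memorization of any one sample harder."""
          rng = random.Random(seed)
          return [[x + rng.gauss(0.0, sigma) for x in sample]
                  for sample in dataset]

      data = [[1.0, 2.0], [1.1, 2.1], [0.9, 1.9]]
      noisy = add_noise(data, sigma=0.05, seed=42)
      ```

      Real differential privacy calibrates the noise scale to a formal privacy budget; this sketch only shows the mechanism.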

>Art created by humans is not entirely original.

The catch here is that a human can use a single sample as input, but AI needs a torrent of training data. Also, when AI generates permutations of samples, do their statistics match the training data?

  • No human could use a single sample if it was literally the first piece of art they had ever seen.

    Humans have that torrent of training data baked in from years of lived experience. That’s why people who go to art school or otherwise study art are generally (not always of course) better artists.

    • I don't think the claim that the value of art school is simply more exposure to art holds water.

  • A skilled artist can imitate an art style or draw a specific object from a single reference. But becoming a skilled artist takes years of training. As a society we like to pretend some humans are randomly gifted with the ability to draw, but in reality it's 5% talent and 95% countless hours spent practising the craft. And if you count the years' worth of visual data the average human has experienced by the time they can recreate a van Gogh, then humans take orders of magnitude more training data than state-of-the-art ML models.

    • In the case of an ML model, either a very good description or that single reference could be added to the context.

  • Not without a torrent of pre-training data. The qualitative differences are rapidly becoming intangible ‘soul’ type things.