Comment by simonw
18 hours ago
The Lord of the Rings: The Return of the King back in 2003 used early AI VFX software MASSIVE to animate thousands of soldiers in battle: https://en.wikipedia.org/wiki/MASSIVE_(software) - I don't think that was controversial at the time.
According to that Wikipedia page, MASSIVE was used for Avengers: Endgame, so it's had about a 20-year run at this point.
The problem is not AI per se (which is just a mix of algorithms). The problem is that this new wave of AI is trained on proprietary content, and the owners/creators never gave permission in the first place.
If this AI worked without training, no one would say anything.
> If this AI worked without training, no one would say anything.
I don’t believe that for one second.
People are rightfully scared of professional and economic disruption. The "OMG, training!" objection is just a convenient bit of rhetoric to establish the moral high ground. If and when AIs appear that are entirely trained on public-domain and synthetic data, there will be some other moral argument.
Yeah I'm not interested in "art" created by a computer. A watercolor by a first-grader is more interesting.
Same goes for music. If you need AI and autotune, find another way to earn a living.
Yeah, it definitely is just a convenient argument for people who feel threatened. I find it hard to believe that the same internet that has so consistently disregarded copyright laws with such reckless abandon is now sincerely clutching its pearls about this.
People would still be griping about how it devalues the hard work artists have put in, how it "isn't real art", and all the rest. The only difference is the public at large would be telling them to put a sock in it, rather than having some sympathy because of deceptive articles about how big tech is stealing from hardworking artists.
Yes, those are two distinct issues with AI:
- LLMs were trained on copyright-protected content, devaluing the input a worker puts into creating original work
- LLMs are a tool for generating statistical variations and refinements of work; this doesn't devalue the input but makes generating output easier
Form vs. function issues. So it would be preferable to give people a legal pathway to keep making money and own their work, instead of allowing that work to be vacuumed up by people at corporations looking to automate them away. The functional issue would still exist, but it wouldn't put your personal work at risk of theft/abuse outside of its economic intent. Then the social stigma wouldn't really matter, because "an LLM is just a tool" would be a solid argument that doesn't enable abuse or erode existing legal protections.
their consent was not required. https://en.wikipedia.org/wiki/Transformative_use
petabytes of training data are transformed into mere gigabytes of model weights. no existing copyright laws are violated. until new laws declare that permission is required, this is a non-argument.
>If this AI worked without training, no one would say anything.
adobe firefly was trained on licensed content, and rest assured, the anti-AI zealots don't give it a pass.
copyright is just one of the many angles they use to decry the thing that threatens their jobs.
There is no final word on the matter yet and there are counterpoints to the "Transformative use" argument.
https://www.reuters.com/legal/litigation/judge-meta-case-wei...
> "You have companies using copyright-protected material to create a product that is capable of producing an infinite number of competing products," Chhabria told Meta's attorneys. "You are dramatically changing, you might even say obliterating, the market for that person's work, and you're saying that you don't even have to pay a license to that person."
> "I just don't understand how that can be fair use," Chhabria said.
https://ipwatchdog.com/2025/05/12/copyright-office-weighs-ai...
> Stylistic imitation even without substantial similarity would likely be implicated under such a [market-dilution] theory, which could be considered as a market effect under factor four that diminishes the value of the original work used to train the model.
I don't know how they verify it, but the article claims the model mentioned ("Moonvalley") was trained entirely on clean/licensed data.
I'd say the comparison points at a misunderstanding of the current controversy, though I realize you're doing that deliberately to ask "Is it really that different if you think about it?"
But I'll bite. MASSIVE is a crowd-simulation solution; the assets that go into the sim are still artist-created. Even in 2003, people were already used to this sort of division of labor. What the new AI tools do is shift the boundary between artists providing input parameters and assets vs. the computer doing what it's good at, massively and as a big step change. It's the magnitude of the step change that's causing the upset.
But there's also another reason artists are upset, which I think is the one most tech people don't really understand. Of course industrial-scale art leans on priors (sample and texture banks, stock images, etc.), but by and large operations still take a sort of point of pride in re-doing things from scratch for a given production where possible, rather than re-using existing elements, partly because it's understood that the work has so many variables it will come out a little different and add unique flavor to the end product. Artists see generative AI as regurgitation machines, interrupting that ethic of "this was custom-made anew for this work".
This is typically not an idea that software engineers share. We are comfortable re-using existing code as-is, and are even advised to. At most we consider "I rewrote this myself though I didn't need to" a valuable learning exercise, not good professional practice (cf. the ridicule reserved for NIH syndrome).
This is one of the largest differences between the engineering method and the artist's method. If an artist says "we went out there and recorded all this foley again by hand ourselves for this movie", it's considered better art for it. If a programmer says "I rolled my own crypto for my password manager SaaS", they're showing incredibly poor judgement.
It's a little like convincing someone that a lab-grown gemstone is identical to one dug up, even at the molecular level: yes, but the particular atoms, functionally identical or not, have a different history to them. To some that matters, and to artists the particulars of the act of creation matter a lot.
I don't think the genie can be put back in the bottle, and most likely we'll all just get used to things, but I think capturing this moment and what it did to communities and trades, purely as a form of historical record, is somehow valuable. I hope the future history books do the artists' lament justice, because there is certainly something happening to the human condition here.
I really like your comparison there between reused footage and reused code, where rolling your own password crypto is seen as a mistake.
There's plenty of reuse culture in movies and entertainment too - the Wilhelm scream, sampling in music - but it's all very carefully licensed and the financial patterns for that are well understood.
This is just moving the goalposts, though. I remember people making similar arguments in the early days of Photoshop, digital cameras (and what constitutes a "real" photographer), CGI, etc.
I agree the magnitude of the step change is upsetting, though.
Right, I agree the sentiment isn't new; I'm mostly just trying to explain that way of thinking.
But yeah, the tension between placing a value on doing things just in time vs. reducing the labor by using tools or assets has surely always been there in commercial art.
This is AI in the gamedev sense, not the present-hype sense.