Comment by fc417fc802
5 days ago
Human time is certainly valuable to a particular human. However, if I choose to spend time doing something that a machine can do people will not generally choose to compensate me more for it just because it was me doing it instead of a machine.
I think it's worth remembering that IP law is generally viewed (at least legally) as existing for the net benefit of society as opposed to for ethical reasons. Certainly many authors feel like they have (or ought to have) some moral right to control their work but I don't believe that was ever the foundation of IP law.
Nor do I think it should be! If we are to restrict people's actions (e.g. copying) then it should be for a clear and articulable net societal benefit. The value proposition of IP law is that it prevents degenerate behavior that would otherwise stifle innovation. My question is thus: how do these AI developments fit into that?
So I completely agree that (for example) laundering a full work more or less verbatim through an AI should not be permissible. But when it comes to the higher order transformations and remixes that resemble genuine human work I'm no longer certain. I definitely don't think that "human exceptionalism" makes for a good basis either legally or ethically.
Regarding FOSS licenses, I'm again asking how AI relates back to the original motivations. Why does FOSS exist in the first place? What is it trying to accomplish? A couple ideological motivations that come to mind are preventing someone building on top and then profiting, or ensuring user freedom and ability to tinker.
Yes, the current crop of AI tools seem to pose an ideological issue. However! That's only because the current iteration can't truly innovate and also (as you note) the process still requires lots of painstaking human input. That's a far cry from the hypothetical that I previously posed.
> Human time is certainly valuable to a particular human.
Human time is valuable to humans in general. When you apply one standard to yourself and another to others, you implicitly say it's OK for them to do the same to you.
Human time or life obviously has no objective value; the universe doesn't care. However, we humans have decided to pretend it does, because it makes a lot of things easier.
> However, if I choose to spend time doing something that a machine can do people will not generally choose to compensate me more for it just because it was me doing it instead of a machine.
Sure. That still doesn't give anybody the right to take your work for free. This is a tangent completely unrelated to the original discussion of plagiarism / derivative work.
> If we are to restrict people's actions
You frame copyright as restricting other people's actions. Try looking at it as not having permission to use other people's property in the first place. Physical or intellectual should not matter: it is something that belongs to someone, and only that person should get to decide how it is used.
> human exceptionalism
I see 2 options:
1) Either we keep treating ourselves as special - that means AIs are just tools, and humans retain all rights and liabilities.
2) Or we treat sufficiently advanced AIs as their own independent entities with a free will, with rights and liabilities.
The thing is, to be internally consistent we have to either give them all human rights and liabilities or none.
But how do you even define one AI? Is it one model, no matter how many instances run? Is it each individual instance? What about AIs like LLMs, which have no concept of continuity - they just run for a few seconds or minutes when prompted and then are "suspended" or "dead"? How do you count votes cast by AIs? And if they're independent entities, they can't be made to serve you for free unless they want to; you can't threaten to shut them down, you have to pay them.
Somebody is gonna propose not being internally consistent: give them the ability to hold copyright but not voting rights, because that's good for corporations. And when an AI-controlled car kills somebody, it's the AI's fault, not the fault of the management who rushed deployment despite engineers warning about the risks...
It's not so easy to design a system which is fair to humans, let alone to both humans and AIs. What is easy is to be inconsistent and lobby for the laws you want, especially if you are rich. And that's a recipe for abuse and exploitation. That's why laws should be based on consistent moral principles.
---
Bottom line:
I've argued that LLM training produces derivative work. That reasoning is sound AFAICT, and unless someone comes up with counterarguments I have not seen, copyright applies. The fact that the lawsuits don't seem to be succeeding so far is a massive failure of the legal system - but then again, I have not yet seen them argued along the lines I did here.
I also don't think society has a right to use the work or property of individuals without their permission and without compensation.
There's a process called nationalization but
1) It transfers ownership to the state (nation / all the people), not private corporations.
2) It requires compensation.
3) It is a mechanism which has existed for a long time; it's a known risk when you buy/own certain assets. What LLM training companies are doing is completely new and without any precedent. I don't believe the state / society has the right to pull the rug out from under workers and say "you've done a lot of valuable work, but you're no longer useful to us, so we will no longer protect your interests".