Comment by quotemstr
1 day ago
Even if they are not "like" human brains in some sense, are they "like" brains enough to be treated similarly under the law? Can you articulate the difference as something other than meat parochialism, which strikes me as arbitrary?
If LLMs are enough like human minds, then legally speaking we are abusing thinking, feeling, human-like beings possessing will and agency in ways radically worse than slavery.
What is missing in the "if I can remember and recite a program, then they must be allowed to remember and recite programs" argument is that you choose to do it (and you have basic human rights and freedoms), and they do not.
We're halfway to Roko's Basilisk here
All definitions are arbitrary if you're unwilling to couch them in human experience, because humans are the ones doing the defining. And my main difference is right there in my initial response: an LLM is a stateless function. At best, it is a snapshot of a human brain simulated on a computer, but at no point can it learn something new once deployed. And that is the MOST CHARITABLE interpretation, which I don't even concede; in reality, it is not even a snapshot of a brain.
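To make the "stateless function" claim concrete, here is a minimal sketch, assuming the Hugging Face transformers API ("gpt2" is just a stand-in checkpoint): at serving time the weights are frozen, so generation is a pure function of (weights, prompt), and nothing the model "experiences" persists between calls.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # "gpt2" is a stand-in checkpoint for illustration.
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()  # inference mode: no dropout, no training-time behavior

    # Freeze every parameter: no gradient update can happen at serving time.
    for p in model.parameters():
        p.requires_grad = False

    def generate(prompt: str) -> str:
        # A pure function of (frozen weights, prompt). With greedy decoding,
        # the same prompt always yields the same output, and nothing is
        # written back to the model between calls.
        ids = tok(prompt, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model.generate(ids, max_new_tokens=20, do_sample=False)
        return tok.decode(out[0], skip_special_tokens=True)

    print(generate("The GPL is"))
    print(generate("The GPL is"))  # identical: the "snapshot" never changes

Under greedy decoding the two calls print identical text; whatever training produced the weights, the deployed artifact cannot update them.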
All law is arbitrary. Intellectual property law perhaps most of all.
Famously, the output from monkey "artists" was found to be non-copyrightable, even though a monkey's brain is much more similar to ours than an LLM is. [1]
[1] https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...
If IP law is arbitrary, we get to choose between IP law that makes LLMs propagate the GPL and law that doesn't. It's a policy switch we can toggle whenever we want. Why would anyone want the propagates-GPL option, when that setting would make LLMs much less useful for basically zero economic benefit? That's the legal "policy setting" you choose when you basically want to stall AI progress, and it's not going to stall China's progress.