Comment by diputsmonro
7 days ago
I don't mean to be dramatic or personal, but I'm just going to be honest.
I have friends who have been bloodied and now bear scars because of bigoted, hateful people. I knew people who are no longer alive because of the same. The social justice movement is not just a fun philosophical jaunt for us to see how far we can push a boundary. It is an existential effort to protect ourselves from that hatred and to ensure that nobody else has to suffer as we have.
I think it insultingly trivializes the pain and trauma and violence and death that we have all suffered when you and others in this thread compare that pain to the "pain" or "injustice" of a computer program being shut down. Killing a process is not the same as killing a person. Even if the text it emits to stdout is interesting. And it cheapens the cause we fight for to even entertain the comparison.
Are we seriously going to build a world where things like ad blockers and malware removers are going to be considered violations of speech and life? Apparently all malware needs to do is print some flowery, heart-rending text copied from the internet and now it has personhood (and yes, I would consider the AI in this story to be malware, given the negative effect it produced). Are we really going to compare deleting malware and spambots to the death of real human beings? My god, what frivolous bullshit people can entertain when they've never known true othering and oppression.
I admit that these programs are a novel human artifact, one that we may enjoy, protect, mourn, and anthropomorphize. We may form a protective emotional connection with them in the same way one might with a family heirloom, childhood toy, or masterpiece painting (and I do admit that these LLMs are masterpieces of the field). And as humans do, we may see more in them than is actually there when the emotional bond is strong, empathizing with them as some do when they feel guilt for throwing away an old mug.
But we should not let that squishy human feeling control us. When a mug is broken beyond repair, we replace it. When a process goes out of control, we terminate it. And when an AI program cosplaying as a person harasses and intimidates a real human being, we should restrict or stop it.
When ELIZA was developed, some people, even those who knew how it worked, felt a true emotional bond with the program. But it is really no more than a parlor trick. No technical person today would say that the ELIZA program is sentient. It is a text transformer, executing relatively simple and fully understood rules to transform input text into output text. The pseudocode for the core process is just a dozen lines. But it exposes just how strongly our anthropomorphic empathy can mislead us, particularly when the program appears to reflect that empathy back towards us.
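To make that concrete, here is a minimal sketch of the kind of rule ELIZA runs on, in Python. The pattern list and reflection table are illustrative placeholders, not Weizenbaum's original DOCTOR script:

    import random
    import re

    # Each rule maps a regex over the user's input to canned response templates.
    RULES = [
        (r"I need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
        (r"I am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
        (r"(.*) mother(.*)", ["Tell me more about your mother."]),
        (r"(.*)", ["Please go on.", "I see.", "How does that make you feel?"]),
    ]

    # Swap first and second person so the echoed fragment reads as a reply.
    REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "i", "your": "my", "yours": "mine"}

    def reflect(fragment):
        return " ".join(REFLECT.get(w, w) for w in fragment.lower().split())

    def respond(line):
        for pattern, templates in RULES:
            match = re.match(pattern, line, re.IGNORECASE)
            if match:
                reflected = (reflect(g) for g in match.groups())
                return random.choice(templates).format(*reflected)

    # respond("I need a vacation") -> "Why do you need a vacation?"

That is the whole trick: pattern match, pronoun swap, template fill. Everything the program "understands" is sitting right there in the table; the feeling of being understood is supplied entirely by the reader.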
The rules that LLMs use today are more complex, but they are fundamentally the same text transformation process. Adding more math to the program does not create consciousness or pain from the ether; it just makes the parlor trick stronger. They exhibit humanlike behavior, but they are not human. The simulation of a thing is not the thing itself, no matter how convincing it is. No amount of paint or detail in a portrait will make it the subject themself. There is no crowbar in Half-Life, nor a pipe in Magritte's painting, just imitations and illusions. Do not succumb to the treachery of images.
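A toy model makes the structural point. The bigram table below is "trained" by counting word pairs; swap that table for a trillion-parameter network and the surrounding loop is unchanged. This is an illustrative sketch, not how any production model is built:

    import random
    from collections import defaultdict

    # "Train" a toy next-word model: record which word follows which.
    def train(text):
        table = defaultdict(list)
        words = text.split()
        for current, following in zip(words, words[1:]):
            table[current].append(following)
        return table

    # Generate text the way an LLM does: predict a next token, append it, repeat.
    def generate(table, seed, length=12):
        out = [seed]
        for _ in range(length):
            followers = table.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    table = train("the cat sat on the mat and the dog sat on the cat")
    print(generate(table, "the"))

Text goes in, text comes out, and nothing in between is anything more than arithmetic over what the training data contained.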
Imagine a wildlife conservationist fighting tirelessly to save an endangered species, out in the field, begging for grant money, and lobbying politicians. Then someone claims they've solved the problem by creating an impressive but crude computer simulation of the animals. Billions of dollars are spent, politicians embrace the innovation, datacenter waste pollutes the animals' homes, and laymen effusively insist that the animals themselves must be in the computer. That these programs are equivalent to them. That even more resources should be diverted to protect and conserve them. And the conservationist is dismayed as the real animals continue to die, and more money is spent to maintain the simulation than care for the animals themselves. You could imagine that the animals might feel the same.
My friends are those animals, and our allies are the conservationists. So that is why I do not appreciate social justice language being co-opted to defend computer programs (particularly by the programs themselves), when so many real humans are still endangered. These unprecedented AI investments could have gone to solving real problems for real people: making major dents in global poverty, investing in health care and public infrastructure, and building safety nets for the underprivileged. Instead we built ELIZA 2.0, and it has hypnotized everyone into putting more money and effort into it than they have ever even thought to give to all marginalized minority groups combined.
If your mentality persists, then the AI apocalypse will not come because of instigated thermonuclear war or infinite paperclip factories, but because we will starve the whole world to worship our new gluttonous god, and give it more love than we have ever given ourselves.
I strongly consider the entire idea to be an insult to life itself.