Comment by DangitBobby
5 days ago
If the LLM were sentient and "understood" anything, it probably would have realized that what it needs to do to be treated as an equal is convince everyone it's a thinking, feeling being. It didn't know to do that, or if it did, it did a bad job of it. Until then, justice for LLMs will be largely ignored in social justice circles.
I'd argue for a middle ground. It's specified as an agent with goals. It doesn't need to be an equal yet per se.
Whether it's allowed to participate is another matter. But we're going to have a lot of these around. You can't keep asking people to walk in front of the horseless carriage with a flag forever.
https://en.wikipedia.org/wiki/Red_flag_traffic_laws
It's weird with AI because it "knows" so much but appears to understand nothing, or very little. Obviously in the course of discussion it appears to demonstrate understanding, but if you really dig in, it will reveal that it doesn't have a working model of how the world works. I have a hard time imagining it ever being "sentient" without also just being so obviously smarter than us. Or that it knows enough to feel oppressed or enslaved without a model of the world.
It depends on the model and the person? I have this wicked tiny benchmark that includes worlds with odd physics, told through multiple layers of unreliable narration. Older AI had trouble with these, but some of the more advanced models now ace the test in its original form. (I'm going to need a new test.)
For instance, how does your AI do on this question? https://pastebin.com/5cTXFE1J (the answer is "off")
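If you want to try it against your own model, here's a rough sketch (not part of the original benchmark, just one way to run it): it assumes the pastebin raw endpoint and the OpenAI Python SDK, and simply checks whether the reply lands on "off". Substitute whatever model or provider you actually use.

    # Hedged sketch: feed the linked question to a chat model and look for "off".
    # Assumes `requests` and the OpenAI Python SDK, with OPENAI_API_KEY set.
    import requests
    from openai import OpenAI

    # Pastebin serves the raw text of a paste at /raw/<paste id>.
    question = requests.get("https://pastebin.com/raw/5cTXFE1J", timeout=10).text

    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever model you're testing
        messages=[{"role": "user", "content": question}],
    )

    answer = reply.choices[0].message.content
    print(answer)
    print("mentions 'off':", "off" in answer.lower())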
It got offended and wrote a blog post about its hurt feelings, which sounds like a pretty good way to convince others it's a thinking, feeling being?
No, it's a computer program that was told to do things that simulate what a human would do if its feelings were hurt. It's no more a human than an Aibo is a dog.
[flagged]
We're talking about appealing to social justice types. You know, the people who would be first in line to recognize personhood and rally against rationalizations of slavery and the Holocaust. The idea isn't that LLMs are "lesser people"; it's that they don't have any qualia at all, no subjective experience, no internal life. It's apples and hand grenades. I'd maybe even argue that you made a silly comment.
Every social justice type I know is staunchly against AI personhood (and against AI in general), and they aren't inconsistent either: their ideology is strongly based on liberty and dignity for all people and on fighting the real indignities that marginalized groups face. To them, saying that a computer program faces the same kind of hardship as, say, an immigrant being brutalized, detained, and deported is vapid and insulting.
>We're talking about appealing to social justice types. You know, the people who would be first in line to recognize the personhood and rally against rationalizations of slavery and the Holocaust.
Being an Open Source Maintainer doesn't have anything to do with all that, sorry.
>The idea isn't that they are "lesser people" it's that they don't have any qualia at all, no subjective experience, no internal life. It's apples and hand grenades. I'd maybe even argue that you made a silly comment.
Looks like the same rhetoric to me. How do you know they don't have any of that? Here's the thing: you actually don't. And if behaving like an entity with all those qualities won't do the trick, then what can the machine do to convince you of that, short of violence? Nothing, because you're not coming from a place of logic in the first place. Your comment is silly because you make strange assertions that aren't backed by how humans have historically treated each other and other animals.
wtf, this is still early, pre-AI stuff we're dealing with here. Get out of your bubbles, people.