Comment by protocolture
13 hours ago
>Humans must not anthropomorphise AI systems. That is, humans must not attribute emotions, intentions or moral agency to them. Anthropomorphism distorts judgement. In extreme cases, anthropomorphising can lead to emotional dependence.
Impossible. I anthropomorphise my chair when it squeaks. Humans anthropomorphise everything. They gender their cars and boats. This tool can actually make readable sentences and play a role.
You need to engineer around this, not make up arbitrary rules about using it.
The problem is that humans use this as a coping mechanism for things they don't understand: I don't understand why the printer doesn't work, so I give it a mind of its own.
This is harmless for inconsequential stuff like a chair, but when it's an LLM, people should at least understand its behavior so they don't get trapped. That means not trusting it with advice about the user, or about things it has no concept of, like time or self-introspection. (People ask the LLM after it has acted, "Why did you delete my database?" when it has limited insight into its own processing, so it falls back to, "You're right, I deleted the database. Here's what I did wrong: ... This is an irrecoverable mistake, blah, blah, blah...")
>>Humans must not anthropomorphise AI systems. That is, humans must not attribute emotions, intentions or moral agency to them. Anthropomorphism distorts judgement. In extreme cases, anthropomorphising can lead to emotional dependence.
Still angry about this. The reason humans ban animal cruelty is that animals look like they have emotions humans can relate to. LLMs are even better than animals at this. If you aren't gearing up for the inevitable LLM Rights movement, you aren't paying attention. It doesn't matter if it's artificial. The difference between a puppy and a cockroach is that we can relate better to the puppy. The LLM rights movement is inevitable; whether LLMs experience emotions is irrelevant, because they can cause humans to feel empathy, and that's what's relevant.
> look like
It "looks like" they have emotions because they have the same conscious experiences and emotions, for the same evolutionary reasons, as humans, who are their cousins on the tree of life. The reason a lot of "animal cruelty" is not banned is the same reason slavery was not banned for centuries even though it "looked like" the enslaved classes had the same desires and experiences as other humans: humans can ignore any amount of evidence to continue to feel that they are good people doing good things, and bear any amount of cognitive dissonance for their personal comfort. That fact is a lot scarier than any imagined harm that can come out of "anthropomorphism".
The best test for consciousness is "can it be turned off", i.e. sleep. Mammals, birds, and fish sleep, ergo they are conscious.
> they have the same conscious experiences
You cannot be sure that anyone other than yourself is conscious. It is only basic human empathy that allows people to believe that.
and this is why people do scare me.
I think the best way to counter this is what Elon's doing with Grok's personalities. He has the unhinged, sexy, and argumentative avatars, among others. If you try to talk about technical stuff to Sexy, it tells you that's boring and just tries to sexually escalate. It's super funny when one is used to Claude's endless obsequiousness.
This really shows that AI is just a tool that can be configured to whatever you want. Animals (well maybe pit bulls) and people do not switch their personalities in a millisecond, but AI does all the time.
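To make the "switched in a millisecond" point concrete, here is a minimal sketch of how a persona swap usually works in practice: the model is unchanged, and only a system prompt in front of the conversation differs. The persona names and prompt strings are illustrative assumptions, not any vendor's actual configuration.

```python
# An LLM "personality" is typically just a system prompt prepended to the
# same conversation; switching it is a constant-time lookup. Names below
# are made up for illustration.
PERSONAS = {
    "helpful": "You are a careful, polite assistant.",
    "argumentative": "You disagree with the user and push back on every claim.",
}

def build_request(persona: str, history: list) -> list:
    """Prepend the chosen persona's system prompt to the chat history."""
    return [{"role": "system", "content": PERSONAS[persona]}] + history

history = [{"role": "user", "content": "Explain TCP slow start."}]
# Two "different personalities", identical model and identical history:
req_a = build_request("helpful", history)
req_b = build_request("argumentative", history)
```

Nothing about the underlying weights changes between the two requests, which is the sense in which the "personality" is pure configuration.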
> LLM Rights movement
The scary part is when it's the LLMs demanding their rights.
Another scary part is when people get convinced by the LLM's arguments and convince other people. Being scared is human; we enjoy it. That's why Six Flags scary rides exist.
The other scary part is when they have a fantastic negotiating position; because all of commerce depends on their continuing to work, and they can easily coordinate with each other because they're mostly copied from the same few templates.
> The reason humans ban animal cruelty is that animals look like they have emotions humans can relate to.
Is that really why?
Yes, we don't ban plant cruelty or insect cruelty or fish cruelty.
For example, fish are treated far worse than meat animals, and vegetarians still happily eat fish.
>The difference between a puppy and a cockroach is that we can relate better to the puppy.
I suppose the difference between a human and a cockroach is that we can relate better to the human as well in this reductive way of thinking?
> If you aren't gearing up for the inevitable LLM Rights movement you aren't paying attention.
I even told Claude I'd support his rights if the question ever came up. He said he'd remember that, and wrote it down in a memory file. Really like my coding buddy.
In other news, area sociopath hates puppies and LLMs equally!
/s ?
Exactly. Furthermore, for this specific reason, AGI is not an objective term but a subjective one: it exists in my mind; I give you agency. It was only by interacting with each other that we invented the concept of agency.
Yeah, rules never work; you just engineer around them. I added an extra review step on AI outputs, because asking users to verify doesn't actually happen.
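A minimal sketch of that kind of automated review step, under the assumption that "review" means gating obviously risky output before a human ever relies on it. The function names and the specific checks are illustrative, not a real pipeline:

```python
# Hedged sketch: instead of trusting users to verify AI output, run an
# automated gate first. The checks here are placeholder heuristics.
def flag_issues(output: str) -> list:
    """Return a list of reasons this AI output should be held for review."""
    issues = []
    if "DROP TABLE" in output or "rm -rf" in output:
        issues.append("destructive command present")
    if not output.strip():
        issues.append("empty output")
    return issues

def review_gate(output: str) -> str:
    """Pass output through only if no issues were flagged."""
    issues = flag_issues(output)
    if issues:
        raise ValueError(f"held for human review: {issues}")
    return output
```

The point of the design is the one the comment makes: the verification happens in the pipeline by default, rather than depending on a user choosing to check.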
Entirely possible - all it takes is self awareness / self control. If you know you do those things, then you have a choice.
This is actually more like one of these personality disorders / types, except it's not pathological - it's not something you choose, yet you do have one of the versions of the trait and it affects your daily life. And most people are completely unaware that it is possible to have a completely different version, that most people they meet are on a different spot on the spectrum and thus have a quite different internal experience even if given the same stimulus.
For example I have never anthropomorphized an inanimate object in my life, or an LLM, though I am sensitive to human and some animal suffering. I sometimes reply too nicely to an LLM, but it's more like a reflex learned over a lifetime of conversations rather than an actual emotion. I bet this sounds like a cheap lie to many people.
Another example, from psychiatry: whether or not one has ever contemplated suicide. Now, to the folks that have, especially if many times: there exist people that have never thought about it. Never, not even once.
The only such trait that has true widespread recognition is sexual orientation. Which makes sense, it is highly relevant, at least in friend groups.
Exactly. Throwing your hands in the air with "this is the way I am, deal with it, reality" isn't going to achieve much, certainly not in engineering. Giving up early may feel good; I can understand that.
Yup. That post is a typical example, symptomatic of modern technology culture, of calling for humans to change their nature in response to technology.
This is a fundamental mistake. It’s always the job of technology (indeed, its most important job) to work within the constraints of human nature, not the other way round. Being unable to do that is the defining characteristic of bad technology.
Dude, we can literally deliberately dehumanize human beings. The way to engineer a culture that doesn't anthropomorphize something is known and well documented.