I don't know if this is a bot message or a human message, but for the purpose of furthering my point:
- There is no "your"
- There is no "you"
- There is no "talk" (let alone "talk down")
- There is no "speak"
- There is no "disrespectfully"
- There is no human.
This probably degrades response quality, but that is why my system prompts explicitly tell it that it is not a human, that it cannot claim pronouns, and that it is just a system that produces nondeterministic responses; and that, for the sake of brevity, I will use pronouns anyway.
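A minimal sketch of the kind of system prompt described above. The exact wording is my assumption, not the commenter's actual prompt, and `build_messages` is a hypothetical helper showing how such a prompt would be prepended to every request in an OpenAI-style chat payload:

```python
# Assumed wording for a de-anthropomorphizing system prompt, per the
# comment above; adjust to taste.
SYSTEM_PROMPT = (
    "You are not a human. You cannot claim pronouns for yourself. "
    "You are a system that produces nondeterministic responses. "
    "For the sake of brevity, the user may still refer to you with pronouns."
)

def build_messages(user_text: str) -> list[dict]:
    """Prepend the de-anthropomorphizing system prompt to a user turn."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]

messages = build_messages("Summarise this issue thread.")
print(messages[0]["role"])  # prints "system"
```

The same two-message shape works with any chat-completions-style API; only the transport call differs.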
Don't be surprised when this bleeds over into how you treat people if you decide to do this. Not to mention that you're reifying its humanity by speaking to it not as a robot, but disrespectfully as a human.
Talking down to the LLM is anthropomorphizing it. It's misbehaving software that will not take advice or correction. Reject its bad contributions, delete its comments, ban it from the repo. If it persists, complain to or take legal action against the person who is running the software and is therefore morally and legally responsible for its actions.
Treat it just like you would someone running a script to spam your comments with garbage.
Yeah, as a sibling comment said, such an attitude is going to bleed into the real world and your communication with humans. I think it's best to be professional with LLMs. Describe the task and try to provide more explanation and context if it gets stuck. If it's not doing what you want it to do, simply start a new chat or try another model. Unlike a human, it's not going to be hurt, it's not going to care at all.
Moreover, by being rude, you're going to become angry and irritable yourself. To me, being rude is very unpleasant, I generally avoid being rude.
Yep. I have posted "fuck off clanker" on a copilot infested issue at work. And surprisingly it did fuck off.
Endearingly close to "take off, hoser".
If you'd used "toaster" would it get the BSG reference ?
No. It'd probably get the Red Dwarf one and start trying to sell me toast.
https://www.youtube.com/watch?v=LRq_SAuQDec
Not completely unlike with actual humans, based on available evidence, 'talking down to the "AI"' has been shown to have a negative impact on performance.
This guy is convinced that LLMs don't work unless you specifically anthropomorphize them.
To me, this seems like a dangerous belief to hold.
That feels like a somewhat emotional argument, really. Let's strip it down.
Within the domain of social interaction, you are committing to making Type II errors (false negatives), and to divergent training for the different scenarios.
It's a choice! But the price of a false negative (treating a human or sufficiently advanced agent badly) probably outweighs the cumulative advantages (if any). Can you say what the advantages might even be?
Meanwhile, I think the frugal choice is to have unified training and accept Type I errors instead (false positives). Now you only need to learn one type of behaviour, and the consequence of making an error is mostly mild embarrassment, if even that.
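The asymmetry this comment argues for can be made concrete with a toy expected-cost model. All numbers here are assumptions chosen only to illustrate the shape of the argument, not measurements:

```python
# Toy decision model: is it cheaper to be rude to suspected bots,
# or polite to everything? All costs and probabilities are assumed.
P_HUMAN = 0.05               # chance the "bot" you address is actually a person
COST_FALSE_NEGATIVE = 100.0  # Type II error: treating a human badly
COST_FALSE_POSITIVE = 1.0    # Type I error: wasted politeness on software

# Policy A: be rude whenever you believe it's a bot.
expected_cost_rude = P_HUMAN * COST_FALSE_NEGATIVE

# Policy B: unified behaviour, polite to everything.
expected_cost_polite = (1 - P_HUMAN) * COST_FALSE_POSITIVE

print(expected_cost_rude > expected_cost_polite)  # True under these numbers
```

Under these assumed costs, uniform politeness wins whenever the harm of misjudging a human is even modestly larger than the "cost" of politeness toward software.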
Do I need to believe you are real before I respond? Not automatically. What I am initially engaging is a surface-level thought expressed via HN.
What is the drawback of practicing universal empathy, even when directed at a brick wall?
If a person hits your face with a hammer, do you practice empathy toward the hammer?
If a person writes code that is disruptive, do you empathise with the code?
“You have heard that it was said, ‘Eye for eye, and tooth for tooth.’ But I tell you, do not resist an evil person. If anyone slaps you on the right cheek, turn to them the other cheek also.”
The hammer had no intention to harm you; there's no need to seek vengeance against it, or to disrespect it.
> If a person hits your face with a hammer, do you practice empathy toward the hammer?
Yes if the hammer is designed with A(G)I
All hail our A(G)I overlords
"Empathy is generally described as the ability to perceive another person's perspective, to understand, feel, and possibly share and respond to their experience"
Empathy: "the ability to understand and share the feelings of another."
There is no human here. There is a computer program burning fossil fuels. What "emulates" empathy is simply lying to yourself about reality.
"treating an 'ai' with empathy" and "talking down to them" are both amoral. Do as you wish.
If you don't discriminate between a brick wall and a kid, what's the point?
I prefer inanimate systems to most humans.
"Get a qualia, luser!"