Comment by yarn_
6 hours ago
"It would be astonishing if people were able to casually not antropomorphize LLMs"
Precisely. Even for technical people, I doubt it's possible to totally stop your own brain from ever, unconsciously, treating the entity you're speaking to like a sentient being. Most technical people will still put some emotion into their prompts: saying please or thank you, giving qualitative feedback for no reason, expressing anger at the model, etc.
It's just impossible to separate our capacity for conversation from our sense that we're actually talking to "someone" (in the vaguest sense).
There are times, when using Claude for coding, that I genuinely get annoyed at it, and I find it cathartic to include that emotion in my prompt, even though I know it doesn't have feelings; expressing emotions rather than bottling them up can often be an effective way to deal with them. Sometimes this even influences how it handles things: it notes my frustration in its "thinking" and then tries to solve my immediate problem more directly, rather than cleverly working around things in a way I didn't want.
What are the odds that Anthropic is building a psychological profile on you based on your prompts and when and how quickly you lose control over your emotions?
Worse, models often perform better when you use that natural language, because that's the kind of language they were trained on. I say "worse" because speaking to them that way naturally humanizes them, too.
(As an ML researcher) I think one of the biggest problems we have is that we're trying to make a duck by making an animatronic duck indistinguishable from a real duck. In some ways this is a reasonable approach, but it only allows us to build a thing that's indistinguishable from a real duck to us, not indistinguishable from a real duck to something/someone else. It seems like a fine point, but the duck test only lets us conclude that something is probably a duck, not that it is a duck.
Yes, I've experienced the sense that there's a person on the "other end" even when I have been perfectly aware that it's a bag of matrices. Brains just have people-detectors that operate below conscious awareness. We've been anthropomorphizing stuff as impersonal as the ocean for as long as there have been people, probably.
Maybe it is a dangerous habit to instruct entities in plain English without anthropomorphizing them to some extent, without at least being polite? It should feel unnatural to do that.
Yeah, my instinct is that we're naturally going to have emotions about anything we interact with through language, and trying to suppress them probably isn't healthy in the long run. I've also seen plenty of instances of people getting upset when someone who isn't a native speaker of their language, or even a pet that doesn't speak any language, doesn't understand verbal instructions, so there's probably something to be said for learning how to stay polite even when frustrated. I've definitely noticed that I'm far more willing to express my annoyance at an LLM than at another actual human, and this does make me wonder whether it's a habit I should break sooner rather than later, to avoid it having any effect on my overall mindset.
It does feel unnatural to me. I want to be frugal with compute resources, but then I have to make sure I still use appropriate language in emails to humans.
This. Right now, I'm assuming you're all humans, and so are my coworkers, the other people driving cars around me, and so on. How easy is it to dehumanize actual humans? If I don't try to remain polite in all written English conversations, including those with LLMs, that's going to trickle over into the rest of my interactions too. It's not that the LLMs deserve it; it's just a habit I know I need to maintain.