Comment by MisterTea
13 hours ago
> They are infinitely patient, infinitely available, and unbelievably knowledgeable, it really is miraculous.
This is a strange way to talk about a computer program following its programming. I see no miracle here.
Chatting with an LLM resembles chatting with a person.
A human might be "empathetic", "infinitely patient, infinitely available". And (say) a book or a calculator is also infinitely available. When chatting with an LLM, you get an interface that's more personable than a calculator without being any less available.
I know the LLM is predicting text, & outputting whatever is most convincing. But it's still tempting to say "thank you" after the LLM generates a response which I found helpful.
> But it's still tempting to say "thank you" after the LLM generates a response which I found helpful
I don't feel that temptation, because I don't interact with objects that way.
It says more about you than about the objects in question. It's natural to react empathetically to natural-sounding conversation, and if you don't, you're emotionally closer to that object than the average person. Whether to be proud of that or not is another question.
I feel like I’ve seen more and more people fall for this trick recently. No, LLMs are not “empathetic” or “patient”, and no, they do not have emotions. They’re incredibly huge piles of numbers following their incentives. Their behavior convincingly reproduces human behavior, and they express what looks like human emotion… because their training data is full of humans expressing emotions. Sure, sometimes it’s helpful for their outputs to exhibit a certain affect or “personality”. But falling for the act and genuinely attributing human emotions to them is alarming to me.
It sounds like a regrettable situation: whether something is true or false, right or wrong, people don’t really care. What matters more to them is the immediate feeling. Today’s LLMs can imitate human conversation so well that they’re hard to distinguish from a real person. This creates a dilemma for me: when humans and machines are hard to tell apart, how should I view the entity on the other side of the chat window? Is it a machine or a human? A human.
There’s no trick. It’s less about what actually is going on inside the machine and more about the experience the human has. From that lens, yes, they are empathetic.
Technically they don't have incentives either. It's just difficult to talk about something that walks, swims, flies, and quacks without referring to duck terminology.
What are humans made of? Is it anything more special than chemistry and numbers?
Sounds like you aren't aware that a huge amount of human behavior that looks like empathy and patience isn't real either. Do you really think all those kind-seeming call-center workers, waitresses, therapists, schoolteachers, etc. actually feel what they're showing? It's mostly an act. Look at how adults fake laughter for an obvious example of everyday human emotion-faking.
It's more than that. This pile-of-numbers argument is really odd. I find it hard to square the idea that piles of numbers are "lesser" than humans with anything short of an admitted belief in the supernatural.