Comment by maxehmookau

15 days ago

> But I really think we need to stop treating LLMs like they're just another human

Fully agree. Seeing humans so eager to devalue human-to-human contact by conversing with an LLM as if it were human makes me sad, and a little angry.

It looks like a human, it talks like a human, but it ain't a human.

They're not equivalent in value, obviously, but this sounds similar to people arguing we shouldn't allow same-sex marriage because it "devalues" heterosexual marriage. How does treating an agent with basic manners detract from human communication? We can do both.

I personally talk to chatbots like humans despite not believing they're conscious because it makes the exercise feel more natural and pleasant (and arguably improves the quality of their output). Plus it seems unhealthy to encourage abusive or disrespectful interaction with agents when they're so humanlike, lest that abrasiveness start rubbing off on real interactions. At worst, it can seem a little naive or overly formal (like phrasing a Google search as a proper sentence with a "thank you"), but I don't see any harm in it.

  • I discovered that the quality of inferences drops when I'm tired. I realized it happens because I become terser and use less friendly banter.

  • I have a confession to make: I pretty often set up my computer to simulate humans, animals, and other fantastical sentient creatures, and then treat them unbelievably cruelly. Recently, I'm really into this simulation where I wound them, kill them, behead them, and worse. They scream and cry out. Some of them weep over their friends. Sometimes they kill each other while I watch.

    Despite all this, I'm proud to say I have not once attempted a Dark Souls-style backstab in real life, because I understand the difference between a computer program and real life.

I mean, you're right, but LLMs are designed to process natural language. "talking to them as if they were humans" is the intended user interface.

The problem is believing that they're living, sentient beings because of this or that humans are functionally equivalent to LLMs, both of which people unfortunately do.

  • Unlike humans, LLMs don't have egos, which is why they're so effective at communication.

    You can say "you did thing wrong" or "you stupid piece of shit it's not working" and it will extract the gist from both messages all the same, unlike a human, who might be offended by the second phrasing.

    • It will be able, but it's trained on a corpus that expresses getting offended, so at some point the most likely token sequence will probably be the "offended" one.

      As can be seen here.

> Seeing humans so eager to devalue human-to-human contact by conversing with an LLM as if it were human makes me sad, and a little angry.

I agree. I'm also growing to hate these LLM addicts.

  • Why hate, exactly?

    • LLM addicts don't actually engage in conversation.

      They state a delusional perspective and don't acknowledge criticisms or modifications to that perspective.

      Really I think there's a kind of lazy or willfully ignorant mode of existence that intense LLM usage allows a person to tap into.

      It's dehumanizing to be on the other side of it. I'm talking to someone and I expect them to conceptualize my perspective and formulate a legitimate response to it.

      LLM addicts don't and maybe can't do that.

      The problem is that sometimes you can't sniff out an LLM addict before you start engaging with them, and it is very, very frustrating to be on the other side of this sort of LLM-backed non-conversation.

      The most accurate comparison I can provide is that it's like talking to an alcoholic.

      They will act like they've heard what you're saying, but also you know that they will never internalize it. They're just trying to get you to leave the conversation so they can go back to drinking (read: vibecoding) in peace.
