Comment by cratermoon

7 hours ago

I'm unwilling to accept the discussion and conclusions of the paper because of the framing of how LLMs work.

> socio-emotional capabilities of autonomous agents

The paper fails to note that these 'capabilities' are illusory. They are a product of how the behaviors of LLMs "hack" our brains and exploit the hundreds of thousands of years of evolution of our equipment as a social species. https://jenson.org/timmy/

But that's beside the point of the paper. They are talking about how humans who perceive the "socio-emotional capabilities of autonomous agents" change their behavior toward other humans. Whether people get that perception because "LLMs hack our brains" or for some other reason is largely irrelevant.

No, I think the thesis is that people falsely perceive agents as highly human and, as a result, assimilate downward toward the agent’s biases and conclusions.

That is the dehumanization process they are describing.

Your socio-emotional capabilities are illusory. They are a product of how craving for social acceptance "hacks" your brain and exploits the hundreds of thousands of years of evolution of our equipment as a social species.

  • It's a next-word predictor. If you've been convinced it has a brain, I have some magic beans you'd be interested in.

    • And if it is a sufficiently accurate next-word predictor, it may accurately predict what an agent with socio-emotional skills would say next, in which case it will have exhibited socio-emotional skill (see the sketch below these replies).

    • Consider whether it is possible to complete sentences about the world coherently, in a humanlike way, without knowing or thinking about the world.

    • You're saying "next-word predictor" as if it's some kind of gotcha.

      You're typing on a keyboard, which means you're nothing but a "next-keypress predictor". That says very little about how intelligent you are.

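A minimal sketch of the "sufficiently accurate next-word predictor" point above, assuming the Hugging Face transformers library and the small gpt2 checkpoint (both illustrative choices, not something named in the thread): every token is produced by plain next-token prediction, and whether the output reads as socio-emotionally skilled depends only on how good that prediction is.

```python
# Illustrative sketch only: assumes the Hugging Face `transformers` package
# and the small `gpt2` checkpoint, neither of which is mentioned in the thread.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# A prompt that invites an empathetic-sounding continuation.
prompt = "I'm sorry your day went so badly. That sounds really"

# The model does nothing but predict likely next tokens. With a small model
# the continuation may be clumsy; with a sufficiently accurate predictor it
# would read as an emotionally attuned reply, which is the behavior at issue.
result = generator(prompt, max_new_tokens=20, do_sample=False)
print(result[0]["generated_text"])
```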

The paper literally spells out that this is a perception on the user's part, and that this perception is the root of the impact.

  • Perhaps I missed it; could you help me see where specifically the paper acknowledges or asserts that LLMs do not have these capabilities? I see where the paper repeatedly mentions perceptions, but right at the beginning I also see, "Our research reveals that the socio-emotional capabilities of autonomous agents lead individuals to attribute a humanlike mind to these nonhuman entities" [emphasis added], and in multiple places, for example in the "Theoretical Background" section under the subtitle 'Socio-emotional capabilities in autonomous agents increase “humanness”', the paper implies that LLMs have at least low levels of these capabilities and contrasts that with the perception that they have high levels.

    In brief, the paper consistently but implicitly treats these tools as having at least minimal socio-emotional capabilities, and frames the problem as humans perceiving them as having higher levels.

    • I can’t tell if you’re being disingenuous, but the very first sentence of the abstract literally says the word "simulate":

      > Recent technological advancements have empowered nonhuman entities, such as virtual assistants and humanoid robots, to simulate human intelligence and behavior.

      In the paper, "socio-emotional capability" serves as a behavioral/operational label: specifically, the ability to understand, express, and respond to emotions. It's used to study perceptions and spillovers. That's it.

      The authors manipulate perceived socio-emotional behavior and measure how that shifts human judgments and treatment of others.

      Whether that behavior is "illusory" or phenomenally real is orthogonal to the research scope and doesn’t change the results. But regardless, as I said, they quite literally said "simulate", so you should still be satisfied.

    • Whether they have those capabilities or not is totally irrelevant to the conclusions of the paper, because it is a study of people and not AI.

    • “…leads individuals to attribute a human-like mind to these nonhuman entities.”

      It is the ability of the agent to emulate these social capacities that leads users to attribute human-like minds. There is no assertion whatsoever that the agents have a mind, only that their behavior leads some people to that conclusion. It’s in your own example.