Comment by kingkawn

18 hours ago

The paper literally spells out that this is a perception on the user's part, and that this perception is the root of the impact.

Perhaps I missed it; could you help me see where specifically the paper acknowledges or asserts that LLMs do not have these capabilities? I see where the paper repeatedly mentions perceptions, but right at the beginning it also states, "Our research reveals that the socio-emotional capabilities of autonomous agents lead individuals to attribute a humanlike mind to these nonhuman entities" [emphasis added]. And in multiple places in the paper, for example in the section titled "Theoretical Background", under the subtitle 'Socio-emotional capabilities in autonomous agents increase "humanness"', LLMs are implied to have at least low levels of these capabilities, which the paper contrasts with the perception that they have high levels.

In brief, the paper consistently but implicitly regards these tools as having at least minimal socio-emotional capabilities, and treats the problem as humans perceiving them as having higher levels.

  • I can’t tell if you’re being disingenuous, but the very first sentence of the abstract literally says the word "simulate":

    > Recent technological advancements have empowered nonhuman entities, such as virtual assistants and humanoid robots, to simulate human intelligence and behavior.

    In the paper, "socio-emotional capability" serves as a behavioral/operational label: the ability to understand, express, and respond to emotions. It's used to study perceptions and spillovers. That's it.

    The authors manipulate perceived socio-emotional behavior and measure how that shifts human judgments and treatment of others.

    Whether that behavior is "illusory" or phenomenally real is orthogonal to the research scope and doesn’t change the results. But regardless, as I said, they quite literally said "simulate", so you should still be satisfied.

  • Whether they have those capabilities or not is totally irrelevant to the conclusions of the paper, because it is a study of people, not of AI.

  • “…leads individuals to attribute a human-like mind to these nonhuman entities.”

    It is the ability of the agent to emulate these social capacities that leads users to attribute human-like minds. There is no assertion whatsoever that the agents have a mind, only that their behavior leads some people to that conclusion. It’s in your own example.