AI-induced dehumanization (2024)

4 hours ago (myscp.onlinelibrary.wiley.com)

I can't tell if I'm just getting old, but the last 2 major tech cycles (cryptocurrency and AI) have both seemed like net negatives for society. I wonder if this is how my parents felt about the internet back in the 90s.

Interestingly, both technologies also supercharge scams - one by providing a way to cash out with minimal risk, the other by making convincing human interaction easier to fake.

  • This parallel is something that I've been mulling over for the better part of this year.

    Are we simply getting old and bitter?

    Personally, I would add a previous cycle to this: social media, although people were quick to point at the companies, which were sparked and empowered by unprecedented distribution.

    Are we really better or worse off than a few decades ago?

    • > Are we simply getting old and bitter?

      No, we are getting wiser. It's not bitterness to look at a technology with a critical eye and see the bad effects as well as the good. It's not foolish to judge that the negative effects outweigh the positive. It's a mark of maturity. "But strong meat belongeth to them that are of full age, even those who by reason of use have their senses exercised to discern both good and evil."

    • > Are we simply getting old and bitter?

      For crypto, no. It's basically only useful for illegal actions, so if you live in a society where illegal is well correlated with "bad", you won't see any benefit from it.

      The case for LLMs is more complicated. There are positives and negatives. And the case for social networks is even more complicated, because they are objectively no longer what they used to be.

    • > Are we simply getting old and bitter?

      Maybe, but it has nothing to do with change itself.

      Change can be either positive or negative. Often it is objectively negative and can stay that way for decades.

  • I think the progression of sentiment is basically the same. There were lots of folks pushing the agenda that connecting us all would somehow bring about the evolution of the human race by putting information at our fingertips. That was eventually followed by concern about kids getting obsessed/porn-saturated.

    The same cycle happened (is happening) with crypto and AI, just in more compressed timeframes. In both cases, an initial period of optimism transitioned into growing concerns about the negative effects on our societies.

    The optimistic view would be that the cycle shortens so much that the negatives of a new technology are widely understood before that tech becomes widespread. Realistically, we'll just see the amorality and cynicism on display and still sweep it under the rug.

  • > Interestingly, both technologies also supercharge scams

    Similar for the internet back in the 90s: Nigerian princes were provided a means to reach exponentially more people, faster.

  • A large part of it is that we maxed out a lot of how communication tech can impact daily life, at least in terms of communication, while economically and culturally we got in the habit of looking for new and exciting improvements to daily life.

    The 19th and 20th centuries saw a huge shift in communication. We went from snail mail to telegrams to radio to phones to television to internet on desktops to internet on every person wherever they are. Every 20-30 years some new tech made it easier, cheaper, and faster to get your message to an intended recipient. Each of these was a huge social shift in terms of interpersonal relationships, commerce, and diminishing cycle times, and we've grown to expect these booms and pivots.

    But there isn't much of anywhere to go past "can immediately send a message to anyone anywhere." It's effectively an end state. We can no longer take existing communication services and innovate on them by merely offering the same service on revolutionary new tech. But tech sectors are still trying to recreate the past economic booms by pushing technologies that aren't as revolutionary or as promising, and hyping them up to get people thinking they're the next stage of the communication technology cycle.

    • > A large part of it is that we maxed out a lot of how communication tech can impact daily life, at least in terms of communication,

      Perhaps for uneducated casual communications, lacking in critical analysis. The majority of what passes for "communication" is misunderstood, misstated, omits key critical aspects, and speaks from an uninformed and unexamined position... the human race may "communicate" but does so very poorly, to the degree that much of the human activity in our society is placeholder and merely good enough, while being in fact terrible and damaging.

  • They are both force multipliers. The issue of course is that technology almost always disproportionately benefits the more intelligent / ruthless.

    • I think the biggest problem with both technologies is how many people seem to think this.

      Crypto was a way for people who think they’re brilliant to engage in gambling.

      AI is a way for “smart” people to create language that makes their opinions sound “smarter”.

  • I'm not generally anti-capitalist, but what capitalism has become at this point in history means that technology is no longer for helping people or helping society.

    Imagine the DVR being invented today. A commercial device that helps you skip ads. It would never be allowed to happen.

    • > Imagine the DVR being invented today. A commercial device that helps you skip ads. It would never be allowed to happen.

      That's arguably what AI is - it compressed the internet so that you can extract StackOverflow answers without clicking through all the fucking ads that await you on the journey from search bar to the answer you were looking for.

      You can of course expect it, over the next decade or so, to interpose ads between you and your goal in the same way that Google and StackOverflow did from 2010-now.

      But for the moment I think it's the exact opposite of your thesis. The AI companies are in cut-throat capture-market-share mode so they're purposely skipping opportunities to cram ads down your throat.

    • Yes, at some point mainstream technology turned on the users. So much modern tech seems to be about exerting control or "monetizing" instead of empowering.

    • I am generally anti-capitalist, and a big reason is that I don't think capitalism, inherently and fundamentally, can become anything other than what it is now. The benefit it's provided is rarely accurately weighed against the harms, and for people who disproportionately benefit, like most here on HN, it's even harder to see the harms.

      Anti-capitalist sentiment was incredibly widespread in the US from the 19th century through the 1930s, because far more people were personally impacted, and most needed to look no further than their own lives to see it. If nothing else, capitalism has become more sophisticated at disguising its harms, and at acclimating people to them to such an extent that many become entirely incapable of seeing any harm at all, or even of imagining any other way for a society to be structured, despite humanity having existed for 100,000+ years.

So they ran 5 different experiments to test the hypothesis, and they were nothing like what I imagined.

For example, in one study they divide participants into two groups: one group watches https://www.youtube.com/watch?v=fn3KWM1kuAw (which highlights the high socio-emotional capabilities of a robot), while the other watches https://www.youtube.com/watch?v=tF4DML7FIWk (which highlights the low socio-emotional capabilities of a robot).

They are then asked whether they agree or disagree with a (presumably hypothetical?) company's proposal to reduce employees' welfare, such as replacing a meal with a shake. The two groups showed different preferences.

This makes me think about that old question of whether you thank the LLM or not. That is treating LLMs more like humans, so if what this paper found holds, maybe that'd nudge our brains subtly toward dehumanizing other real humans!? That's so counterintuitive...

  • Do you understand how they chose the two groups? And why show one group one video, and the other group the other video? Shouldn’t both groups be shown the same video, then check whether the group division method had any impact on the results? E.g. if group one was dance lovers and group two were dance haters, you wouldn’t get any data on the haters since they were shown the parkour video instead of the dance video.

    Also, interesting bit: "Participants in the high (vs. low) socio-emotional capability condition showed more negative treatment intentions toward employees"

    • Apparently you do not understand how they chose the two groups. Group identity was not based on a survey or any attribute of the participating individuals.

      Low and high socio-emotional groups refer to whether the group was shown the low or high socio-emotional video. The pre-test and exclusion based on lack of attention and instruction following was performed before group selection for each individual, which was presumably random.
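
      For what it's worth, here's a minimal sketch of that kind of between-subjects design (hypothetical Python, with made-up participant names and ratings -- not the paper's actual procedure):

        import random
        from statistics import mean

        random.seed(0)  # reproducible fake data

        # Random assignment: nothing about the individual determines
        # which video they will see.
        participants = [f"p{i}" for i in range(200)]
        random.shuffle(participants)
        high_condition = participants[:100]  # shown the high-capability video
        low_condition = participants[100:]   # shown the low-capability video

        # Hypothetical 1-7 agreement ratings with the welfare-reduction proposal.
        rating = {p: random.randint(1, 7) for p in participants}

        print("high condition mean:", mean(rating[p] for p in high_condition))
        print("low condition mean:", mean(rating[p] for p in low_condition))

      With a large enough sample, randomization balances traits like "dance lover" across both conditions, which is what makes the comparison of condition means meaningful.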

To the point of the paper, it has been a somewhat disturbing experience to see otherwise affable superiors in the workplace "prompt" their employees in ways that are obviously downstream of their (very frequent) LLM usage.

  • I started noticing this behavior a few months ago and whew. Easy to fix if the individual cares to, but very hard to ignore from the outside.

    Unsolicited advice for all: make an effort to hold onto your manners even with the robots or you'll quickly end up struggling to collaborate with anyone else.

One very new behavior is the dismissal of someone's writing as the work of AI.

It's sadly become quite common on internet forums to suppose that some post or comment was written by AI. It's probably true in some cases, but people should ask themselves how the cost/benefit to calling it out looks.

  • Unfortunately it's the correct thing to do. Just like in the past, when you shouldn't have believed any stories told on the internet, it's now reasonable to assume any image/text you come across wasn't created by a human, or, in the case of images, depicts an event that never happened.

    The easiest way to protect myself these days is to assume the worst about all content. Why am I replying to a comment in that case? Consider it a case of yelling into the void.

    • 1. A bot-generated argument is still an argument. I can't make claims about its truth or falsity based on the enunciator; that's simply ad hominem.

      2. A bot-generated image is not a record of photon-emissions in the physical world. When I look at photos, they need to be records of the physical world, or they're a creative work.

      I think you can't rationally apply the same standard to these 2 things.

    • As a person with trust issues, I find this adaptation to the change in status-quo quite natural for me.

  • My partner has become tiresome about this - even if I were to tell them that I responded to your comment on HN, they'd go "You probably just responded to a bot".

    Are bots really infiltrating HN and making constructive non-inflammatory comments? I don't find it at all plausible but "that's just what they want you to think".

    • I've seen chatgpt output here as comments for sure. In some cases obvious, in other cases borderline. I wouldn't guess that it's a major fraction of comments, but it's there.

How do you guys read through an article this fast after it's submitted? I need more than 1 hr to think this through.

Interesting point — AI can automate tasks, but we need to ensure it doesn’t strip away human judgment and empathy

  • On the opposite side (i.e. the side of what Bender called meatbags), there are a lot of jobs where judgment and empathy are not allowed. E.g. TSA agents examining babies for bombs in case they're terrorists -- they were told "You must do this to every passenger, no questions asked", and making a decision means deviating from their job description and risking losing it.

As a Black Sabbath fan, I love that they envisioned dystopian stuff like this. Check out their Dehumanizer album.

I'm unwilling to accept the discussion and conclusions of the paper because of the framing of how LLMs work.

> socio-emotional capabilities of autonomous agents

The paper fails to note that these 'capabilities' are illusory. They are a product of how the behaviors of LLMs "hack" our brains and exploit the hundreds of thousands of years of evolution of our equipment as a social species. https://jenson.org/timmy/

  • But that's beside the point of the paper. They are talking about how humans who perceive the "socio-emotional capabilities of autonomous agents" change their behavior toward other humans. Whether people get that perception because "LLMs hack our brain" or something else is largely irrelevant.

  • No, I think the thesis is that people falsely perceive agents as highly human, and as a result assimilate downward toward the agent’s bias and conclusions.

    That is the dehumanization process they are describing.

  • Your socio-emotional capabilities are illusory. They are a product of how craving for social acceptance "hacks" your brain and exploits the hundreds of thousands of years of evolution of our equipment as a social species.

  • The paper literally spells out that this is a perception of the user, and that this perception is the root of the impact.

    • Perhaps I missed it, could you help me see where specifically the paper acknowledges or asserts that LLMs do not have these capabilities? I see where the paper repeatedly mentions perceptions, but I also see right at the beginning, "Our research reveals that the socio-emotional capabilities of autonomous agents lead individuals to attribute a humanlike mind to these nonhuman entities" [emphasis added], and multiple places in the paper, for example in the section titled "Theoretical Background", subtitle 'Socio-emotional capabilities in autonomous agents increase “humanness”', LLMs are implied to have at least low levels of these capabilities, and contrasts it to the perception that they have high levels.

      In brief, the paper consistently but implicitly regards these tools as having at least minimal socio-emotional capabilities, and that the problem is humans perceiving them as having higher levels.
