Comment by albert_e

15 days ago

On the other hand ... I recently had to deal with official Microsoft Support for an Azure service degradation / silent failure.

Their email responses were broadly all like this -- fully drafted by GPT. The only thing I liked about that whole exchange was that GPT was readily willing to concede that all the details and observations I included pointed to a service degradation and failure on Microsoft's side. A purely human mind would not have so readily conceded the point without some hedging or dilly-dallying or keeping some options open to avoid accepting blame.

> The only thing I liked about that whole exchange was that GPT was readily willing to concede that all the details and observations I included pointed to a service degradation and failure on Microsoft's side.

Reminds me of an interaction I was forced to have with a chatbot over the phone for “customer service”. It kept apologizing, saying “I’m sorry to hear that” in response to my issues.

The thing is, it wasn’t sorry to hear that. AI is incapable of feeling “sorry” about anything. It’s anthropomorphizing itself and aping politeness. I might as well have a “Sorry” button on my desk that I smash every time a corporation worth $TRILL wrongs me. Insert South Park “We’re sorry” meme.

Are you sure “readily willing to concede” is worth absolutely anything as a user or consumer?

  • Better than actual human customer agents who give an obviously scripted “I’m sorry about that” when you explain a problem. At least the computer isn’t being forced to lie to me.

    We need a law that forces management to be regularly exposed to their own customer service.

    • I knew someone would respond with this. HN is rampant with this sort of contrarian defeatism, and I just responded the other day to a nearly identical comment on a different topic, so:

No, it is not better. I have spent $AGE years of my life developing the ability to determine whether someone is authentically offering me sympathy, and when they are, I actually appreciate it. When they aren’t, I realize that that person is probably being mistreated by some corporate monstrosity or is having a shit day, and I give them the benefit of the doubt.

      > At least the computer isn’t being forced to lie to me.

      Isn’t it though?

      > We need a law that forces management to be regularly exposed to their own customer service.

Yeah, we need something. I joke with my friends about creating an AI concierge service that deals with these chatbots and alerts you when a human is finally somehow involved in the chain of communication. What a beautiful world, where we’ll be burning absurd amounts of carbon in some sort of antisocial AI arms race to try to maximize shareholder profit.


  • > Are you sure “readily willing to concede” is worth absolutely anything as a user or consumer?

The company can't have it both ways: either they have to admit the AI "support" is bollocks, or they are culpable. Either way they are in the wrong.