Comment by 0xC0ncord

4 hours ago

>Scott Hennessey, the owner of the New South Wales-based Australian Tours and Cruises, which operates Tasmania Tours, told the Australian Broadcasting Network (ABC) earlier this month that “our AI has messed up completely.”

To me this is the real takeaway for a lot of these uses of AI. You can put in practically zero effort and get a product. Then, when that product flops or even actively screws over your customers, just blame the AI!

No one is admitting it but AI is one of the easiest ways to shift blame. Companies have been doing this ever since they went digital. Ever heard of "a glitch in the system"? Well, now with AI you can have as many of those as you want, STILL never accept responsibility, and if you look to your left and right, everyone is doing it, and no one is paying the price.

> No one is admitting it but AI is one of the easiest ways to shift blame.

Similar to what Facebook, Google, Twitter/X, TikTok, etc. have been doing for a long time with the platform excuse: "We are just a platform. We are not to blame for all this illegal or repugnant content. We do not have the resources to remove it."

There's a book, "The Unaccountability Machine", that HN may find interesting. It takes a much broader approach, looking across management systems.

It sounds like in this case there was some troll-fueled comeuppance.

> “We’re not a scam,” he continued. “We’re a married couple trying to do the right thing by people … We are legit, we are real people, we employ sales staff.”

> Australian Tours and Cruises told CNN Tuesday that “the online hate and damage to our business reputation has been absolutely soul-destroying.”

This might just be BS, but at face value, this is a mom-and-pop shop that screwed up playing the SEO game and is getting raked over the internet coals.

Your broader point about blame-washing stands though.

  • That's the thing about scammers, they operate in plausibly deniable ways, like covering up malice with incompetence. They make taking things at face value increasingly costly for the aggrieved.

  • No, this is earned. They chose to do this, to publish lies, and have to live with the consequences.

Commercial enterprises seem designed to launder responsibility; this is perhaps the ultimate version of that system.

I somewhat disagree, because at the end of the day he still has to take responsibility for the fuckup, and that will matter in terms of dollars and reputation. I think this is also why a lot of roles just won't speed up that much: the bottleneck will be verification of outputs, because it is still the human's job on the line.

An on-the-nose example: if your CEO asked you for a report and you delivered fake data, do you think he would be satisfied with the excuse that the AI got it wrong? Customers are going to feel the same way. AI or human, you (the company, the employee) messed up.

  • > dollars and reputation

    You're not already numb to data breaches and token $0.72 class action payouts that require additional paperwork to claim?

    In this article, these people did zero confirmatory diligence and got an afternoon side trip out of it. There are worse outcomes.

  • > if your CEO asked you for a report, and you delivered fake data, do you think he would be satisfied with the excuse that AI got it wrong?

He was likely the one who ordered the use of the AI. He won't fire you for mistakes in using it, because it's a step on the path toward obsoleting your position altogether, or replacing you with fungible minimum-wage labor to babysit the AI. These mistakes are an investment in that process.

    He doesn't have to worry about consequences in the short term because all the other companies are making the same mistakes and customers are accepting the slop labor because they have no choice.

I hope that this will result in people paying a premium for human curation and accountability, but I won't hold my breath.