Comment by 0xC0ncord
9 days ago
>Scott Hennessey, the owner of the New South Wales-based Australian Tours and Cruises, which operates Tasmania Tours, told the Australian Broadcasting Corporation (ABC) earlier this month that "our AI has messed up completely."
To me this is the real takeaway for a lot of these uses of AI. You can put in practically zero effort and get a product. Then, when that product flops or even actively screws over your customers, just blame the AI!
No one is admitting it, but AI is one of the easiest ways to shift blame. Companies have been doing this ever since they went digital. Ever heard of "a glitch in the system"? Well, now with AI you can have as many of those as you want, STILL never accept responsibility, and if you look to your left and right, everyone is doing it and no one is paying the price.
Yes, it's a big problem. I call it "agency laundering" and I first mentioned it in this article last year: https://arstechnica.com/information-technology/2025/08/is-ai...
Treating AI models as autonomous minds lets companies shift responsibility for tech failures.
Wait until your local police force has fully autonomous lethal robots on the streets.
This one isn't actually inevitable in the near term. Lethal robots policing the streets isn't something that can just sneak up on us[0] - it's a pretty clear-cut civic issue affecting everyone, so excepting hardcore autocracies with no vertical accountability[1], the public can push such ideas back indefinitely[2].
It's hard to "agency launder" a killer robot when it's physically patrolling a public square.
--
[0] - Except maybe through privatization of law enforcement, which could be more gradual - think police outsourcing more work to private security companies, which in turn decide to "pioneer innovative solutions to ensure personal safety" by giving weapons to mall security patrol robots and putting them out on the streets - but it'll still be pretty obvious what's happening.
[1] - A cursory search suggests this is the correct term for the idea I'm thinking of, which is how much the people in power have to, in practice, take their subjects' reactions into account.
[2] - Well, at least until armed forces of multiple countries start using autonomous robots as ground infantry, and over the years, normalize this idea in the minds of civilians.
> No one is admitting it but AI is one of the easiest ways to shift blame.
Similar to what Facebook, Google, Twitter/X, TikTok, etc. have been doing for a long time with the platform excuse: "We are just a platform. We are not to blame for all this illegal or repugnant content. We do not have the resources to remove it."
There's a book, "The Unaccountability Machine", that HN may be interested in. It takes a much broader approach across management systems.
That famous Bible verse, "there is nothing new under the sun", comes to mind. Even most of the problems with computers and computer systems - especially distributed ones - and with information processing, plus all the problems at the interface layer between those systems and people, are things we've been dealing with for hundreds of years. For many of them we've even developed effective solutions that most people don't realize exist.
It takes a little frame shift to see this: one has to realize that bureaucracy is a computing system, built on a runtime made of people instead of silicon, storing data on forms and documents, invoking procedure calls through paper shuffling, executing programs written in legalese, as rules and procedures and laws.
Accountability shifting? "The program won't let me do that" is just a new, more intense flavor of "this is company/government policy". The underlying goals remain the same - building a reliable system from unreliable parts, a system that realizes some goal, while maintaining control of and visibility into it, all without having to personally micromanage every aspect. Introducing computers into bureaucracy didn't change its fundamental nature; making the process more robust and reducing endpoint variation (i.e. the individual autonomy of the workers) just makes it scale better.
Hell, even AI - at least at this point[0] - isn't really a new thing either. Once you allow yourself to anthropomorphize LLMs a bit and realize they are effectively "People on a Chip", it becomes clear what their role in a computing system is, and that we already have experience dealing with their flaky, unreliable nature.
And from that perspective, it's clear as day that a company blaming AI for a fuckup is just the most recent flavor of shifting blame to a subcontractor.
--
[0] - Things will meaningfully change if and when we get to the point of AIs being given moral or legal status as people. Though in all honesty, this wouldn't be a completely new situation either - more like a new take on social and political issues humanity has been dealing with ever since the first two ancient tribes found themselves contesting the same piece of land.
It sounds like in this case there was some troll-fueled comeuppance.
> “We’re not a scam,” he continued. “We’re a married couple trying to do the right thing by people … We are legit, we are real people, we employ sales staff.”
> Australian Tours and Cruises told CNN Tuesday that “the online hate and damage to our business reputation has been absolutely soul-destroying.”
This might just be BS, but at face value, this is a mom-and-pop shop that screwed up playing the SEO game and is getting raked over the internet coals.
Your broader point about blame-washing stands though.
That's the thing about scammers: they operate in plausibly deniable ways, like covering up malice with incompetence. They make taking things at face value increasingly costly for the aggrieved.
No, this is earned. They chose to do this, to publish lies, and have to live with the consequences.
Commercial enterprises seem designed to launder responsibility; this is perhaps the ultimate version of that system.
It doesn't always work out, though
https://www.theguardian.com/world/2024/feb/16/air-canada-cha...
I somewhat disagree, because at the end of the day he still has to take responsibility for the fuckup, and that will matter in terms of dollars and reputation. I think this is also why a lot of roles just won't speed up that much: the bottleneck will be verification of outputs, because it is still the human's job on the line.
An on-the-nose example: if your CEO asked you for a report and you delivered fake data, do you think he would be satisfied with the excuse that the AI got it wrong? Customers are going to feel the same way: AI or human, you (the company, the employee) messed up.
> dollars and reputation
You're not already numb to data breaches and token $0.72 class action payouts that require additional paperwork to claim?
In this article, these people did zero confirmatory diligence and got an afternoon side trip out of it. There are worse outcomes.
> if your CEO asked you for a report, and you delivered fake data, do you think he would be satisfied with the excuse that AI got it wrong?
He was likely the one who ordered the use of the AI. He won't fire you for mistakes in using it because it's a step on the path towards obsoleting your position altogether or replacing you with fungible minimum wage labor to babysit the AI. These mistakes are an investment in that process.
He doesn't have to worry about consequences in the short term because all the other companies are making the same mistakes and customers are accepting the slop labor because they have no choice.
He forgot model market fit: https://www.nicolasbustamante.com/p/model-market-fit
I hope that this will result in people paying a premium for human curation and accountability, but I won't hold my breath.
I imagine it's already happening, but not at price points most of us would ever afford.
I.e., I'm not really going to pay lots of money to, say, 1) find a doctor who does not use AI as part of their work, and 2) legally/contractually enforce that this is the case. However, I can imagine a government agency or a large company contracting out to some think tank or research organization, and paying through the nose for a legally binding guarantee that no AI will be used as part of that work.