Comment by cjs_ac

15 hours ago

At some point, a publicly-listed company will go bankrupt due to some catastrophic AI-induced fuck-up. This is a massive reputational risk for AI platforms, because ego-defensive behaviour guarantees that the people involved will make as much noise as they can about how it's all the AI's fault.

That will never happen: AI can't be allowed to fail, so we'll be paying for the AI bail-out.

Do you really want these kinds of companies to succeed? Let them burn tbh

  • I don't find comments along the lines of 'those people over there are bad' to be interesting, especially when I agree with them. My comment is about why it'll go wrong for them.

I see the inverse of that happening: every critical decision will incorporate AI somehow. If the decision turns out well, leadership takes the credit; if something terrible happens, blame it on the AI. I think that's the part no one is saying out loud: AI may not do a damn useful thing, but it can serve as a free insurance policy, or a scapegoat to throw under the bus when SHTF.

  • This works at most one time. If you show up to every board meeting and blame AI, you’re going to get fired.

    The same is true if you blame a bad vendor, or something you don't control at all, like the weather. Your job is to deliver. If bad weather is the new norm, you'd better figure out how to build circus tents so you can do construction in the rain. If your AI call center is failing, you'd better hire 20 people to answer phones.