Comment by eschneider

1 year ago

Do a search for 'UnitedHealth uses faulty AI to deny elderly patients' and see what comes up.

I'm not saying this is deserved, more that I'm surprised it's taken this long for someone to just up and execute an insurance CEO.

I read the title as "finally shot" before my morning coffee. ("Finally", not "fatally".)

Well, it’s very unfortunate for all Americans who rely on any healthcare company.

Since they will all be increasing their rates soon to cover the expenses of 24/7 armed guards, armored vehicles, etc…

Edit: Or a reduction in service quality to cover the new expenses.

  • As someone who endured UHC for several years, I assure you there is no service quality to reduce.

    > How do they still have that many customers if their service was so bad?

    Because the people making the decision to purchase UHC services and using UHC services are two wildly different demographics.

[flagged]

  • No, but this is a case where incorrect AI decisions may legitimately contribute to people's deaths. Let's also not normalize the idea that it's OK for people to die so one of the most profitable companies in the world can make even more money.

  • I'm not saying that you're wrong, or that I feel that killing is OK, but as others have expressed throughout this thread, it feels weird to say stuff like that in regards to a company that itself has normalized letting people die for the sake of higher profits.

  • Or you could say let's not normalize broken AI in health care related systems, but potato, potahto, I guess.

    • AI in health care (and pretty much any sector where it could kill someone) should be strictly regulated.

  • This is not even remotely close to the "big issue" with UHG. There's probably no individual company that's responsible for more dysfunction in the American health system than UHG.

    Not that I think it justifies murdering the CEO, but such is the nature of systematically violating massive numbers of people's sense of justice.

  • I think we can be pretty confident that he wasn't shot because an AI product wasn't accurate.

  • I doubt anyone on HN would have any interest in normalizing that practice. But almost everyone who is wronged by these systems is going to be up in arms.

    I think we'll see a lot more of this sort of thing in the future. Your car killed my mother and the law said it was fine. Your insurance company denied my grandma's claim and she died in agony after paying premiums for 30 years.

    Let's just hope that autonomous drones don't become trivially easy to weaponize. At that point, everyone from the President and your local police chief to the chairman of Bank of America and the local ambulance-chasing lawyer who sued the wrong guy's mom after she hit someone in a car accident would be in a very bad situation.

    We should really try to get this under control before it gets out of hand.

    • Serious question, why exactly should we work so hard to insulate the ultra-rich from the consequences of their actions? They are already virtually completely insulated, whereas everybody else is held to the standard which should be reserved for those with the education to know better and the resources to do better.

  • Nobody was killed because an AI product was inaccurate. If AI played a role at all, this CEO was killed for killing someone's family member by denying them healthcare.

    You're not expected to have a faultless AI but you're expected to supervise it, to have an appeal process, and to make things right when AI makes mistakes. In other words, this is a "high risk system" under the EU AI Act, which should have appropriate safeguards in place.

    > Human oversight shall aim to prevent or minimise the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular where such risks persist despite the application of other requirements set out in this Section.

    https://artificialintelligenceact.eu/article/14/

  • I don't think the parent poster is doing that; I think they are pointing out that when AI products are faulty and result in the predictable death or suffering of people, someone out there might get angry enough to make bad choices.

Per Reddit: "One is a high powered assassin whose livelihood depends on his ability to rationalize beyond emotion to calculate the cost of a life. The other guy is still alive."

Edit: I would love to make $20/minute every day finding ways to drive people into medical bankruptcy, despair, and death, just like him, because being rich is awesome. :)