Comment by jqpabc123

21 hours ago

AI is a liability issue waiting to happen. And this is just another example.

It's the opposite: it's absolution from liability. "The AI did it" is the ultimate excuse for avoiding responsibility and consequences.

It’s a tool. Used incorrectly, it will lead to errors. Just like a hammer, which used incorrectly could hit the user's finger.

  • There is enormous variability in how hard a tool is to use correctly, how likely it is to go wrong, and how severe the consequences are. AI has a wide range on all those variables because its use cases vary so widely compared to a hammer.

    The use case here is police facial recognition. Not hitting nails. The parent wasn't saying "AI is a liability" with no context.

    • When somebody uses a tool to hurt somebody, they need to be held accountable. If I smack you with a hammer, that needs to be prosecuted. Using AI is no different.

      The problem here is incidental to the tool; it was done by the cops and therefore nobody will be held accountable.

      2 replies →

  • This tool, however, is specifically built for mass surveillance. It serves no other purpose. The tool is broken, and everybody knows it. The tool makers are at least as guilty as those who use it.

• The tool is unethical, not broken. And it unfortunately remains legal for the time being. As such, it's a social or political problem that can be fixed.

    • The tool, like Google search, is likely biased towards returning results regardless of confidence.

• > Used incorrectly will lead to errors.

    Only one small problem: there is no way to tell if you are using it "correctly".

    The only way to be sure is to not use it.

    Using it basically boils down to, "Do you feel lucky?".

    The Fargo police didn't get lucky in this case. And now the liability kicks in.

    • Some basic investigatory police work (the kind they did before AI) would have revealed the mistake before an innocent woman’s life was destroyed.

      7 replies →

    • Look, I'm generally considered AI's most vociferous detractor.

      But...

      > there is no way to tell if you are using it "correctly".

      This simply isn't true, at least in cases like this.

      I know common sense isn't really all that common, but why would you give more credence to an untested tool than an untested crack-addled human informant?

      The entire point of the informant, or the AI in this instance, is to generate leads. Which subsequently need to be checked.

      3 replies →

  • What kind of outcome results from misuse? Clearly a hammer's misuse has very little in common with a global, hivemind network used in high-stakes campaigns.

    Now, if I misused a hammer and it hurt everyone's thumb in my country, then maybe what you said would have some merit.

    Otherwise, I'd say it's an extremely lazy argument.

  • AI feels closer to a firearm than a hammer when assessing law enforcement's ability to quickly do massive, unrecoverable harm.

  • Unlike with hammers, people preface things with "Claude says", etc. I never see that kind of distancing with tools that aren't AI.