
Comment by rglover

18 days ago

> How is this the fault of AI? It flagged a possible match. A live human detective confirmed it.

Because we're seeing the first instances of what reality looks like with AI in the hands of the average bear. Just like the excuse was "but the computer said it was correct," now we're just shifting to "but the AI said it was correct."

Don't underestimate how much authority and thinking people will delegate to machines. Not to mention the lengths they'll go to weasel out of taking responsibility for a screw up like this (saw another comment in this thread about the Chief of Police stepping down but it being framed as "retirement").

It's only recently that some have come to terms with the fact that DNA evidence sometimes returns false positives. Society, and law enforcement, assumed that DNA was infallible. Apparently no one wondered whether millions of people could really be reduced to a tiny number of genetic markers without any overlap.

Danish police had to redo 20,000 DNA tests with a larger set of markers being tested, because they had jailed someone based solely on a DNA test and hadn't considered that they might have gotten the wrong person despite the DNA match. It's essentially a human hash collision.
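The "hash collision" framing can be made concrete with a back-of-the-envelope calculation. All numbers below are illustrative assumptions, not the actual Danish figures: the point is just that the expected number of person-pairs whose profiles coincide by chance grows with the square of the database size.

```python
from math import comb

def expected_chance_matches(population: int, markers: int,
                            per_marker_match_prob: float) -> float:
    """Expected number of person-pairs whose DNA profiles collide by chance.

    Toy model only: it assumes independent markers with a uniform
    per-marker match probability, which real loci don't satisfy.
    """
    # P(two random people match on every marker)
    p_pair = per_marker_match_prob ** markers
    # number of distinct pairs, times the per-pair collision probability
    return comb(population, 2) * p_pair

# With 10 markers at ~1-in-10 odds each, a 5-million-profile database
# still produces on the order of a thousand chance collisions...
print(expected_chance_matches(5_000_000, 10, 0.1))   # ≈ 1250
# ...while widening the set to 16 markers all but eliminates them,
# which is the point of re-testing with a larger marker set.
print(expected_chance_matches(5_000_000, 16, 0.1))   # ≈ 0.00125
```

The same birthday-paradox arithmetic applies to facial recognition run against millions of faces: even a very accurate matcher produces absolute numbers of false hits large enough that a match alone can't carry a case.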

Identification by AI is going to be the same, except worse, because it's frankly less scientific. Law enforcement, the judicial system, and especially the public are simply too uninterested in learning the limitations of these types of systems. Even in the more civilized parts of the world, police would love to just have the computer tell them who to pick up and where.

Not the first instance.

This was 2023 https://www.youtube.com/watch?v=lPUBXN2Fd_E&t=19s

A man in the USA was arrested at a casino because the casino's facial recognition software said he had been trespassed there before. He hadn't. I think there were height and eye colour differences. The police still arrested him and booked him, and I think the prosecutors even took it to trial.

I'm sorry but this is a piss-poor excuse. When I use Claude Code and it ships broken features, I'm 100% responsible.

Why are cops not treated the same way? OP is right, AI is totally irrelevant in this story.

If the point is "cops can't be trusted," why do they have GUNS?! AI is the least of your problems.

I feel like I'm going crazy with this narrative.

  • > I feel like I'm going crazy with this narrative.

    We're only getting warmed up. There are programmers on HN that will take the output of their favorite AI, paste it and run it. And we're supposed to be the ones that know better.

    What do you think an ordinary person is going to do in the presence of something they can relate to nothing else except an oracle, assuming they even know the term? You put anything in there and out pops an extremely polished-looking document, better than whatever you would put together yourself, full of juicy language geared up to make you believe the payload. And it does that in a split second. It's absolutely magical to those in the know, let alone to those who are not.

    They're going to fall for it, without a second thought.

    And they're going to draw conclusions from it that you'd think deserved a little skepticism. Too late now.

  • When you foster a culture of impunity and passing the buck, don't be surprised when they pass the buck to the inscrutable black box they bought.

    You might even argue that's the purpose of the inscrutable black box.

  • The “I” in “AI” stands for “intelligence”. Cops are using AI facial recognition because it is being sold to them as being smarter and better than what they are currently capable of. Why are we then surprised that they aren’t second-guessing the technology?

    • AI facial recognition is smarter than what they are capable of. That's not the issue. It is much faster than a human, and state-of-the-art models make fewer errors than a human (though the types of errors are not the same).

      The issue is that facial recognition is just not very reliable. Not for humans and not for machines. If you look at millions of people, some of them just look incredibly similar. Yet police apparently thought that was all the evidence they would ever need. A case so watertight there's no point in even talking to the suspect.

      1 reply →

    • > The “I” in “AI” stands for “intelligence”

      By that logic, the two "I"s in "Siri" make it twice as intelligent.

    • Because they are supposed to possess minimum levels of intelligence found in homo sapiens, which includes not believing anything a salesperson says.

      Also, their whole job is dealing with people who constantly lie to them.

      8 replies →

  • As soon as we start to see a pattern of shitty vibe-coded software actually harming people via defects etc. (see: therac-25), I would hope that the conversation is about structural change to mitigate risk in aggregate rather than just punitive consequences for the individual programmers who are "responsible". The latter would be a fantastically stupid response and would do little or nothing to reduce future harm.

    • All accountability need not be punitive; we can certainly talk about systemic guardrails. What I find unbelievable is someone claiming that the Chief of Police saying "We are not going to talk about that today" is not the biggest scandal here, but the AI is.

      7 replies →

  • You are right, IMO, to question why North Dakota police were able to obtain custody of this Tennessean woman in the first place; you'd think something like that should require far more substantial evidence than facial recognition.

    But, then what good is facial recognition for? Would it have been okay for this woman’s life to have been merely invaded because she matched a facial recognition system? Maybe they can just secretly watch you so you’re not consciously aware of being investigated? Should that be our new standard, if a computer thinks you look like a suspect you can be harassed by police in a state you’ve never even been in?

    I just don’t see a legitimate way for AI to empower officers here without risking these new harms. That’s why I lean towards blaming the AI tech, rather than historically intractable problems like the reality of law enforcement.

    • Having a facial recognition match make you a suspect and cause the police to ask you some questions doesn't seem completely unreasonable to me. Investigations can certainly begin with weak forms of evidence (like an anonymous tip), you just require a higher standard of evidence for a search warrant, surveillance, or an arrest. A facial recognition match shouldn't be probable cause for an arrest warrant, but it still might be a useful starting point for a detective looking for actual evidence.

      4 replies →

  • I feel like I'm going crazy that anyone tries to suggest the AI and the producers and promulgators and apologists of AI played no part and bear none of the responsibility in this narrative.

    • Because the responsibility lies on the part of the criminal justice system who used the flimsy AI facial recognition evidence to arrest and hold her for months. If AI didn't exist, and this same incident happened because a human looked at a photograph of the woman and said "I think this might be the same person who committed the crime in the video", it would be insane to blame the people who invented photographs or video recording for her arrest.

      1 reply →

  • You are exactly correct. Cops cannot be trusted. We spent a lot of time pointing that out in 2020. AI is the least of our problems with policing.

    Unfortunately, a lot of people are certain it won't happen to them, and it has been practically impossible to establish any kind of accountability. It has only gotten worse since 2020.

    • Are we just going to pretend the wide rollout of bodycams hasn't shown that the overwhelming majority of the time the cops weren't in the wrong, to the point that the same people who demanded them now want them gone?

      1 reply →

  • But it's not totally irrelevant in this story.

    Cops are already susceptible to confirmation bias, and for "efficiencies" they are delegating part of their job to apparently magical tools that will only increase their confirmation bias. And because it is for efficiency you can bet they won't be given extra time to validate the results.

    What or who is at fault isn't either/or, it's a bunch of compounding factors.

  • You’re on the right track here, but I don’t think it should be hand-waved away as “the least of your problems”: it’s yet another weapon that police in the USA can use against the population with impunity. They’re going to have to reckon with all of this in the coming years: cops having guns and armored cars, “qualified immunity,” the “stop resisting” workaround for brutality, and now this AI.

  • > Why are cops not treated the same way? OP is right, AI is totally irrelevant in this story.

    It's absolutely absurd. Arguing that AI is the problem is itself a way of shedding responsibility onto the machines; the people making that argument are, philosophically, the same people who will say it was the AI's fault.

    The thing it most reminds me of is people trying to stop the deaths and injuries that result from "swatting" by being really angry at people who "swat" and proposing the harshest punishment for it that they can come up with (or that outdoes anyone else's in the thread).

    The problem with swatting is that police were showing up to the houses of harmless people based on anonymous phone tips and murdering them. You guarantee swatting will work indefinitely when you indemnify the cops.

    You don't need AI for injustice in the US justice system. There is literally no part of the US justice system that makes sense at all, and even in the best case scenario when the guilty are caught, tried, and punished, it is tremendously wasteful, cruel, and ass-backwards. Juries are basically the AI of the US justice system, allowing the prosecutorial and enforcement apparatus to be infinitely cruel, illogical, self-serving and incompetent. 12xFull AGI. AI couldn't do any worse.

    > I feel like I'm going crazy with this narrative.

    You're not alone.

  • You’re going crazy because up until this exact moment you’ve never had to confront the reality that these tools, placed into the hands of the common man, are viewed as authoritative and lack any accountability or consequence for misuse.

    For anyone who has been victimized by law enforcement or governments before, we’ve been warning about this shit for decades. About the lack of consequence for police brutality. The lack of consequence for LPR abuse. The lack of consequence for facial recognition failures and AI mismatches.

    You need to understand that by using these systems correctly and holding yourself accountable, you are in the minority. Most people do not think that critically, and are all too happy to finger the computer when things go badly.

    And until you accept that, and work to actually hold folks accountable instead of deflecting blame away from the tool, then this won’t actually change.

  • It's called qualified immunity. Many support its repeal. I hope you join them, and convey the same to your local representatives and candidates. Until it is reformed, few if any officers or administrators of criminal justice in the United States will ever feel any type of accountability.

    Short of video evidence of a blatant gun-to-the-back-of-the-head-style homicide, qualified immunity means most law enforcement officials are never held accountable for their miscarriages of justice. Criminal charges against officers are exceedingly rare. She should be able to sue this detective directly. Of course she can sue the government too, and should. But without any personal consequences for the people carrying out these acts, taxpayers will continue to bail out these practices without ever noticing. Your own government should not be a shield for a police officer who has violated you or your neighbors.

    • > Many support its repeal.

      There's nothing to repeal. Qualified immunity is a doctrine that the judicial branch made up out of thin air, with no legislative backing.

      But agreed, we need legislatures to write laws that expressly hold police accountable, and declare that they are not shielded from liability when things go wrong due to their own failures and negligence.

      1 reply →

    • > Short of video evidence of blatant gun to the back of the head style homicide qualified immunity means most law enforcement officials are never held accountable for their miscarriages of justice.

      And frequently not even then.

  • You can hold someone responsible only after they've actually fucked up. And with the way things move in the criminal justice system, that can take months to discover. Holding them responsible doesn't really fix anything, it's purely reactive.

  • Dude, not sure which team you're working on, but across many, many domains (corporate, business, and political) people are already delegating full decision-making and responsibility to AI. Unless national governments and standards institutions create and enforce ironclad AI governance laws, situations resembling what this poor granny went through are going to occur again and again.

    • There’s money to be made selling AI plausible deniability machines that allow end users to enact unethical policies while evading accountability, but only if all moral responsibility ostensibly falls on the end user and none on the dealer.

  • I mean, this is the USA we're talking about. Cops are given huge authority over everyone else, with poor accountability. AI just lets them pretend to be even less accountable. And by "pretend" I of course mean "get away with it".

  • See, AI was used to accelerate the arrest and jailing, but not to follow through. It was not used to ensure her well-being. Clearly this demonstrates that AI contributes to treating humans inhumanely, and demonstrably AI is not being used to improve anyone's quality of life. Stop making excuses along the lines of "AI is not at fault here".

It's not even just incompetence, but malice. "AI says so" is going to be the perfect catch-all excuse for literally everything anyone might want to do that they shouldn't. You know how techbros love to excuse every horrifying outcome of their torment nexi with "don't blame me, the algorithm did it"? It's going to be like that, but now everyone can do it.

  • It's also why people start parroting the phrase "the purpose of a system is what it does". Look at where we are right now: a precipice before this becomes widely used in all forms of policing. We still have a chance to police the police's use of the AI.

    The purpose of using AI to identify suspects in criminal cases is to ease the burden of manual searching for a suspect (or insert whatever the purpose of statement you want). Ok, but we're getting false positives that are damaging people's lives already in the early stages. And I don't want to hear "trust me bro, it will get more accurate" as an excuse to not regulate it.

    At a minimum, we should enshrine the right to appeal AI and have limits on how it can be used for probable cause.

    This isn't even the only recent case of this happening. There was another case of mistaken identity due to AI. [0] Sure, 4 hours isn't the same as 5 months, but still, this guy offered multiple forms of ID to prove who he was! The bodycam footage was posted a few months back but never got traction here.

    Like if the police officer can't read numbers, they can't do breathalyzer tests on people. If the AI can't be used responsibly, then it can't be used at all.

    [0]: https://www.youtube.com/watch?v=lPUBXN2Fd_E

So what? There were false arrests and convictions made by misuse of line-ups, DNA, eye-witnesses, photos, bloodstains, fingerprints, etc. since forever. You must also blame all those other technologies, so what do you think the police should use to find suspects? In your view, the more help police have, the worse a job they'll do. Is that actually the trend?

  • With all the other kinds of proof you mentioned, there was always a human putting their signature on it.

    Now that they can blame "AI," no specific officer will ever take the blame. If no one is responsible, there will be many more false positives.

    And false positives destroy lives.

    • > With all other proof you mentioned, there was always a human putting his signature.

      There was a human doing that in this case; AI doesn’t initiate charges. “In his charging document, the detective wrote that Lipps appeared to be the suspect based on facial features, body type and hairstyle and color.”

      2 replies →

  • So what???

    This woman lost most of her material possessions and was terrorised by "goons"... The police do this stuff regularly, as black people, immigrants, "white trash," etcetera know well. Another opportunity, presented by AI models, for more routine police oppression.

    As the wise singer said: "Fuck the police!"

    • Exactly, it's the police's fault, as well as the wider system they operate in that enables that kind of abuse, and they do it anyway even without AI.

      6 replies →