Police used AI facial recognition to wrongly arrest TN woman for crimes in ND

1 month ago (cnn.com)

Without even looking at the AI part, I have a single question: Did anybody investigate? That's it.

Whether it's AI that flagged her, a witness who saw her, or her IP address appearing in the logs: did anybody bother to ask her, "Where were you the morning of July 10th between 3 and 4 pm?" But that's not what happened. They saw the data and said, "We got her."

But this is the worst part of the story:

> And after her ordeal, she never plans to return to the state: “I’m just glad it’s over,” she told WDAY. “I’ll never go back to North Dakota.”

That's the lesson? Never go back to North Dakota. No, challenge the entire system. A few years back it was a kid accused of shoplifting [0]. Then a man dragged while his family was crying [1]. Unless we fight back, we are all guilty until cleared.

[0]: https://www.theregister.com/2021/05/29/apple_sis_lawsuit/

[1]: https://news.ycombinator.com/item?id=23628394

  • The thing about the legal system is there's no incentive to investigate to find the truth.

    The incentive is to prosecute and prove the charges.

    Speaking from the experience of being falsely accused after calling 911 to stop a drunk woman from driving.

    The narrative they "investigated" was so obviously false that bodycam evidence directly contradicted multiple key facts. Officials are interested only in proving the case. Thankfully the jury came to the right verdict.

    • There's a judge down in Texas, Dallas area I believe, who is on social media a lot because he will excoriate prosecutors who bring BS into his courtroom. He's not soft on crime but hard on rights and process. If a defendant did the wrong thing, he will have the appropriate amount of sympathy, down to zero. At times he will tell them: we all know you got lucky here, do better. But he won't let prosecutors skate by on garbage charges or statements or investigations by police. Which leads to my primary point, at least for this discussion in particular:

      To me the scariest part of this as a process is how many times (I’d casually estimate at least 75%) it is blindingly obvious that the prosecutor has not read the statement of charges or officer statements until everyone is in front of the judge. I get on one hand this judge seems to often be handling probable cause hearings but so many of these should never have resulted in any paperwork being turned in to the prosecution, let alone anyone having to show up in court.

      3 replies →

    • There need to be consequences for shitty, procedure-ignoring police work. Period.

      Minimum 1 year of jail time for grossly wrongful arrests that could be avoided with standard procedure or investigation tactics that were not applied.

      14 replies →

    • > The thing about the legal system is there's no incentive to investigate to find the truth.

      The truth is much more complicated and involves politics. For example, Seattle (and possibly other cities?) enacted a law that requires paying damages for bringing certain types of charges that turn out to be wrong. But that has resulted in some widely publicized cases where the prosecutor erred by being overly cautious.

      9 replies →

    • > The narrative they "investigated" was so obviously false, bodycam evidence directly contradicted multiple key facts. Officials are interested only seeking to prove the case. Thankfully the jury came to the right verdict.

      I don't get it, if they only care about prosecuting and proving the case, wouldn't they go by the bodycam evidence? They didn't prove the case. Maybe if their incentive was to prosecute and prove the charges, they'd go by the obvious evidence. Or am I missing something here?

      1 reply →

    • There is an incentive. It's called fraud by negligence. I'm hoping she sues everyone here.

      That seems to be in the realm of possibility here, if I am understanding things correctly (imo).

  • Society went through the necessary lessons with DNA and fingerprints. Putting people in jail because the computer produced a match is a terrible idea, especially when it's done by a proprietary black box where no one really understands why it claims there is a match. It can be used as an investigative tool to give investigators a hint toward finding more substantial clues, but using it like in fiction, where the computer acts as the single source of truth, is terrible for society and justice.

    A month or so ago, people on HN discussed facial recognition for identifying victims and perpetrators in child exploitation material, and some were complaining that Meta did not allow it fast enough. Neither the article nor the people in that discussion drew any connection to the kind of failure described here. People seemingly want to think the lesson is "never go back to North Dakota," as that is a much easier lesson than considering false positives in detection algorithms and their impact on a legal system that is constrained in budget, time, training, and incentives.

  • Yes, of course someone should have investigated, but the larger point here is that people don’t because they are being sold a false narrative that AI is infallible and can do anything.

    We could sit here all day arguing “you should always validate the results”, but even on HN there are people loudly advocating that you don’t need to.

    • I don't think people on HN believe "AI is infallible"; I think they believe AI is sufficient for "most tasks." In the context of HN, "most tasks" refers to programming tasks, not arresting-and-jailing-people tasks.

      You should always validate the results, but there is an inherent difference between an AI-generated tool for personal use and a tool which could be used to destroy someone's life.

      2 replies →

    • Where are you seeing people being told that AI is infallible? AI is being hyped to the moon, but "infallible" is not one of the claims.

      To the extent people trust AI to be infallible, it's just laziness and rapport (AI is rarely if ever rude without prompting, nor does it criticize extensive question-asking as many humans would, it's the quintessential enabler[1]) that causes people to assume that because it's useful and helpful for so many things, it'll be right about everything.

      The models all have disclaimers that state the inverse. People just gradually lose sight of that.

      [1] This might be the nature of LLMs, or it might be by design, similar to social media slop driving engagement. It's in AI companies' interest to have people buying subscriptions to talk with AIs more. If AI goes meta and critiques the user (except in more serious cases like harm to self or others, or specific kinds of cultural wrongthink), that's bad for business.

      3 replies →

  • I think you missed many important points.

    "The trauma, loss of liberty, and reputational damage cannot be easily fixed," Lipps' lawyers told CNN in an email.

    That sounds a LOT like a statement you make before suing for damages, not to mention they literally say "Her lawyers are exploring civil rights claims but have yet to file a lawsuit, they said."

    This lady probably just wants to go back to normal life and get some money for the hell they put her through. She has never been on an airplane before; I doubt she is going to take on the entire system like you suggest. "Challenge the entire system" is easier said than done. What does that even mean, exactly?

    • It was worse than that, per the reporting from an earlier story[0]:

        ...Unable to pay her bills from jail, she lost her home, her car and even her dog.
      

      There is not a jury in the country that will side against this woman. I am not even sure who will make the best pop culture mashup: John Wick or a country songwriter?

      (Also, what happened to journalism? No Oxford comma?)

      [0] https://news.ycombinator.com/item?id=47356968

      15 replies →

    • The real problem here is she'll get money, who knows how much, but that ultimately does nothing to actually address the problems in the system.

      Effectively it just raises taxes to cover the cost of these failed prosecutions.

      Every time one of these cases happens, a cop and a prosecutor should be out of a job permanently. Possibly even jailed. The false arrest should cost the cop their job and get them blacklisted; the failed prosecution should cost the prosecutor their right to practice law.

      And if the police union doesn't like that and decides to strike, every one of those cops should simply be fired. Much like we did to the ATC. We'd be better off hiring untrained civilians as cops than to keep propping up this system of warrior cops abusing the citizens.

      7 replies →

  • > Whether it's AI that flagged her

    It absolutely was. There's no question of this. Now we need to ask: how was the system marketed, what did the police pay for it, and how were they trained to use it?

    > did anybody bother to ask her, "Where were you the morning of July 10th between 3 and 4 pm?"

    Legally that amounts to "hearsay" and cannot have any value. Those statements probably won't even be admissible in court without other supporting facts entered first.

    > we are all guilty until cleared.

    This is not a phenomenon that started with AI. If you scratch the surface, even slightly, you'll find that this is a common strategy used against defendants who are perceived as not being financially or logistically capable of defending themselves.

    We have a private prison industry. The line between these two outcomes is very short.

    • > Legally that amounts "hearsay" and cannot have any value. Those statements probably won't even be admissible in court without other supporting facts entered in first.

      I just want to understand your argument: you believe that any alibi provided is hearsay, and has no legal value, and that they can't even take the statement in order to validate it? That's your position?

      3 replies →

    • >Legally that amounts "hearsay" and cannot have any value.

      How is that hearsay if she's directly testifying to her own whereabouts?

      Hearsay would be if someone else testified "she was in X location on July 10th between 3 and 4 pm" without the accused being available for cross.

      2 replies →

  • >No, challenge the entire system.

    Agree in principle. But people like her do not have the financial and emotional resources to go through the legal system again, unless there are charitable lawyers willing to do it on her behalf for free.

  • Clearview again. ICE is using it too, and their people think it is an oracle that is always correct, so when someone shows a passport card or a REAL ID proving they are someone else (a US citizen or permanent resident), they are usually accused of having a fake ID. It's a flawed tool, and it misidentifies people sometimes.

  • IANAL but AFAIK custodial interrogation triggers Miranda, lawyers, and those awful awful civil liberties we’re trying to get rid of.

    Better just to apply Musk or Altman software to the problem and avoid it entirely.

The vendor they used, Clearview AI, does not allow you to request data deletion unless you live in one of the half-dozen states that legally mandate it.

https://www.clearview.ai/privacy-and-requests

I have suddenly become very interested in New York's S1422 Biometric Privacy Act.

For me the worst thing in this case is that a JUDGE signed off on an arrest warrant with only a Clearview match linking Ms. Lipps to the crime.

A judge and the warrant process are supposed to be the safeguard against police doing shady stuff (like relying on an AI hit to decide who committed a crime). But if the judges can't be bothered...

This is a weak or misleading story about AI.

First, the detective used the FaceSketchID system, which has been around since 2014. It is not new or uniquely tied to modern AI.

Second, the system only suggests possible matches. It is still up to the detective to investigate further and decide whether to pursue charges. And then it is up to the court to issue the warrant.

The real question is why she was held in jail for four months. That is the part I do not understand. My understanding is that there is a 30-day limit: the requesting state must pick up the defendant within 30 days. As for the individual involved, Angela Lipps, she has reportedly been arrested before, so it is possible she was on parole. Maybe they were holding her because of that?

Can someone clarify how that process works?

  • In the US there are no consequences for people in power failing to follow procedures, laws, or regulations, except being told to stop doing whatever illegal thing they're doing, and possibly getting sued way down the line, which gets paid by taxpayers.

    • From reading more into the case, it seems the issue may be related to how her lawyer handled the case.

      They probably mounted an "identity challenge," arguing that she is not the right person. But from Tennessee's perspective, she was the correct person to arrest, so there was no "mistaken identity" in their system. In other words, North Dakota wanted person X, and here is person X.

      Once a judge in North Dakota reviewed the full evidence (and found that the person they issued the warrant for was not the one they wanted), the case was dismissed.

      12 replies →

  • > The real question is why she was held in jail for four months. That is the part that I do not understand. My understanding is that there is 30-day limit (the requesting state must pick up the defendant within 30 day). Regarding the individual involved, Angela Lipps, she has reportedly been arrested before, so it is possible she was on parole. So maybe they were holding her because of that?

    As the article gestures towards, challenging the extradition can greatly extend the timeline, from 30 days after the arrest to 90 days after a formal identity hearing. Which isn't fair and isn't intuitive, but is unfortunately a long-standing part of the system. (Even worse, this kind of mistaken identity can't be challenged in an extradition hearing; the question isn't whether she's the person who committed the crime but whether she's the person identified in the warrant.)

    • That is my assumption as well. I assume whoever was representing her made a mistake by challenging the warrant, and that caused the delay in the extradition.

  • > It is still up to the detective to investigate further and decide whether to pursue charges. And then it is up to court to issue the warrant.

    This is how it should work, but I still think it is important to discuss these failures in the context of AI risks.

    One of the largest real-world dangers of AI (as we define that now) is that it is often confidently wrong and this is a terrible situation when it comes to human factors.

    A lot of people are wired in such a way that perceived confidence hacks right through their amygdala and they immediately default to trust, no matter how unwarranted.

  • I wish I could find the link, but I believe she was in jail on a parole violation, unrelated to anything the "AI" flagged her on.

    • Her picture was used as part of a fake ID card in the commission of a crime. The fuzzy camera footage looked like her (from the stills I've seen), and her picture was on the fake ID. Those two circumstantial items were, apparently, enough to have a warrant issued.

      They picked her up in TN and held her for 4 months, even after:

      The ND police knew the ID was fake and the person using it was not her. The ND police knew she had been in TN before, during, and after the crime.

      She is still technically a suspect, even after all of this has come out.

      3 replies →

    • That is the first I have heard of that: a small, unexplained blurb in this article saying she was already in jail on a parole violation.

      Maybe she objected to the extradition order without good counsel.

      "I ain't never been to N. Dakota." She found out the hard way how the law works.

      What about the banks being hit? Surely they have good cameras. This was bad mojo. I would think a Wells Fargo or BoA has a unit for this stuff.

      Financial crimes handled like this? The banks will be sued too, I suspect. Deep pockets settle out.

This isn't the first time this month I've read about someone suffering the consequences of mistaken identity after a facial recognition system said they look like someone who committed a crime. I'm sure this is starting to happen at an alarming rate.

The fundamental problem is that among the 350 million people living in the United States, there are a lot of pairs of people who look pretty darn similar. It used to be impractical to ask a question like "who in the US looks like the person in this security footage", and so as a matter of practicality, once you found someone who looks like the suspect, you probably also have other evidence, even if it's pretty weak, linking them to the crime.

But with AI, you can ask "who in the US looks like this person", and so we need to re-calibrate what it means if all you know is that someone looks like a suspect. I am of the opinion that "looks like someone," in the absence of any other evidence, is reasonable suspicion, but not probable cause, that you are the person you look like. Reasonable suspicion is enough for the police to stop you on the street and ask for your ID, but not enough to arrest you. There are other data points that alone might not even be reasonable suspicion, but could be combined with "looks like someone" to make probable cause, such as "was near the place at the time the crime happened".
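One way to make that recalibration concrete is a naive likelihood-ratio sketch. Everything below is an illustrative assumption (the likelihood ratios in particular are invented, not taken from any real system), but it shows how "looks like someone" alone differs from "looks like someone" plus a second independent signal:

```python
def posterior_prob(prior, likelihood_ratios):
    """Update prior odds by a product of likelihood ratios, then convert back to a probability."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

POP = 350_000_000     # rough US population
prior = 1 / POP       # before any evidence, everyone is equally suspect

LR_FACE = 100_000     # invented: "looks like the suspect" (strong, but not unique)
LR_NEARBY = 1_000     # invented: "was near the place at the time"

face_only = posterior_prob(prior, [LR_FACE])
face_and_location = posterior_prob(prior, [LR_FACE, LR_NEARBY])

print(f"face match alone:       {face_only:.2%}")
print(f"face match + proximity: {face_and_location:.1%}")
```

With these invented numbers, a face match alone leaves well under a 1% chance you have the right person, while adding one more independent signal pushes it above 20%: the same "match" means very different things depending on what accompanies it.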

AI isn't really the problem, even whether or not the AI's determination that two people look alike is valid or reviewed by a human isn't the problem. The problem is assuming that because two people look alike they must be the same person, even if you have no other evidence of them being the same person.

Money quote from someone quoted in the article:

"[I]t’s not just a technology problem, it’s a technology and people problem."

I can't. I just can't.

  • I've been hearing that "it's not just X, it's Y" is a tell for AI writing recently. Personally I think it's an AI tell because it's a human thinking-shortcut that AI copies, but it would be funny if AI wrote the article and then hallucinated this specific money quote.

    • I doubt this happened here, but FWIW, AI does have a habit of "cleaning up" (read: hallucinating) interview transcript quotes if you ask it to go through a transcript and pull quotes. You have to prompt AI very specifically to get it to not "clean up" the quotes when you ask it to do that task.

      2 replies →

The actual scariest part isn't that the AI got it wrong; it's that nobody felt the need to verify the AI. A tip from an anonymous caller gets investigated to find out whether it's true, but a match from a facial recognition system apparently does not. People haven't built better investigative tools; they've just built better ways to skip the investigation.

Wow, I thought the bar for probable cause for an arrest warrant would be much higher, especially to drag someone in from another state.

  • It’s a classic example of the base rate fallacy. The judge sees that a system with a seemingly high accuracy rate (like 99.999% accurate) has flagged a person, and they assume that means the person is highly likely to be guilty.

    However, the system uses a dragnet approach and checks against millions of people. If you are checking 300 million people, a 99.999%-accurate check is going to flag about 3,000 of them, and AT LEAST 99.96% of those people are going to be innocent.

    This is why we can’t have wide, automated surveillance.

So cops used AI to attempt to investigate a crime. But this woman committed no crime; the arrest was wrong. Why can cops excuse themselves here for delegating their responsibilities (protecting society, allegedly) onto software? AI may also be written by some corporations to "tweak" this or that, e.g., making a foreign-looking person more likely to be AI-investigated. This is like the movie Minority Report, but stupid. IMO the courts should conclude that cops should not be allowed to use AI without prior, independently verified, objective reasoning for any investigation. The mass sniffing that is currently going on is very clearly illegal. The current orange guy does not care about the law; see Flock cameras, aka spy cameras, deployed by the government on all car drivers at all times.

She should sue the city that controls that police department into oblivion, or at least for the absolute max she can get.

A single data point (a facial match) would never pass a security assessment. AI-assisted decisions need verification chains.

AI is a liability issue waiting to happen. And this is just another example.

  • It's a tool. Used incorrectly, it will lead to errors. Just like a hammer: used incorrectly, it could hit the user's finger.

    • There is enormous variability in how hard a tool is to use correctly, how likely it is to go wrong, and how severe the consequences are. AI has a wide range on all those variables because its use cases vary so widely compared to a hammer.

      The use case here is police facial recognition. Not hitting nails. The parent wasn't saying "AI is a liability" with no context.

      6 replies →

    • This tool, however, is specifically built for mass surveillance. It serves no other purpose. The tool is broken, and everybody knows it. The tool makers are at least as guilty as those who use it.

      2 replies →

    • > Used incorrectly will lead to errors.

      Only one small problem: there is no way to tell if you are using it "correctly".

      The only way to be sure is to not use it.

      Using it basically boils down to, "Do you feel lucky?".

      The Fargo police didn't get lucky in this case. And now the liability kicks in.

      15 replies →

    • What kind of outcome results from misuse? Clearly a hammer's misuse has very little in common with a global, hivemind network used in high-stakes campaigns.

      Now, if I misused a hammer and it hurt everyone's thumb in my country, then maybe what you said would have some merit.

      Otherwise, I'd say it's an extremely lazy argument.

    • AI feels closer to a firearm than a hammer when assessing law enforcement's ability to quickly do massive, unrecoverable harm.

    • Unlike with hammers, people preface things with "Claude says," etc. I never see that kind of distancing with tools that aren't AI.

  • It occurs to me that software engineering is just about the only engineering field which is neither licensed nor bonded nor insured.

    I wonder if AI / shadow IT will change that.

    • > I wonder if AI / shadow IT will change that.

      I doubt it.

      Computing has traditionally been all about math and logic. This is really all that a binary logic computer is capable of. When applied to this purpose, it can offer highly accurate results at very low cost.

      Current AI is an attempt to branch out from simply calculating into decision making. But it does so in the worst possible way: using probability and statistics (aka guesswork) instead of logic and reasoning. In other words, AI offers questionable results at high cost.

      As this article shows, relying on guesswork is a legal liability issue waiting to happen in many (if not most) operating environments.

      2 replies →

[flagged]

  • I would say much more likely that it was because she was poor and couldn't afford a good lawyer.

    • This. She likely had a shitty public defender who did the bare minimum because they were catering to paying clients. The state was playing hardball because they wanted to make a profit off a poor person with a shitty defense, and the public defender was sitting on the bench at a tee-ball tournament because they weren't getting paid enough and didn't want to try.

  • What? Women are much more sympathetic figures when it comes to crime and punishment, and there are 10x more men in prison in America than women. If you were trying to "introduce" some nefarious law enforcement system to the US, you would use it on undesirable men first (drug addicts and gang members).

  • You think they deliberately chose to do this to a woman? Why?

    • Probably just reading the room, with states like Texas making abortions illegal and allowing random citizens to enforce that.

      Famously, abortions are a woman thing.

      Anyway, looking through the facts, it's just some random woman. There's better evidence that these facial recognition systems are much worse with minorities than across genders.

      Interesting biases include own-gender bias: https://pmc.ncbi.nlm.nih.gov/articles/PMC11841357/

      Racial bias:

      https://mitsloan.mit.edu/ideas-made-to-matter/unmasking-bias...

      Miss rates:

      https://par.nsf.gov/servlets/purl/10358566

      Although you can probably interpret the facts differently, we've seen how any search function gets enshittified: once people get used to searching for things, they tend to select something that returns results over something that fails to return results.

      Rather than blaming themselves, users blame the search tool. As such, any search system will, over time, bias toward returning results (e.g., Outlook) rather than accuracy.

      So if these systems more easily miss certain classes of people (women, minorities), those people are more likely to be surfaced as inaccurate matches, while men will have a higher confidence of being screened out.

      That's how I interpret this two-second comment.

[flagged]

  • Has it not been fairly common to require police officers to have a bachelor’s degree? Or an associate’s? I think recently that has been relaxed but I’ve lived in places where it was absolutely a requirement.

    I don’t think they’re as stupid as you suggest.

    • Police departments are known to avoid hiring people that get high marks in school, under the principle that such individuals will become bored with the job and quit. They literally look for average people with average intelligence: C students.

      Now factor in the slow decline of our educational institutions, where grade inflation has systematically diminished the credibility of a degree. I would wager that many C students today would have failed out completely 30 years ago.

      In that light, it is not surprising that people are seeing ICE agents behave like brown shirts. No one in power wants those people asking any kind of hard questions about what they are being ordered to do.