Innocent woman jailed after being misidentified using AI facial recognition

17 days ago (grandforksherald.com)

> According to the court documents, the Fargo detective working the case then looked at Lipps' social media accounts and Tennessee driver's license photo. In his charging document, the detective wrote that Lipps appeared to be the suspect based on facial features, body type and hairstyle and color.

> Once they were in hand, Fargo police met with him and Lipps at the Cass County jail on Dec. 19. She had already been in jail for more than five months. It was the first time police interviewed her.

How is this the fault of AI? It flagged a possible match. A live human detective confirmed it. And the criminal justice system, for reasons that have nothing to do with AI, let this woman sit in jail for 5 months before even interviewing her or doing any due diligence.

There's a reason why we don't let AI autonomously jail people. Instead of scapegoating an AI bogeyman, maybe we should look instead at the professional human-in-the-loop who shirked all responsibility, and a criminal justice system that thinks it is okay to jail people for 5 months before even starting to assess their guilt.

  • > How is this the fault of AI? It flagged a possible match. A live human detective confirmed it.

    Because we're seeing the first instances of what reality looks like with AI in the hands of the average bear. Just like the excuse was "but the computer said it was correct," now we're just shifting to "but the AI said it was correct."

    Don't underestimate how much authority and thinking people will delegate to machines. Not to mention the lengths they'll go to weasel out of taking responsibility for a screw up like this (saw another comment in this thread about the Chief of Police stepping down but it being framed as "retirement").

    • It's only recently that some have come to terms with the fact that DNA evidence sometimes returns false positives. Society, and law enforcement, assumed that DNA was infallible. No one apparently wondered whether millions of people could really be reduced to a tiny number of genetic markers without any overlap.

      Danish police had to redo 20,000 DNA tests with a larger set of markers being tested, because they had jailed someone based solely on a DNA test and hadn't considered that they might have gotten the wrong person despite the DNA match. It's essentially a human hash collision.

      Identification by AI is going to be the same, except worse, because it's frankly less scientific. Law enforcement, the judicial system and especially the public are simply too uninterested in learning the limitations of these types of systems. Even in the more civilized parts of the world, police would love to just have the computer tell them who to pick up and where.
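      The "human hash collision" framing can be made concrete with a standard birthday-problem estimate. The marker space and database size below are illustrative assumptions, not the actual Danish figures:

```python
import math

# Birthday-problem sketch of the "human hash collision" idea: treat a
# DNA profile as a hash key. Illustrative assumptions: a profile has a
# 1-in-10-billion random match probability, and the database holds one
# million profiles.
match_space = 10_000_000_000  # distinct "keys" a profile can take
db_size = 1_000_000           # profiles in the database

# Probability that at least two people in the database collide, via the
# standard approximation P ~= 1 - exp(-n^2 / (2 * m)):
p_collision = 1 - math.exp(-db_size**2 / (2 * match_space))

print(f"P(at least one collision) ~= {p_collision:.6f}")
# With these numbers the exponent is -50, so a collision somewhere in
# the database is a near-certainty, even though any *specific* pair of
# people almost never matches.
```

      That is exactly the asymmetry that trips people up: the per-pair match probability is astronomically small, but across a large enough database a collision is all but guaranteed.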

      7 replies →

    • Not the first instance.

      This was 2023 https://www.youtube.com/watch?v=lPUBXN2Fd_E&t=19s

      A dude in the USA was arrested in a casino by police because the casino's facial recognition software said he had been trespassed before. He hadn't. I think there were height and eye colour differences. The police still arrested him and booked him. I think the prosecutors took it to trial.

    • I'm sorry, but this is a piss-poor excuse. When I use Claude Code and it ships broken features, I'm 100% responsible.

      Why are cops not treated the same way? OP is right, AI is totally irrelevant in this story.

      If the point is "cops can't be trusted". Why do they have GUNS?! AI is the least of your problems.

      I feel like I'm going crazy with this narrative.

      62 replies →

    • It's not even just incompetence, but malice. "AI says so" is going to be the perfect catch-all excuse for literally everything anyone might want to do that they shouldn't. You know how techbros love to excuse every horrifying outcome of their torment nexi with "don't blame me, the algorithm did it"? It's going to be like that, but now everyone can do it.

      1 reply →

    • So what? There were false arrests and convictions made by misuse of line-ups, DNA, eye-witnesses, photos, bloodstains, fingerprints, etc. since forever. You must also blame all those other technologies, so what do you think the police should use to find suspects? In your view, the more help police have, the worse a job they'll do. Is that actually the trend?

      12 replies →

  • This particular "AI bogeyman" isn't just AI; it's cops with AI and in particular cops with facial recognition tools, dragnet LPR surveillance tools, and all this other new technology that essentially picks somebody's name out of a hat to have their life temporarily (or [semi-]permanently) ruined by shithead cops who won't ever face any real accountability.

    This keeps happening, and the reason it keeps happening is that shithead cops have these tools and are using them. Until we can find a reliable way to prevent this from happening, which may or may not be possible, cops who may or may not be shitheads should not have access to these tools.

    • Yes! This is about why mass surveillance and dragnets and the like are horrible. These all suffer from people not being able to understand the base rate fallacy (https://en.wikipedia.org/wiki/Base_rate_fallacy)

      Even if AI facial recognition gets really really good, and is 99.999% accurate, if you use it in this way you are going to arrest more innocent people than guilty people.

      If you find a suspect who has a lot of evidence pointing to them being the criminal, and you run a test that is 99.999% accurate and it tells you they are guilty, they are probably guilty.

      But if you take that same test and run it against the entire population of the country, it is going to find around 3,500 people who match with "99.999% certainty." That gives any individual match only about a 0.03% chance of being the guilty person.

      People don't think like this, though, so they think the person must be guilty.
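      The arithmetic behind this base rate argument can be sketched directly. The accuracy, population, and single-culprit numbers below are the comment's illustrative assumptions, not figures from the article:

```python
# Back-of-envelope Bayes check of the dragnet numbers above.
# Assumptions: a matcher that is 99.999% accurate, a scan of
# ~350 million people, and exactly one true culprit.
population = 350_000_000
accuracy = 0.99999
false_positive_rate = 1 - accuracy  # 0.001% of innocent people still "match"

# Innocent people a population-wide scan is expected to flag:
expected_false_matches = (population - 1) * false_positive_rate

# Assuming the scan also flags the real culprit, the chance that any
# one flagged person is actually the culprit:
p_guilty_given_match = 1 / (1 + expected_false_matches)

print(f"expected false matches: {expected_false_matches:,.0f}")  # ~3,500
print(f"P(guilty | match): {p_guilty_given_match:.4%}")          # ~0.03%
```

      The headline accuracy figure and the probability that a given match is actually the culprit differ by four orders of magnitude, which is the whole point: dragnet use turns a good test into a bad one.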

    • It’s also cops Making the Numbers Go Up by marking down a case file as having progressed because someone is in custody. Which isn’t about justice.

      3 replies →

    • > cops [...] should not have access to these tools

      But what else can (identification via) face recognition be (safely) used for? Absolutely nothing. It's tech that's just made for surveillance.

    • It's not just the shithead cops, it's the voters. All the "Blue Lives Matter", "thin blue line", "back the blue" propaganda works towards giving police infinite powers with zero accountability. This is what voters want and they've said so loudly over and over again.

  • Reminds me of a case that just popped up in my neck of the woods.

    Man gets pulled over on an expired plate. They search based on this fact, find a pill bottle (for Irritable Bowel Syndrome) and magically find he’s trafficking cocaine and fentanyl.

    Months later a lab test exonerates the poor guy.

    https://www.wyff4.com/article/deputies-falsely-identify-ibs-...

    • I've always maintained that one of the worst things that can happen to you is sitting in court before a jury of your peers, because most can't comprehend the meaning of the law outside of their feelings. NOW the worst thing is being in the hands of cops who just don't give a damn, or who became cops for the use of power.

    • This one seems pretty reasonable - according to the article, the cops pulled him over for swerving lanes (driving unsafely on public roads is a reasonable thing to want to police), and then discovered that he was driving on a suspended license, which he admitted to (it's reasonable to have a system for suspending peoples' drivers licenses that is enforced by the police). The police find the pill bottle and don't believe him when he tells them it's a legitimate drug, then "conduct[..] multiple field drug tests, which produced a positive result for fentanyl. Getchius was taken into custody and transported to the Greenwood County Detention Center. Shortly after, another drug test was completed and returned positive results for cocaine."

      So it wasn't just the pill bottle, it was multiple other drug tests.

      I think you could make a reasonable argument that drug use shouldn't constitute a crime in and of itself - although it probably should if you're driving a car, for legitimate traffic safety reasons; I don't find DUI laws objectionable. Or you could make an argument that the criminal justice system shouldn't interfere with peoples' decision to use and sell drugs. I'm sympathetic to this myself, but I think especially in the case of opioids like fentanyl, the government paternalism that makes it illegal to sell opioids probably discourages enough destructive use of these drugs by unwise or already-addicted people that it's still net-positive in terms of human welfare. I suspect a society where it was simply legal to use and sell opioids would have a lot more human suffering in it than our own (possibly because in the absence of laws banning open opioid dealing, people who are close to severe opioid addicts might simply commit vigilante murders of suspected opioid dealers, and be left unconvicted by sympathetic juries).

      And once you hold the position that it's legitimate for the government to legally restrict the sale and use of these drugs, then you necessarily have to have something like police and something like a criminal justice system that investigates whether a person might actually be using and selling opioids and then lying about it.

      The fact that the guy was in fact once addicted to some drug and "was working at rehab and addiction centers in Florida at the time of his arrest." is additional evidence that he might have returned to drug use, and there's no way to make cops who investigate opioid-related crimes not think this.

      2 replies →

  • It's not. This is just an acceleration in the unraveling of society facilitated by AI. As someone whose childhood included so many "robots will kill humans" books and movies, I am flabbergasted that the AI apocalypse will be dumb humans overtrusting faulty AI in important matters until everything falls apart.

    Most humans cannot distinguish AI from actual intelligence. When you combine that with bureaucrats' innate tendency to say, "Computer said so," you end up with bizarre situations like this. If a person had made this facial match, another human would have relentlessly jeered him. Since a computer running AI did it, no one even cared to think about it.

    Computers are wildly dangerous, not because of anything innate but because of how humans act around them.

    • > It's not. This is just an acceleration in the unraveling of society facilitated by AI. As someone whose childhood included so many "robots will kill humans" books and movies, I am flabbergasted that the AI apocalypse will be dumb humans overtrusting faulty AI in important matters until everything falls apart.

      This is literally the plot of most of those books and the way they differ is in how everything falls apart. In some of them the AI supplants us entirely and kills us all. In others it gets taught to kill us all. In others it gets really good at giving us what we ask for until everything falls apart. But it’s taken as a given that unless we change something innate in our culture AI will be our downfall.

    • > If a person had made this facial match, another human would have relentlessly jeered him.

      The glaringly obvious problem here is that our justice system should not be constructed in such a way so as to be reliant on someone's coworker shaming him. That is not a sensible check against a systemic failure. We're supposed to have due process. If someone skips or otherwise subverts due process the justifications don't matter. The root issue is that due process was skipped. Why was that even possible to begin with?

  • > How is this the fault of AI?

    It could be the fault of the company that's selling this service. They often make wildly inaccurate claims about the utility and accuracy of their systems. [0]

    > There's a reason why we don't let AI autonomously jail people.

    Yes we do. [1]

    > and a criminal justice system that thinks it is okay to jail people for 5 months before even starting to assess their guilt.

    Her guilt was assessed. That's why she had no bail. It assessed it incorrectly, but the error is more complicated than your reaction implies.

    [0]: https://thisisreno.com/2026/03/lawsuit-reno-police-ai-polici...

    [1]: https://projects.tampabay.com/projects/2020/investigations/p...

    • To clarify one point, her not having bail is a function of the way interstate ‘fugitive’ warrants are designed. The Court in Tennessee had no ability to set bail, and until she entered the physical custody of North Dakota she could not have bail set.

      Also, her guilt was not assessed in any common meaning of the term. The requirement for holding a person in custody, with or without bail, is probable cause. The only thing assessed was did law enforcement present a statement to a Judge that was possible to be believed in the light most favorable to the prosecution.

  • > How is this the fault of AI?

    AI is being used by bureaucrats and enforcers to justify lazy, harmful conclusions. You don't live in the real world if you think "just punish the bureaucrats, don't make it about AI" is going to remotely rectify this toxic feedback loop and ecosystem.

    • No, we definitely should punish bureaucrats and enforcers who act negligently. If someone in a position of authority flagrantly fails to do his job and it directly harms someone he should be held accountable. That would provide a strong incentive for future actors to take their responsibilities seriously.

      If an engineer signs off on an obviously faulty building plan and people die as a result we hold him accountable. This is no different.

  • It is the fault of the coders, the salespeople who over-promised the capabilities of the system, the lawmakers who have not regulated or demanded a minimum percentage of accuracy from those products, the AI company's onboarding trainers, the cops that were trained to use the software, the jailers, and maybe other related positions that should've taken a better interest in making a better system, not a more cruel one.

  • It's the fault of the tool because our society treats the tools' judgements as superior to humans' and trusts them completely as a means of deflecting accountability - something any and every minority group has been warning about for fucking decades.

    The reason everyone rushes to defend the tool's use is because holding humans accountable would mean throwing these tools out entirely in most cases, due to internal human biases and a decline in basic critical and cognitive thinking skills. The marketing has been the same since the 80s: the tool is superior (until it isn't), the tool shall be trusted completely (until it fails), the tool cannot make mistakes (until it does).

    If folks actually listened to the victims of this shit, companies like Flock and Palantir would be gutted and their founders barred from any sort of office of responsibility, at minimum. The fact so many deflect blame from the tool like the marketing manual demands shows they don't actually give a shit about the humans wrapped up in the harms, or the misuse and misappropriation of these tools by persons wholly unaccountable under the law, but only about defending a shiny thing they personally like.

    • >rushes to defend the tool's use is because holding humans accountable would mean throwing these tools out entirely in most cases, due to internal human biases and a decline in basic critical and cognitive thinking skill

      The magical past where people had critical thinking skills never existed. We put a lot of trust in tools because people are un-fucking-reliable. Hence why in most cases actual physical evidence does a far better job than witness testimony.

      This said, people are lazy. It is one of our greatest and worst traits. When we are allowed to be lazy, especially with tools bad things happen.

  • > How is this the fault of AI?

    The false positive rate combined with scanning millions of pictures might make the chance of arresting the wrong person really high.

  • If many people's writing skills are suffering, due to highly convenient AI support, just imagine how fast mediocre crime investigation skills are going to devolve.

    It is going to get bad in every skilled area of human managed bureaucracy.

    The number of legal filings found to include AI confabulations is just the obvious surface.

  • > Instead of scapegoating an AI bogeyman

    One big reason for AI adoption everywhere is that you can use it as a scapegoat

  • Automation has a strong tendency to degrade diligence.

    I see this all the time in operational / production settings. Having a loop with automation reviewed and approved by a human degrades very fast. I only approve automation that has a quick path to unsupervised operation.

  • > A live human detective confirmed it.

    I doubt it, due to human nature. Perhaps the process says the human must consciously validate, but a lot of humans in many cases will just rubberstamp what the AI said. That's the risk.

  • Study after study has shown a very strong and consistent bias of humans to trust "automated systems" in face of any ambiguity

  • > How is this the fault of AI?

    I'll reply to the top of the discussion too: it's because it was purely made for this purpose. There's no use for it outside surveillance. And it's not even good enough. Its only purpose is checking boxes and transferring money. Miscarriage of justice is an unfortunate, but calculated, side effect.

  • Because if you let this slide, the human, such as he is, will be removed from the loop and these mistakes will become acceptable once departments get used to how cheap the AI is compared to a human. There will be no going back, and mistakes like this will just become accepted collateral damage.

  • > How is this the fault of AI

    It isn't, the article doesn't claim (or even imply) that it is "the fault" of AI, only that AI was part of the chain of events, and nothing is the fault of AI until AI is sufficiently advanced to constitute a moral actor. “At the source of every error which is blamed on the computer, you will find at least two human errors, one of which is the error of blaming it on the computer” remains true.

    OTOH, it is potentially the fault of the reliance human actors put on an AI determination.

  • I think it's more nuanced; it is one error in a Tragedy of Errors.

    • This was not a series of errors, this is (as a statistical inference) the system working as designed. This is not uncommon, it is not unplanned. The extradition of suspects from State to State is designed legislatively to function this way.

      I also think there is more nuance to this situation than AI bad // Human Bad :: choose one. But while a tragedy, the ‘correct’ functioning of a system that produces tragedy doesn't make that functioning an error.

  • I think the biggest problem is that the popular narratives about AI enable this kind of accountability sink.

    • Before AI it was outsourcing. “Not my fault the system is down and we’re losing 1m an hour, AWS is having a bad day”

  • 100% 100% 100% humanity is so obsessed with ai that we're losing...our humanity. "blame the mindless, soulless robots! how could we have possibly known that they need to be supervised?! aren't they basically just humans that don't need to rest or eat?"

  • > How is this the fault of AI?

    Humans being human. Getting lazy, being incompetent, becoming more incompetent through AI use, or simply being biased. The wrongfully arrested person doesn't even resemble the perpetrator.

    Maybe if they were held accountable for these actions, they would act responsibly?

  • Where does it say that AI is blamed?

    It says she was misidentified using facial recognition.

    That’s exactly what happened.

  • Devils advocate: what if a facial recognition system with a large enough database can always find an unrelated/innocent person that looks similar enough to convince the human?

  • The legal system has long treated a computer match as infallible. This has led to miscarriage of justice on a grand scale.

  • > How is this the fault of AI?

    It is not. It is the fault of the police

    AI models are tools. When mistakes are made they are the mistake of the operator of said tool

    This AI model was badly misused, this woman should get a metric shit tonne of compensation, but it was the fault of the police.

  • At this point I think that AI will perform human duties better than humans do. So probably it's better to let AI autonomously jail people, of course with all the necessary procedures as required by law.

There's no way this isn't a slam dunk case to sue the piss out of the Fargo Police, probably the US Marshals, and maybe other orgs. The woman in the surveillance photo clearly looks way younger, among the many other obvious signs this woman didn't do it. I hope she wrings at least several million dollars out of the government.

> facial recognition showed she was the main suspect in what Fargo police called an organized bank fraud case.

> Her bank records showed she was more than 1,200 miles away, at home in Tennessee at the same time police claimed she was in Fargo committing fraud.

> Unable to pay her bills from jail, she lost her home, her car and even her dog

It is an AI error, but also an error on the part of the cops, the prosecutors, the judge, and the county sheriff (who is responsible for the jail inmates). I hope everyone involved in this travesty is sued into oblivion and unable to hide behind their immunity defenses. Facial recognition should never be the sole basis for a warrant.

  • > It is an AI error, but also an error on the part of the cops, the prosecutors, the judge, and the county sheriff

    Yes, it's critical to remember that multiple parties can be at fault. In a case like this, it is true that

    a) law enforcement misused a tool and demonstrated extreme negligence

    b) the judiciary didn't catch this, which suggests systemic negligence there too when it comes to their oversight responsibilities

    c) the company selling/providing this AI tool should have known it was likely to be misused and is responsible for damages caused by such predictable usage

    We cannot have a just world until our laws and norms result in loss of jobs and legitimacy as punishment for this sort of normalized failure, from all three parties. Immunity is a failed experiment.

  • Even if she was a dead ringer (clearly not the same person to any human who glances at the image), common sense should tell you that among 340,000,000 Americans there are a lot of lookalikes. Clearly there's a kind of stupid belief in the mystic powers of an AI and a callous disregard for the well-being of suspects. No one should be dragged 1000 miles and held for months based on a facial match, especially when exculpatory evidence was easily available.

    • To be specific, and it is a lot of the reason why this 5-month delay happened: she was not dragged then held; she was arrested, then held, then dragged. She was released 5 days after finally getting to North Dakota. If they had actually gone and gotten her promptly, the hold would have been ~30 days, plus the 5 days before she was interviewed and the charges dropped.

      It isn’t much of a salve, but the particulars do matter when trying to assess fault to the proper parties (who are still clearly the Fargo cops in this particular tragedy).

  • > It is an AI error

    The software identified the person as Angela Lipps. According to the court documents, the Fargo detective working the case then looked at Lipps' social media accounts and Tennessee driver's license photo.

    In his charging document, the detective wrote that Lipps appeared to be the suspect based on facial features, body type and hairstyle and color.

    The software worked exactly as intended. It's a filtering tool that sifts through data for common patterns to provide leads, not matches. It raises a flag on persons of interest. You can be a "match" anywhere between 0 and 100%, and only relative to some specific input (like that overhead picture of the woman at the teller window). In that sense, mismatches are within acceptable parameters and have been known to happen.

    A "match" is a pronouncement ultimately made by the humans that uses the tool, after they've checked out the leads. Someone slept at the wheel here.

This reminds me of the British Post Office Scandal: https://en.wikipedia.org/wiki/British_Post_Office_scandal

John Bryant, aka The Civil Rights Lawyer, recently did a piece about a similar case of mistaken identity. The consequences weren't as severe, but the willingness to trust the AI over any other evidence was the same:

https://thecivilrightslawyer.com/2026/03/11/ai-software-tell...

In the video, it shows a police officer blindly trusting a casino's AI software, even when a cursory investigation should have given any reasonable person enough of a reason to question whether the man he arrested was the same man accused of a crime. (And then even after it was confirmed he was not, the prosecutor continued to charge him for trespassing!)

Me: Whoa, cool, my hometown is on atop Hacker News!

Also me, reading further: Uh-oh.

The chief of police also resigned today; wouldn't be shocked if this was part of the reasoning.

It’s obvious from the one photo they posted of the actual suspect that the lady they arrested is about 20-30 years older than the woman in the bank photo. The woman in the photo is maybe 25-30 years old, this grandma looks like she’s 65-70 (actual age of 50).

Absolutely ridiculous, I hope she wins her civil case.

I read the article and I don’t really understand… she was held in a jail in Tennessee but the article states they flew her to North Dakota? And somehow she’s a fugitive so that’s why she doesn’t get bail? but she’s a fugitive held in her own state in a holding facility? But then when they release her, she’s in North Dakota? So if some state says you’re a fugitive your home state will just hold you in jail until they come and put you on an airplane? Is that correct?

  • I think you have the interpretation correct. It seems like any state can say you're a fugitive from their state and now you have even fewer rights. Every day I learn some new fact about "justice" in the United States.

  • As a Tennessee resident, I don't love learning that some dumb fuck state I want nothing to do with can call me a fugitive and my state will hold me prisoner without trial until said dumb fuck state finally decides it is ready to deal with me.

  • I believe each state has its own extradition process. In this scenario think of them more like the countries in the EU. Apparently Tennessee doesn't adequately protect its residents.

I really, really need folks to understand that deflecting blame away from the tool and trying to hold the human accountable feeds right into the marketing playbook of these companies in the first place.

The cops cannot be held accountable because the laws basically give them immunity. The politicians cannot be held accountable beyond being tossed out at the next election, because the laws otherwise give them immunity. The people operating the system cannot be held accountable, because the systems are marketed as authoritative despite being black boxes and lacking in transparency; they trusted the system just as they were told to, and thus cannot be held accountable.

And so when every human in the chain cannot be held accountable for these things, and the law prevents victims from receiving apologies, let alone recourse, then the tool and its maker is the only thing we can hold accountable. By deflecting blame away from the tools ("it wasn't AI, it was facial recognition"; "the human had to sign off on it"; "humans made the arrest, not machines"), you're protecting quite literally the only possible entity that could still potentially be held accountable: the dipshits making these stupid things and marketing them as superior and authoritative when compared to humans.

You want accountability? Start holding capital to account, and this shit falls away real fucking fast. Don't get lost in technical nuance over very real human issues.

  • I disagree. If you focus on holding the software creators to account in lieu of the humans in the loop, then we only reinforce the behavior of offloading thinking to the system.

    If I am a cop in another jurisdiction and I see that in this case of error, the facial recognition company was held to account but not the police or municipality, I will be more likely to blindly trust the software assuming that they either patched it or will take responsibility.

    We should demand accountability for both.

  • You can blame both: the prosecutors and police who didn't do proper due diligence, falsely imprisoned this woman, and held her for months without due process; and also the AI company, whose false identification fed a bogus police report and amounted to defamation of character. There's no reason for either of them to escape the blame.

  • >Start holding capital to account

    You forgot one: capital cannot be held accountable for making a tool used in a crime. It is a simple generalization of the Protection of Lawful Commerce in Arms Act (PLCAA), passed in 2005, which largely bars civil lawsuits against gun makers and sellers when their products are later used in crime.

  • Strongly agree here. This is an extremely predictable outcome of selling AI facial recognition software to American police forces.

  • Is there anything to suggest this sort of injustice isn't happening in low-tech all the time, constantly, all over the country, and the only reason it's getting attention here is because AI is involved?

    • The scale is not the same. Low-tech tools require more human input, more pre-filtering of suspects. They can't just default to starting with "everybody" and match against millions at the push of a button.

Wow, so many failures of the legal system. While the incompetent/malicious/lazy investigators who used the facial recognition, and only that, are obviously at major fault, I'd actually put larger blame on the judge who signed the arrest warrant. Judges are supposed to be a check on exactly this kind of incompetence/malice/laziness, not just a rubber stamp. Unfortunately there's really no recourse against incompetent/malicious/lazy judges.

Of course this would have been bad enough if it had happened where she lived, but the 5-month hold adds a whole 'nother level of insight into the brokenness of the legal system. I'd be interested in hearing more about why that happened. Is that just something that happens sometimes if you have a public defender?

>Unable to pay her bills from jail, she lost her home, her car and even her dog. Fargo police say the bank fraud case is still under investigation and no arrests have been made.

I smell a lawsuit

We are rapidly becoming a world where every person is one inscrutable LLM decision from having their life ruined with no recourse.

This type of incident isn't new and is only going to get worse. The problem is our governments are doing absolutely nothing about it. I'll give two examples:

1. Hertz implemented a system where they falsely reported cars as being stolen. People were arrested and went to jail for rental cars that were sitting in the Hertz lot. Hertz ultimately had to pay $168 million in a settlement [1]. That's insufficient. If I, as an ordinary citizen, make a false police report that somebody stole my car I can be criminally charged. And rightly so. People should go to jail for this and it will continue until they do. These fines and settlements are just the cost of doing business; and

2. The UK government contracted Fujitsu to produce a new system for their post offices. That system was allowed to produce criminal charges for fraud that were completely false. People committed suicide over this. It went on for what, a decade or more? Yet it resulted only in a parliamentary inquiry and settlements. It's known as the British Post Office scandal [2]. Again, people should go to jail for this.

The choice we as a society face is whether to have automation improve all of our lives by raising everyone's standard of living and allowing us to do less work and less menial work or do we allow automation to further suppress wages so the Epstein class can be slightly more wealthy.

[1]: https://www.npr.org/2022/12/06/1140998674/hertz-false-accusa...

[2]: https://en.wikipedia.org/wiki/British_Post_Office_scandal

  • I'm banned from Amazon KDP publishing for life because a fraud detection bot hallucinated that my e-book was plagiarizing my paperback (it didn't realize they're the same book). A bunch of email appeals that I'm pretty sure were also bots went nowhere. With each appeal, the reasons for my ban got progressively more vague, until they didn't mention the plagiarism part at all, just something nonsensical about creating a negative customer experience. Evil company.

  • > The problem is our governments are doing absolutely nothing about it

    Huh. I thought they were actively accelerating the process. Hoping you are right and I am wrong.

This is the part that always gets glossed over in the "AI will solve everything" narratives: these systems don't fail gracefully. They fail with confidence.

The real problem isn't that the AI made a mistake—it's that everyone in the chain deferred to it. The technology became an excuse to stop thinking critically. "The algorithm said so" is the new "I was just following orders."

We need less faith in these tools, not more.

They do not care.

End qualified immunity and see how fast cops start to do their jobs with care.

Winning a lawsuit literally ends in your own community members (not the cops) paying the bill.

The movie "Brazil" was right!

  • Except in "Brazil" it was a mechanical error in a deterministic machine caused by an invasive outside actor. It would be reasonable to trust that the autotypewriter/printer would faithfully output the correct text.

    Modern AI seems incapable of any respectable amount of accuracy or precision. Trusting that to destroy somebody's life is even more farcical than the oppressive police in "Brazil".

    • >Except in "Brazil" it was a mechanical error in a deterministic machine caused by an invasive outside actor.

      It was a literal bug in the computer. Metaphor as humor!

It's not an AI error. It's a human error in misusing AI this way. Saying it's an AI error is like saying a hole in your drywall is a hammer error.

Unfortunately we'll probably see a trend of people using AI and then blaming AI in cases where they misused it in roles it's not good for, or failed to review or monitor its output.

  • It's both. It's good to acknowledge that AI is easy to misuse in this manner but it doesn't detract from the fact that the ultimate responsibility lies in those that should be verifying the tool output.

    There is far too little skepticism around the magic box that solves all problems which is causing issues like this. It's not the fault of the AI (as if it could be assigned liability) for being misused, but this kind of misuse is far too common right now so scare stories like this are helpful and we should highlight the use of AI in mistakes like this.

    • I worry that blaming AI at all actually incentivizes humans to offload things to AI that should not be offloaded, since it lets them escape blame.

      1 reply →

  • We should probably stop telling the cops that this hammer is great for drywall.

This has nothing to do with AI.

There are also a few questions that remain unanswered:

- Did she have previous arrests, and did they use booking photos to identify her? I found someone named Angela Lipps who was arrested in 2001, 2003, 2017, and 2019. The 2017 arrest was for a probation violation: https://archive.ph/CpmXu The 2019 arrest was for public intoxication: https://archive.ph/yjFL9

- Another confusing detail is that she was in jail for four months without being extradited. That is quite unusual, unless the local authorities were holding her on unrelated charges.

So this news story seems to have nothing to do with AI. It is also very light on details about what actually happened in the actual criminal case.

  • Appealing to authority ("The AI said it was her!") is absolutely a problem.

    • No. I think the core issue is that they used her 2019 booking photo (a mugshot) from a public intoxication arrest. I am not sure whether a photo like that is reliable :)

      In the end, the detective compared the booking photo with the camera footage and concluded they were the same person, then presented that to the judge.

      I also wonder what her “probation” was for. Maybe she once wrote a bad check and got into trouble, which might have made the detective more inclined to believe it was her.

      Anyway, this does not appear to be an AI issue at all.

But it is a nice scary story to remind us not to be lazy and trust it unconditionally.

      1 reply →

I live in Fargo. The police chief announced his retirement yesterday. Done by the end of the month. And then today this article comes out. So now we pretty much know why the sudden retirement announcement.

This problem predates modern AI. https://en.wikipedia.org/wiki/Computer_says_no is built upon the deliberate abdication of responsibility to processes that cannot be held accountable. AI is just letting them do it at scale.

That doesn't mean we should accept it from AI. We should fight the blind yielding to the facade of authority regardless of whether the decision was made by an AI or an insect landing on a teleprinter at the wrong time.

There's an opportunity for an "AI" app here. Takes your photo, compares with mugshots on police databases, quotes you for requisite cosmetic surgery.

/i

I wish we saw more invocations of speedy trial rights. Trials MUST begin for felony charges in ND within 90 days of a defendant invoking those rights (must be invoked within 14 days of arraignment)[0].

[0] https://ndlegis.gov/cencode/t29c19.pdf

  • Defendants don't invoke that because in most states and federally, they build the case against you slowly over a long period of time before arrest, then stall as long as possible on discovery, and when they finally fulfill discovery they overwhelm you with a bunch of useless stuff so that it takes forever to get to the useful information. Invoking the right to a speedy trial gives the prosecution a very strong advantage over the defense.

    • I don't think it's a sensible interpretation of the constitution given the massive asymmetry of the situation. The state should be obligated without exception to either provide for a speedy trial or to release the defendant while the state figures its shit out. It should not be a right that can be waived. Meanwhile a defendant who's been arrested should generally be given as much time as he'd like to put together his defense.

Lazy, stupid pigs should be held accountable for misusing AI like this, pulling people into a system like that based on some AI's whim and a Facebook peek while having done no actual investigative work.

Let's see the pig who called for her arrest and wasted 4 months of her life spend 4 months in jail.

Considering you can fine-tune every facial-match system currently in use for KYC from pretty much useless to matching even a dog, I am not surprised at all by this, and I'm surprised it is even remotely allowed.

Something big is missing from this story. How did face ID in ND pick out a matching little old grandma in TN, and why would a TN judge hold her without bail for 5 months?

Yeah, there is a whole lot more to this story.

Gofundme? This woman needs some $$ and a lawyer. She may not know it yet, but if she makes some smart moves, she's about to be rich and Fargo is about to learn a very hard lesson.

Here is the contact information for her defense attorney, who was appointed in North Dakota, Jay Greenwood. It is unclear to me when he took up this case.

https://www.ndcourts.gov/lawyers/06020 https://www.linkedin.com/in/jay-greenwood-57360b86/

  • What did you hope to achieve by posting his details?

    • It seems like Angela Lipps needs some help. There's no "Go Fund Me" or similar information associated with the article. Perhaps her (former) attorney could provide more information.

Just reading the headline I said to myself: bet this is in America.

Every time I see something like this I can never quite believe this sort of stuff happens. Complete, life ruining incompetence, with no consequences for the idiots that caused this to happen. Ignoring the AI input, which to me has nothing to do with this (it was used as a tool to identify a potential suspect), this woman went to jail for 5 months on the opinion of someone with no other evidence. Only in America.

  • Indeed. Something like the Post Office Scandal would never happen anywhere but in the US.

    • True, though it's hardly a like for like comparison, but on the flip side of that, something is being done now at least. This woman has been monumentally fucked over and no one is going to help her.

      4 replies →

Facial recognition? *looks at photo* I've probably seen a dozen different people who look exactly like this woman just this week.

> She had already been in jail for more than five months. It was the first time police interviewed her.

Hang on -- ignoring AI completely, how is that possible / legal / anything? Surely, if she was misidentified, there was an interview and arrest and due process?

It is vibe justice for people: you run agents and don't check for yourself the code produced, or the people jailed. Despite appearances, your program doesn't really work, and sooner or later you find that out at your expense.

Wrongful imprisonment isn't something that started with AI. This is why everyone should be against the death penalty: the state cannot be trusted not to make mistakes in determining guilt.

How many more articles are we going to see with the headline AI facial recognition leads to innocent person jailed? A grandmother no less.

Some tech company illegally scanned people's photos on social media and is now using them, with our complicit legal system, to randomly put people behind bars. Now I need to worry that any day, due to a dice roll, I will be sent away to the middle of f'ing nowhere for months or years. Now the government wants to use these same dumb systems to make automated killing machines. FML!

I see a lot of comments trying to attribute blame to the cops, the lawyers, the police chief, the marshals, the tech bros, etc, but it is all of them and all of us that are guilty. We are so complicit in this sick system we live in. We are stuck in a collective action deadlock.

That fear you have in the back of your mind that says next time it might be you is counteracted by the thought "well thank goodness it wasn't me or a loved one," so you don't act. We are all doing this, that is why nothing changes.

The only people able to act these days are the most insane. The narcissistic, corrupt, power-hungry politician, the psychopathic tech-bro billionaire, and the jacobins are the only ones with the energy to wade through this cesspool, and that is why everything is so dystopian.

AI or not, it's unconscionable that victims of compulsory legal processes by way of mistaken identity are not made whole.

  • People will defend this, too, saying “well, she was eventually exonerated, right? So the system works!” Ignoring how she’ll never be fully reimbursed for the time, money, and grief of going through the system.

    • We also need to question how many people might go through the same process without eventual exoneration, and how much going through this process costs individuals. Being falsely prosecuted usually imparts a permanent black mark in search results about the person (outside of places with sane laws like the EU), as well as causing stress or permanent injury.

      Wrongly arrested individuals with mental disabilities have a history of physical abuse in jail potentially to the point of death.

    • Not to mention:

      > Unable to pay her bills from jail, she lost her home, her car and even her dog.

      If this is the system "working", then the system is broken.

  • > In all criminal prosecutions, the accused shall enjoy the right to a speedy and public trial

    This is from the Sixth Amendment. Where the rubber hits the road is what “speedy” means.

What’s remarkable to me, beyond the total incompetence and stupidity of all the police people involved, is how incredibly aggressive the intervention was.

This is a bank fraud case, for god's sake, not an armed robbery. I don't know the scale of it, but still, no one said she was a danger to anyone. She was a suspect, not a convict, and she was held at gunpoint while babysitting young children. What in the fucking world?

The US is so fucked up lately. People should chill the fuck out.

This is a badly written story. It should explain if she saw a judge or had a lawyer.

  • You must have been reading something else, because this article includes all of that information.

    > In Tennessee, she was given a court appointed lawyer for the extradition process. To fight the charges, she was told she would have to go to North Dakota.

    > Officers from North Dakota did not pick up Lipps from her jail cell in Tennessee until Oct. 30 — 108 days after her arrest. The next day she made her first appearance in a North Dakota courtroom to fight the charges.

    > "If the only thing you have is facial recognition, I might want to dig a little deeper," said Jay Greenwood, the lawyer representing Lipps in North Dakota.

Wait - what was the AI tool, and how did it have her face to begin with? If small-town police are doing face-matching searches across national databases, then nobody is safe, because the number of false positives is going to be MASSIVE given the sheer number of people being searched every day.

Pretend the tool is 99.999999% specific. If it searches every face in the USA you're still getting about 3 false positives PER SEARCH.

You will never have a criminal AI tool safe enough to apply at a national scale.
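The base-rate arithmetic behind that claim can be sketched in a few lines (the specificity and population figures here are the comment's hypotheticals, not numbers from any real system):

```python
# Hypothetical figures for illustration only: a matcher that is
# 99.999999% specific (false-positive rate ~1e-8 per comparison),
# searched against a database the size of the US population.
false_positive_rate = 1 - 0.99999999   # ~1e-8
database_size = 330_000_000            # rough US population

# Expected number of innocent people flagged by a single search.
expected_false_positives = false_positive_rate * database_size
print(round(expected_false_positives, 1))  # roughly 3.3
```

Even at a specificity no deployed system comes close to, each nationwide search is expected to surface a few innocent matches; multiplied across thousands of searches a day, wrongful flags become a statistical certainty rather than a fluke.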

There are many cases of harm caused by ML false positives.

There are also some cases of law enforcement successes caused by ML true positives, e.g. a RAF (Red Army Faction - https://en.wikipedia.org/wiki/Red_Army_Faction) terrorist gone into hiding was identified by a social media photo: https://web.archive.org/web/20240305044603/https://www.nytim... (although the success in law enforcement was actually not carried out by the police, but by investigative journalists/podcasters.)

The question we need to answer as a society is whether we are willing to tolerate innocent people going to jail as the "price" of catching a few more criminals.

The poor don't start revolutions because they lack the means.

"“The pace of oppression outstrips our ability to understand it. And that is the real trick of the Imperial thought machine.”" -- Andor.

https://archive.is/yCaVV - Archive link to get around the paywall.

https://www.theguardian.com/us-news/2026/mar/12/tennessee-gr... - Another article on this without a paywall.

It's annoying that both articles are calling this an AI error. This was human error: the police did the wrong thing, and the people of Fargo will end up paying for this fuckup.

  • I would argue it was both. No doubt this company was marketing it in a way to make it seem very reliable. And all of the procedural things afterwards made the error so much more damaging.

    But imo this is why local police departments should not have access to this kind of tool. It is too powerful, and the statistical interpretation is too complicated for random North Dakota cops to use responsibly. Neither the company nor the PD have an incentive to be careful.

    • It's not an AI error. The face recognition AI simply said that it's a "potential match", which is correct. It's the humans' job to confirm that a potential match is in fact a match, especially when the suspect is 1,900 kms away.

  • They're slapping AI into the title of any article that even vaguely relates, to get more clicks. This unfortunately works extremely well (see this thread).

    Happens with a lot of topics of interest.

Completely infuriating, but more of a commentary on the sad state of incompetent power-hungry law enforcement with tools they don't know how to use than the tools themselves.

Though, the question remains: are the tools built in such a way as to deceive the user into a false sense of trust or certainty?

_Some_ of the blame lies on the UX here. It must.

  • > are the tools built in such a way as to deceive the user into a false sense of trust or certainty? _Some_ of the blame lies on the UX here. It must.

    Are AI code assist tools built in such a way as to deceive the user into a false sense of trust or certainty? Very much so (even if that isn't a primary objective).

    Does any part of the blame lie on the UX if a dev submits a bad change? No, none.

    You are ultimately, solely responsible for your work output, regardless of which tool you choose to use. If using your tool wrong means you make someone homeless, car-less, and also you kill their dog, then you should be a lot more cautious and perform a lot more verification than the average senior engineer.

    • I agree with all that. Maybe the word isn't "blame," then. Surely there must be some code, perhaps moral or ethical, but ideally more rigorously enforceable, which ought to prevent the development of intentionally deceiving tools. Sure, you could say this about all software, but software that can cause actual physical harm ought to be held to a higher standard.

      1 reply →

  • It must land as human's fault or this will become more and more of a pattern to avoid accountability.

    • It’s both.

      The cops need to be held accountable.

      But it’s glaringly obvious that if you build tools like this and give them to the US police this is the outcome you will get. The toolmakers deserve blame too.

  • > they don't know how to use than the tools themselves.

    No, the tools work exactly as they were designed to work. The problem is that the tools are flawed.

    Ultimately, every single one of these decisions should be approved by a human, who should be held responsible for the fuck-up no matter what the consequences are.

    > _Some_ of the blame lies on the UX here. It must.

    No, the blame lies with the person or the group who approve the usage of these tools, without understanding their shortcomings.

    • I miss the days of earlier AI image-recognition software that would emit a confidence percentage.

      New LLM-related AIs are all supremely confident in every assertion, no matter how wrong.

      1 reply →

    • >> are the tools built in such a way as to deceive the user into a false sense of trust or certainty? _Some_ of the blame lies on the UX here. It must.

      > No, the blame lies with the person or the group who approve the usage of these tools, without understanding their shortcomings.

      The person who approved the tools might've understood, but that doesn't mean the user understands. _Some_ of the reason why the user doesn't understand the shortcomings of the tool might be because of misleading UX.

  • Spoken like someone who isn’t built for a sales role at said company.

    Sales will sell the dream, who cares if the real world outcomes don’t align?

Probable cause? What's that?

Judge/magistrate who signed off on the arrest warrant fucked up.

There's a lot of talk here about how the cops just misused the tool and it's their fault, not the AI's.

That's missing the point. These tools provide crazy leverage, and that can be good or bad. Used carefully, they can definitely catch criminals faster, but when misused (or abused) they let the authorities unjustly ruin lives faster.

The question isn't whether AI is perfect. It's whether you trust the authorities with it, to use and abuse as they can. Think about the average cop. Think about the way Trump treats people. Think about the way Israel keeps an ongoing genocide going. Think about the cases of police brutality that happen in the US, the cases of racial profiling. Think about ICE and their behavior, going around kidnapping and killing people. Do you want these people to have more leverage?

I posted this 9 hours ago. Can I get the karma transferred to my account?

  • As much as we try to reward the first person to submit the story, we also have to give credit to the person who submits the best URL and the best version of the story. It looks like your submission was killed due to being an archive.is link, which is not allowed as a URL for a submission (we need the canonical URL submitted to prevent people from using archive services or shorteners to mask domains that may be malicious).

    Sometimes it's just a matter of luck as to who gets the submission right and gets the karma. Sorry it wasn't you this time, but keep submitting good stuff and you'll get your turn.

I hate this headline (not blaming submitter). Police incompetence and negligence jailed her for months and left her stranded in a North Dakota winter. The AI is no more responsible than the cars and airplanes they used.

Edit: this is in reference to the original headline "AI error jails innocent grandmother for months in North Dakota fraud case" not the revised title that it was changed to.

  • Your picking apart the words doesn't matter if police are more incompetent with AI than without it. AI being the catalyst to a worse society is a more interesting and worthwhile topic than whether "AI is responsible" is the right way to phrase it.

  • A jury will probably decide the AI company's level of responsibility at trial. It is an open question til then!

  • If you make the AI software, then your software malfunctioned.

    If the laser printer screws up a page in the middle of the document, and the user doesn't catch it and includes it in the board of directors binder, the laser printer still malfunctioned.

    • Sure, and if the headline had been that it misidentified an innocent person I wouldn't have had a problem, it's specifically saying the AI jailed her that I think is a dangerous framing by removing police responsibility. In the same way in your example I wouldn't say "Laser Printer gives bad presentation"

  • [flagged]

    • Even if she was guilty, they shouldn't have imprisoned her for 3+ months without interviewing her. The AI didn't tell them to do that.

    • I think you actually agree with the GP? As I understand them, they're saying that it's not the AI tool that takes the most blame, it's the police.

    • Even if the id was correct, why would they leave her in jail for 5 months before the first interview and/or court appearance?

    • > Clearly the police felt the AI was "responsible enough" to be the only thing they needed to trust.

      Yes, that's what the OPs "incompetence and negligence" referred to.

Why the fuck does a newspaper need a ‘notifications’ icon in the top right hand corner?

  • How else can they report on BREAKING NEWS if it doesn't at least break your concentration?

  • Because it has an updating-feed-like structure, in which new items can appear.

    Knowing that there are (N) new items is so useful (to some people), that as far back as the 1990s, we developed technology called "RSS" to give you this superpower over a website that doesn't provide anything of the sort. One that simply updates with new stuff when you hit refresh, with no UI to indicate what is new/changed.