Comment by pj_mukh

18 days ago

I'm sorry, but this is a piss-poor excuse. When I Claude-code broken features, I'm 100% responsible.

Why are cops not treated the same way? OP is right, AI is totally irrelevant in this story.

If the point is "cops can't be trusted", why do they have GUNS?! AI is the least of your problems.

I feel like I'm going crazy with this narrative.

> I feel like I'm going crazy with this narrative.

We're only getting warmed up. There are programmers on HN who will take the output of their favorite AI, paste it, and run it. And we're supposed to be the ones who know better.

What do you think an ordinary person is going to do in the presence of something they cannot relate to anything else except an oracle, assuming they even know the term? You put anything in there and out pops this extremely polished-looking document, something that looks better than whatever you would put together yourself, with a bunch of information on it that contains all kinds of juicy language geared up to make you believe the payload. And it does that in a split second. It's absolutely magical to those in the know, let alone to those who are not.

They're going to fall for it, without a second thought.

And they're going to draw consequences from it that you thought could use a little skepticism. Too late now.

When you foster a culture of impunity and passing the buck, don't be surprised when they pass the buck to the inscrutable black box they bought.

You might even argue that's the purpose of the inscrutable black box.

The “I” in “AI” stands for “intelligence”. Cops are using AI facial recognition because it is being sold to them as being smarter and better than what they are currently capable of. Why are we then surprised that they aren’t second-guessing the technology?

  • AI facial recognition is smarter than what they are capable of. That's not the issue. It is much faster than a human, and state-of-the-art models make fewer errors than a human (though the types of errors are not the same).

    The issue is that facial recognition is just not very reliable. Not for humans and not for machines. If you look at millions of people, some of them just look incredibly similar. Yet police apparently thought that was all the evidence they would ever need. A case so watertight there's no point in even talking to the suspect.

    • So the sane solution here is just leaving the unreliable stuff to humans and the reliable stuff to machines. Especially when human wellbeing and freedom are at stake.

      To define the line between the two, calculate the percentage of cases in which mainstream CPUs return anything but integer 4 after adding integer 2 and integer 2, and use that as the threshold for defining "reliable".

  • > The “I” in “AI” stands for “intelligence”

    By that logic the “I” in Siri is 2x more intelligent.

  • Because they are supposed to possess minimum levels of intelligence found in homo sapiens, which includes not believing anything a salesperson says.

    Also, their whole job is dealing with people who constantly lie to them.

As soon as we start to see a pattern of shitty vibe-coded software actually harming people via defects etc. (see: Therac-25), I would hope that the conversation turns to structural change to mitigate risk in aggregate rather than just punitive consequences for the individual programmers who are "responsible". The latter would be a fantastically stupid response and would do little or nothing to reduce future harm.

  • All accountability need not be punitive; we can certainly talk about systemic guardrails. What I find hard to believe is someone saying the Chief of Police saying "We are not going to talk about that today?" is not the biggest scandal, but the AI is.

    •   "Among his accomplishments has been establishing the department’s Real Time Crime Center that leverages technology and data to support officers in responding more effectively to incidents," the city's release said. "Zibolski also prioritized officer wellness initiatives to strengthen mental health resources and resilience within the department. He reinstituted the Traffic Safety Team to focus on roadway safety and proactive enforcement, and ... played an active role in statewide discussions on various issues affecting law enforcement."
      

      From the same article... He spearheaded a push to "leverage technology and data to support officers in responding more effectively to incidents", then that same technology mistakenly ruins a woman's life by passing along a hit to an officer who compared it with her FB photos and said "sure, seems right".

      The technology seems highly relevant here. Plus, as we've seen in the software world, when a mandate comes from the top to use the shiny new magic AI tools as much as possible, the officer may have felt pressured to make arrests using the new system they paid a bunch of money for instead of second guessing whatever it spits out.

    • > someone saying the Chief of Police saying "We are not going to talk about that today?" is not the biggest scandal, but the AI is.

      Who is this "someone"? OP's article and the discussion here are absolutely not neglecting the human factors and general institutional failure that made this possible. But it's also true that without these "AI" tools, it would never have happened.


You are right, IMO, to question why North Dakota police were able to take custody of this Tennessean woman in the first place; you’d think something like that should require far more substantial evidence than facial recognition.

But then, what good is facial recognition? Would it have been okay for this woman’s life to have been merely invaded because she matched a facial recognition system? Maybe they can just secretly watch you so you’re not consciously aware of being investigated? Should that be our new standard: if a computer thinks you look like a suspect, you can be harassed by police in a state you’ve never even been in?

I just don’t see a legitimate way for AI to empower officers here without risking these new harms. That’s why I lean towards blaming the AI tech, rather than historically intractable problems like the reality of law enforcement.

  • Having a facial recognition match make you a suspect and cause the police to ask you some questions doesn't seem completely unreasonable to me. Investigations can certainly begin with weak forms of evidence (like an anonymous tip), you just require a higher standard of evidence for a search warrant, surveillance, or an arrest. A facial recognition match shouldn't be probable cause for an arrest warrant, but it still might be a useful starting point for a detective looking for actual evidence.

    • It is absolutely not reasonable to use low-quality photos to decide that someone halfway across the country, with no history of even leaving their local area, is 'a suspect'.


I feel like I'm going crazy seeing anyone suggest that the AI, and the producers, promulgators, and apologists of AI, played no part and bear none of the responsibility in this story.

  • Because the responsibility lies on the part of the criminal justice system who used the flimsy AI facial recognition evidence to arrest and hold her for months. If AI didn't exist, and this same incident happened because a human looked at a photograph of the woman and said "I think this might be the same person who committed the crime in the video", it would be insane to blame the people who invented photographs or video recording for her arrest.

    • The problem is in how these tools are sold to them. Not everybody can be an expert in every topic. Like in every other application area, these AI systems are promoted as being able to do about a thousand times more, and a million times more reliably, than they actually can. Of course the departments can be expected to do some due diligence and instruct their officers, but the lies told by AI system suppliers are where a large part of the blame belongs. Manufacturers of cameras or CCTV systems never told the police department that the system would do their job for them.

You are exactly correct. Cops cannot be trusted. We spent a lot of time pointing that out in 2020. AI is the least of our problems with policing.

Unfortunately, a lot of people are certain it won't happen to them, and it has been practically impossible to establish any kind of accountability. It has only gotten worse since 2020.

  • Are we just gonna pretend that the wide rollout of bodycams hasn't shown that, the overwhelming majority of the time, the cops weren't in the wrong, to the point that the same people who demanded them now want them gone?

    • Citation needed. Who are these people who wanted improved police oversight who are supposedly now fighting for the removal of bodycams?

But it's not totally irrelevant in this story.

Cops are already susceptible to confirmation bias, and for "efficiencies" they are delegating part of their job to apparently magical tools that will only increase their confirmation bias. And because it is for efficiency you can bet they won't be given extra time to validate the results.

What or who is at fault isn't either/or, it's a bunch of compounding factors.

You’re on the right track here, but I don’t think it should be hand-waved away as “the least of your problems”. It’s yet another weapon that police in the USA can use against the population with impunity. They’re going to have to reckon with all of this in the coming years: cops having guns and armored cars, “qualified immunity”, the “stop resisting” workaround for brutality, and now this AI.

> Why are cops not treated the same way? OP is right, AI is totally irrelevant in this story.

It's absolutely absurd. Arguing that AI is the problem is itself a way of shedding responsibility to the machines. The people arguing that AI is the problem are essentially (philosophically) the same people who will say it was the AI's fault.

The thing it most reminds me of is people trying to stop the deaths and injuries that result from "swatting" by being really angry at people who "swat" and proposing the harshest punishment for it that they can come up with (or that outdoes anyone else in the thread).

The problem with swatting is that police were showing up to the houses of harmless people based on anonymous phone tips and murdering them. You guarantee swatting will work indefinitely when you indemnify the cops.

You don't need AI for injustice in the US justice system. There is literally no part of the US justice system that makes sense at all, and even in the best-case scenario, when the guilty are caught, tried, and punished, it is tremendously wasteful, cruel, and ass-backwards. Juries are basically the AI of the US justice system, allowing the prosecutorial and enforcement apparatus to be infinitely cruel, illogical, self-serving, and incompetent. 12x full AGI. AI couldn't do any worse.

> I feel like I'm going crazy with this narrative.

You're not alone.

You’re going crazy because up until this exact moment you’ve never had to confront the reality that these tools, placed into the hands of the common man, are viewed as authoritative and lack any accountability or consequence for misuse.

For anyone who has been victimized by law enforcement or governments before, we’ve been warning about this shit for decades. About the lack of consequence for police brutality. The lack of consequence for LPR abuse. The lack of consequence for facial recognition failures and AI mismatches.

You need to understand that by using these systems correctly and holding yourself accountable, you are in the minority. Most people do not think that critically, and are all too happy to point the finger at the computer when things go badly.

And until you accept that, and work to actually hold folks accountable instead of deflecting blame away from the tool, then this won’t actually change.

  • Your answer presumes we cannot hold people accountable. I think that is incorrect.

    • Do you mean, hypothetically, could society hold law enforcement personnel accountable for mistakes, bad judgement, flagrant criminal conduct, and horrendous abuse of anyone and everyone? Certainly, a large-scale and comprehensive restructuring of America’s law enforcement and prosecutorial system is legally possible.

      However, I hold the opinion that if you are discussing actual reality, based on decades (if not, near certainly, the entire period since the Civil War) of historical examples and the current “majority” position of the US electorate, the answer is a nearly unqualified NO. We cannot, or will not, hold law enforcement accountable for even intentional, planned, and malicious conduct in the vast majority of cases. There is practically no accountability at all, and that’s just for thoroughly proven intentional conduct. Bad judgement, alleged mistakes, etc. are even less likely to result in any action.

      The reality of the legislation and precedent ensure it. It’s not a bug, it’s a feature.

It's called qualified immunity. Many support its repeal. I hope you join them, and convey the same to your local representatives and candidates. Until it is reformed few if any officers or administrators of criminal justice in the United States will ever feel any type of accountability.

Short of video evidence of blatant gun to the back of the head style homicide, qualified immunity means most law enforcement officials are never held accountable for their miscarriages of justice. Criminal charges against officers are exceedingly rare. She should be able to sue this detective directly. Of course she can sue the government too, and should. But without any personal consequences for the people carrying out these acts, taxpayers will continue to bail out these practices without ever noticing. Your own government should not be a shield for a police officer who has violated you or your neighbors.

  • > Many support its repeal.

    There's nothing to repeal. Qualified immunity is a doctrine that the judicial branch made up out of thin air, with no legislative backing.

    But agreed, we need legislatures to write laws that expressly hold police accountable, and declare that they are not shielded from liability when things go wrong due to their own failures and negligence.

    • Not that it changes your point, but, um actually:

      While the origins of qualified immunity are judicial, some states loved the idea so much they went and made it statutory too. Louisiana’s 2024 bill explicitly removes negligence as an exception (negligence being a valid way to get around qualified immunity under jurisprudence at the federal level and in most states). Louisiana requires intentional violations or criminal actions to even be able to bring a claim.

  • > Short of video evidence of blatant gun to the back of the head style homicide qualified immunity means most law enforcement officials are never held accountable for their miscarriages of justice.

    And frequently not even then.

You can hold someone responsible only after they've actually fucked up. And with the way things move in the criminal justice system, that can take months to discover. Holding them responsible doesn't really fix anything, it's purely reactive.

Dude, not sure which team you are working on, but across many, many domains - corporate, business, and political - people are already delegating full decision-making and responsibility to AI. Unless national governments and standards institutions create and enforce ironclad AI governance laws, situations resembling what this poor granny went through are going to occur again and again.

  • There’s money to be made selling AI plausible deniability machines that allow end users to enact unethical policies while evading accountability, but only if all moral responsibility ostensibly falls on the end user and none on the dealer.

When are cops ever treated the same way as the rest of us?

  • Well, in most cases I would prefer a cop's word to outweigh the word of an average Joe.

    • You should tell that to Angela Lipps; I'm sure she told every cop she came in contact with that she had never been to Fargo. Cops have a responsibility to do their job, and part of that job is listening and relying on proof. ALL those cops were either too lazy or too afraid of their superiors. This is unacceptable given the amount of power and information they have access to. We should either de-fund the police system or reform the hell out of it. BTW, where was her state representative during this fiasco?!?

    • A juror's belief that law enforcement personnel are more credible than other witnesses, especially when phrased as a belief that applies to law enforcement personnel as a generic group, is a well-established basis for a challenge for cause leading to the exclusion of that person from a jury. The US jury system is built explicitly on excluding these kinds of beliefs from juries in order to ensure the fairness, impartiality, and individual and case/witness specificity of “triers-of-fact”.

      I could understand someone who disagrees with it, but your position would be antithetical to current and historical thought on what defines a fair jury.


    • Why should having that particular job give you that privilege? All should be equal before the law.

I mean, this is the USA we're talking about. Cops are given huge authority over everyone else, with poor accountability. AI just lets them pretend to be even less accountable. And by "pretend" I of course mean "get away with it".

See, AI was used to accelerate arrest and jailing, but not to follow through. It was not used to ensure her well-being. Clearly this demonstrates that AI contributes to treating humans inhumanely, and demonstrably AI is not being used to improve anyone's quality of life. Stop making excuses like "AI is not at fault here".