Comment by nkrisc
1 day ago
Some basic investigatory police work (the kind they did before AI) would have revealed the mistake before an innocent woman’s life was destroyed.
Yes. But doing the investigation negates much of the incentive for using AI.
Expect similar situations to play out elsewhere: using unreliable tools for decision making is not a good, responsible business plan. And lawyers are just waiting to press the point.
In this case it sounds as though AI could have been used to generate preliminary leads. When someone calls a tip line with information, police don’t just take their word for it, they investigate it. They know that tips they receive may be incorrect. They should have done the exact same here, but they didn’t.
I’m very opposed to AI in general, but this one is clearly human failure.
The noteworthy AI angle is the undeserved credence police gave to AI information. But that is ultimately their failure; they should be investigating all information they receive.
...but this one is clearly human failure.
Absolutely.
The failure starts with tool vendors who market these statistical/probabilistic pattern searchers as "intelligent". The Fargo police failed to fully evaluate these marketing claims before applying them to their work.
So in the same way that the failure rolled downhill, liability needs to roll back up.
AI can provide leads. Someone still needs to verify them and decide.
Generating and verifying bad leads costs money. Not verifying bad leads can cost much more.
At some point, you have to decide if wasting good money on bad intel makes sense.
The article says that the Fargo police claimed to have done "additional investigative steps independent of AI". (Perhaps they're lying, or did a poor job because they thought the extra steps were a formality.)
Given the actual outcome, it's hard to imagine what they actually did. It would have been less embarrassing for them to have said they did no additional investigating at all.
It's not even the right question, really. If they found some crazy coincidence that genuinely seemed to corroborate the identification, it's still not OK that this woman was dragged across the country. They rightly identify that the initial AI scan was wrong to do even if everything that followed was by the book. Our law enforcement processes were developed in a context where this kind of error was much harder because there was no routine way to scan every person in the United States for people who look like your suspect.