Comment by florkbork
4 hours ago
One of the problems is that the fundamentals of their tech work "just enough".
Take, for instance, their puff-piece demo (https://www.youtube.com/watch?v=rxKghrZU5w8), which boils down to:
- semantic data integration/triplestores/linking facts in a database.
- feature extraction from imagery / AI object detection raising alarms
- alarms pushed to human operators
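If you squint, the whole pattern fits in a few dozen lines. A minimal sketch in Python with rdflib - every name, URI, and record below is made up, so this illustrates the pattern, not the vendor's actual stack:

    # Sketch of the demo's pattern: detections become triples, a SPARQL
    # query chains "facts", and any match is pushed to a human operator.
    # All names and URIs are hypothetical.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF

    EX = Namespace("http://example.org/")
    g = Graph()

    # 1. Feature extraction: an object detector flagged a vehicle somewhere.
    g.add((EX["detection-001"], RDF.type, EX.Detection))
    g.add((EX["detection-001"], EX.objectClass, Literal("truck")))
    g.add((EX["detection-001"], EX.seenAt, EX["site-42"]))

    # 2. Semantic integration: link the sighting to pre-existing records,
    #    however stale or wrong those records may be.
    g.add((EX["site-42"], EX.ownedBy, EX["person-7"]))
    g.add((EX["person-7"], EX.flaggedIn, EX["watchlist-A"]))

    # 3. Chain the "facts" in one query: weak detection + old ownership
    #    record + a watchlist entry = an alarm on a person.
    ALARM = """
    SELECT ?person ?site WHERE {
        ?d a ex:Detection ; ex:objectClass "truck" ; ex:seenAt ?site .
        ?site ex:ownedBy ?person .
        ?person ex:flaggedIn ?list .
    }
    """

    # 4. Push to a human operator, who sees a tidy conclusion rather
    #    than the shaky chain of inferences behind it.
    for row in g.query(ALARM, initNs={"ex": EX}):
        print(f"ALERT: {row.person} via {row.site}")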
You or I might expect this to be held to a high standard - chaining facts together like this better be darned right before action is taken!
But what if the question their software solves isn't "we look at a chain of evidence and act on it in a legal/just/ethical manner" but "we have already decided to act and need a plausible pretext", akin to parallel construction?
When you assess it by that criterion, it works fantastically: you can dump in loads and loads of data, get some wonky correlations out, and go do whatever you like. Who cares if it's wrong? Double-checking is hard work, and someone else will "fix" it for you if you make a mistake - by lying, by giving you immunity from prosecution, by flying you out of state, or by going on TV. Or, uh, well, that's a future-you problem.
To take a non-US example: https://en.wikipedia.org/wiki/Robodebt_scheme
- The debt calculations were flat-out wrong: annual income was averaged evenly across fortnights, manufacturing debts that never existed (see the sketch after this list).
- The unstated goal/dogwhistle at the time was punishing the poor; the scheme cost more than it would ever recover.
- It was only partially stopped after public outcry, via a few ministerial decisions.
- It took years, people dying, a royal commission, and a change of governing party to put a complete stop to it.
- There were no real consequences for the senior political figures who directly enacted it.
- There were limited consequences for 12 of 16 public servants: no arrests, no official job losses, some minor demotions.
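To make "flat-out wrong" concrete, here is a toy model of the income-averaging flaw (all dollar figures and rates are invented for illustration; they are not the real Centrelink rules):

    # Toy model of Robodebt's income-averaging flaw. All numbers are
    # hypothetical, not the actual Centrelink rates.
    FORTNIGHTS = 26
    FULL_BENEFIT = 550       # benefit paid per fortnight when income is nil
    INCOME_FREE_AREA = 300   # income per fortnight before the benefit tapers
    TAPER_RATE = 0.5         # benefit lost per dollar above the free area

    # Reality: $2,000/fortnight for half the year, then unemployment.
    actual_income = [2000.0] * 13 + [0.0] * 13

    def entitlement(income: float) -> float:
        """Benefit payable for one fortnight, given that fortnight's income."""
        reduction = max(0.0, income - INCOME_FREE_AREA) * TAPER_RATE
        return max(0.0, FULL_BENEFIT - reduction)

    # Correct assessment: full benefit while unemployed, nothing while working.
    correctly_paid = sum(entitlement(i) for i in actual_income)      # $7,150

    # Averaging method: smear the annual total evenly over 26 fortnights,
    # so the person appears to earn $1,000 every single fortnight.
    averaged = sum(actual_income) / FORTNIGHTS
    assumed_entitlement = entitlement(averaged) * FORTNIGHTS         # $5,200

    # The gap becomes a "debt" even though every payment was legitimate.
    print(f"phantom debt: ${correctly_paid - assumed_entitlement:,.2f}")

In this toy example, a person who only received benefits while genuinely unemployed ends up owing a $1,950 "debt" purely because the arithmetic pretends their income arrived in even fortnightly slices.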
If the goal of the machine is to displace responsibility, the example above shows it did its job.