Comment by alilleybrinker
5 months ago
Cathy O'Neil's "Weapons of Math Destruction" (2016, Penguin Random House) is a good companion to this concept, covering the "accountability sink" from the other side: that of the people constructing or overseeing such systems.
Cathy argues that the use of algorithms in some contexts permits a new scale of harmful and unaccountable systems that ought to be reined in.
https://www.penguinrandomhouse.com/books/241363/weapons-of-m...
Brings to mind old wisdom:
"A computer can never be held accountable, therefore a computer must never make a Management Decision." IBM presentation, 1979
"A computer can never be held accountable, therefore all Management Decisions shall be made by a computer." - Management, 2 seconds later.
Therefore all management decisions are made by the people writing the code.
Hence coders are the new managers; managers just funnel the money around, a job which can be automated.
Admittedly, the context matters: "we are trying to sell to Management, therefore let's butter them up and tell them they make great decisions and won't get automated away," while the next page of the presentation says "we will automate away 50% of the people working for you, saving globs of money for your next bonus."
IBM in 1979 was not doing anything different from IBM in 2024. They were just more relevant.
> presentation, 1979
= Presentation, 21st Century
A computer is not alive. A computer system is a tool that can do harm. Like any tool in a machine shop that begins to do harm or damage, it can be disconnected or unplugged. But a tool is not responsible. Only people are responsible. Accountability is anchored in reality by personal cost.
= Notes
Management calculates the cost of not unplugging the computer that is doing harm. Management often calculates that it is possible to pay the monetary cost for the harm done.
People in management will abdicate personal responsibility. People try to avoid paying personal cost.
We often hold people accountable by forcing them to give back (e.g. community service, monetary fines, return of property), by sacrificing their reputation in one or more domains, by putting them in jail (they pay with their time), or in some societies, by putting them to death ("pay" with their lives).
Accountability is anchored in reality by personal cost.
See also: “To err is human, but to really foul things up requires a computer.” —Paul Ehrlich
To err requires a computer
To really foul things up requires scalability
I want to note here that this is illegal in the EU. Any company that makes decisions algorithmically (EDIT: actually, by an AI, so maybe not entirely applicable here) must give people the ability to escalate to a human, and must be able to tell the user why that decision was made the way it was.
It's much easier to hold an algorithm accountable than an organization of humans. You can reprogram an algorithm. But good luck influencing an organization to change.
That is not accountability. Can the algorithm be sent to jail if it commits crimes?
Yes. Not literally, of course, but it can be deleted/decommissioned, which is even more effective than temporary imprisonment (it's equivalent to the death penalty, but obviously without the moral component).
Is the point revenge or fixing the problem? Fixing the algorithm so it never does that again is easy. Or is the point to instill fear?
Interesting that you mention jail… the rule of law is kind of the ultimate accountability sink.
You now have to find someone who is not only responsible for the algorithm but also competent and permitted to fix it. Isn't it clear that this is very hard?
"Cathy argues that the use of algorithm in some contexts permits a new scale of harmful and unaccountable systems that ought to be reigned in."
Algorithms are used by people. An algorithm only allows "harmful and unaccountable systems" if people, as the agents imposing accountability, choose not to hold the people acting by way of the algorithm accountable, on the basis of their use of the algorithm; but that really has nothing to do with the algorithm. If you swapped in a specially-designated ritual sceptre for the algorithm in that sentence (or, perhaps more familiarly, allowed "status as a police officer" to confer both formal immunity from most civil liability and practical immunity from criminal prosecution for most harms done in that role), it would function exactly the same way: what enables harmful and unaccountable systems is humans choosing not to hold other humans accountable for harms, on whatever basis.
Yeah, I think you're conflating the arguments of "Weapons of Math Destruction" and "The Unaccountability Machine" here.
"The Unaccountability Machine," based on Mandy's summary in the OP, argues that organizations can become "accountability sinks" which make it impossible for anyone to be held accountable for problems those organizations cause. Put another way (from the perspective of their customers), they eliminate any recourse for problems arising from the organization which ought to in theory be able to address, but can't because of the form and function of the organization.
"Weapons of Math Destruction" argues that the scale of algorithmic systems often means that when harms arise, those harms happen to a lot of people. Cathy argues this scale itself necessitates treating these algorithmic systems differently because of their disproportionate possibility for harm.
Together, you get big harmful algorithmic systems, able to operate at a scale that would be impossible without technology, existing inside organizations that act as accountability sinks. So you get mass harm with no recourse to address it.
This is what I meant by the two pieces being complementary to each other.