Accountability sinks

5 months ago (aworkinglibrary.com)

I remember experiencing this in one of the German airports/airlines and having that exact thought.

It was a fully automated airport, where check-in is self-service and you only interact with computers.

Eventually, when I inserted my boarding pass, I got a printed piece of paper back saying that they had to change my seat from an aisle seat to a middle seat.

I then tried to find someone to talk to the entire way, but computers can only interact in the ways their UI was designed for, and no programmer had accounted for or cared about my scenario.

The ground attendant couldn't have done anything, of course, because it wasn't within the scope of her job, and this was a part of Germany where nice was not one of the stereotypes.

Eventually I got a survey a week later, but about a different leg of the flight. So could I really complain there, when that leg was fine? I had a paranoid wonder whether that was intentional.

  • Germany is somewhat rubbish.

    I arrived at the train station at night after a six-hour train journey. The German Railways app shows my final connecting train leaving in 45 minutes. I wait in the cold, sitting in the station building because it's warmer there. Five minutes before departure I go to the platform. The local display shows no train, even though the app still shows it. I waited for nothing.

    Syncing the app with the train station? Somebody else's problem.

    In half an hour there should be a replacement bus for another cancelled train. There are no signs in the app or the station that indicate where that bus is to be found. You just need to know.

    Putting up signs for replacement buses, for degraded service that was planned long in advance and has already been running for two months? Somebody else's problem.

    An old man asks if the bus will allow him to catch his train connection at its destination. The bus driver snaps at him for asking that question -- not his job. Somebody else's problem.

    Training the bus driver, whose bus is an official replacement for a train, to know that? Clearly also somebody else's problem, not German Railways'.

    It's pitch black outside, the windows are opaque with moisture, so I can't tell where we are even though I was born in the area and lived here for 18 years. The bus driver makes no announcements about the stops, and there is no display. Knowing when to request a stop to get off? Somebody else's problem.

    The bus is ice cold for an hour. When an old lady gets off and tells the bus driver that it was freezing the whole journey, he asks, "well, what can you do?". Bewildered, she answers, "turn on the heating?" He didn't expect that. He seemed to think that everything except driving was somebody else's problem.

    This is just one night's bus journey story. I also got my SIM card deleted and a parcel was lost in the subsequent week. Documenting here the amounts of "somebody else's problem" I encountered in their customer support hotlines is somebody else's problem for me for now.

    • There is some degree of accountability for DB: other organizations like the Swiss and Austrian railways have stopped taking DB's schedules seriously, and have stopped holding connections for its trains or offering through bookings.

    • I used to work with German people (I’m Finnish) and despite being pleasant people, simple things took a long time. It was always something to do with the responsible person not being available, perhaps on holiday or sick leave, and it wasn’t possible for anyone else to take over their responsibilities.

      I got the feeling that papers were being pushed around from desk to desk until a vacant desk came along and progress stalled.

      In the same job, I worked with Americans. Very nice people and super easy to get along with. Always friendly and with a healthy sense of humor. A certain lightness of heart was always present even when dealing with urgent or negative matters. Only thing was that they made a lot of mistakes that often didn’t seem accidental — I saw a bit of negligence, along the lines of “if someone took just one look at this, they’d be able to tell in seconds what’s wrong”.

    • > It's pitch black outside, the windows are opaque with moisture, so I can't tell where we are even though I was born in the area and lived here for 18 years. The bus driver makes no announcements about the stops, and there is no display. Knowing when to request a stop to get off? Somebody else's problem.

      I have experienced this many times. Thankfully the bus drivers here in Hungary are pretty helpful (well, in my county at least), and worst case: you ask other passengers who also happen to be friendly. When it is pitch black outside and the windows are opaque due to moisture, it is not only your problem, but everyone else's, and people often find a way to cooperate and work together.

  • I had a similar experience in Germany about a year ago. Train stations are mostly self-service now. The ticket kiosk ate my €50 and promptly rebooted. It didn’t print a receipt or anything. The only human I could find was a security guard. He told me to call the number on a sticker on the machine. The person who answered couldn’t speak English. My €50 is out there somewhere but it would cost me more than that to track it down.

    • That’s a sad experience and I would definitely try to chase those robots. Even though German public transport impresses with its ease of use and quality, when it comes to human service you can find yourself in a peculiar position. Particularly if you are not German and happen to be in one of those international cities where Germans are fed up with visitors. You wave goodbye to your €50 and keep a story to tell, that’s all.

      Sadly I don’t expect any of this to get better with robots and LLMs and the like. We will be crying out to meet a human sooner rather than later, and my hope is that this cry will eventually get us to the dawn of a new era when you actually have people in the loop, just for humanity’s sake.

    • > The person who answered couldn’t speak English.

      It sounds like this was the main point of failure. I’m not sure it can be considered an error in the system. I’d consider it a risk inherent in traveling in a country without knowing its language.

    • I had something similar happen to me on the Tube in London. My ticket got demagnetised (combined intercity rail plus travel card tickets are/were still magnetic stripe) and there was no staff at the station, so I could not get the barrier open to leave.

  • Many businesses build walls around themselves like this.

    Hiding the customer service number. Making an FAQ that is missing the common but time-consuming questions. Chatbots instead of people.

    I remember when Amazon once sent me a package, said it was delivered, but it was nowhere to be found. There was no way to get help. They did have an FAQ at the time that said to check in the bushes.

    What was annoying was that the search auto-complete had many variations of "package not found says delivered".

    Now it is a little more filled out, but still.

    • I've got an actual email address to a real business, but the humans* are struggling with the concept of "$company created the account with the wrong billing address, ignoring my agent who could have received it when my agent did contact $company, it's provably $company's fault that the bills were not received, so $company must tell me who this debt collector is and refund me for the late payment penalties and admit their own fault to the CRA".

      * not that I could tell if they were LLMs

    • I just switched ISPs, and the new one has one of the most obnoxious phone processes I've ever interacted with.

      I go through the usual hoops: press 1 for English, "we detected an account linked to the number you're calling from, is that what you're calling about?" ... Press 1 for support, press 1 for Internet, "no outages detected in your area. Most problems can be solved by rebooting your modem. Press 1 if you want to try rebooting." (Pause)... "thank you for your call" *click*

      First off, rebooting doesn't solve my problem. But I guess I have to try anyway?

      So I call back, this time I do pick to reboot, and get "your modem will reboot in the next few minutes, and could take up to 10 minutes to come online. If things still aren't working, try our online support chat"

      So, basically there doesn't seem to be any phone technical support (with a human), at all.

      Also, rebooting is offensive to me as a programmer. Kernel updates and memory leaks are the only reasons you should ever need to reboot. How absolutely shitty is modem firmware that the ISP actually spent the time to build this reboot system out?? (Never mind that I personally don't feel I've ever had a modem/ISP problem actually solved by rebooting.)

      Made me wonder if I should have switched.

    • I've started just sending physical paper letters if I need to communicate with a company. It seems to have a better success rate.

  • At this point everyone needs to get in the habit of using small claims court. You can often do it online in a few minutes these days.

    Make a good faith effort to get your problem addressed, and record the fact that you've done so to use in your hearing if it gets that far. Then just file the claim. Generally they fold immediately, and this way you incentivize actual customer service in the only language they understand.

    • >At this point everyone needs to get in the habit of using small claims court. You can often do it online in a few minutes these days.

      What country is this "small claims court" in? And are you sure that country's small claims court works the way your country's does?

    • I do agree, but I also feel that if people did this en masse, the system would get a rate limiter. After 2 claims per year you would be barred for being "vexatious".

  • It’s a way to fully automate a Brazil scenario. [https://en.m.wikipedia.org/wiki/Brazil_(1985_film)]

    At least in that scenario, there were humans in the bureaucracy who could (but didn't particularly) feel bad.

    In this scenario, no humans need to be directly involved, which allows the scope and scale to be even more dystopian.

  • The experience is quite similar with DHL when you have a non-standard question. The chatbot is utterly useless and there is no way to contact a human being if you're not a business customer.

    • There is a way, but it is difficult. Most companies don't want you to call. Operators cost a ton of money. And what choice do you have really? Choose a different parcel service? All others are worse.

  • I can provide another POV on that story. We checked in as a family of four and were assigned seats in four different rows, with a two- and a four-year-old. Only when boarding the plane did we have the chance to raise this with a human, and we were assigned new seats.

    So this might be the reason you had to change seats.

    • They claimed they had to change planes, though I had selected that seat when booking the flight, and there were no humans available to address such issues.

  • Does Germany have a consumer protection agency? I might have complained there after the flight.

  • Norman talks about how systems need a way to veto or override the automatic decisions to be humane.

    That book is now almost old enough to have a programming job.

I've long thought that that is one of the main functions of corporations. There's a reason they're called limited liability. The fact that you can conjure up new companies at a whim makes it easy to shuffle responsibility into an obscure corner.

This is a strong reason that corporations should not be considered people. People are long-lived entities with accountability and you can't just create or destroy them at will.

  • At a more basic level money eliminates the need for social obligation. There is no expectation of reciprocity or mutual respect. You pay for a product, it is delivered and that is the end of it. Corporations do this within their own internal economy or with partner companies. A cost centre pays an amount of money and delegates responsibility.

    • Enjoying the benefits of living in a society (a degree of trust, no deadly combat, services like police) without suffering its liabilities (mandatory politeness and respect).

      It's the profitable course.

  • I agree with the feeling, but State orgs are effectively eternal (think the various level of government) and still great at diffusing accountability to various scapegoats

    • State orgs (and federal ones) often have lengthy processes before they can do stuff, though.

      As well, after they do something there is typically a recourse path provided by that org for you to protest their decisions, and if that doesn't resolve favorably, you can also sue them.

      Which differs from the article because the corporation doesn't provide any protest path nor did it have to publish any memo/etc describing how they're going to downsize cleaning for cost-savings. But you can still sue them (but good luck showing damages over an unclean room)!

  • > I've long thought that that is one of the main functions of corporations.

    Ambrose Bierce already hit the nail on the head in 1911:

    "Corporation, n. An ingenious device for obtaining individual profit without individual responsibility."

    It has long baffled me that this isn't talked about more – I guess everyone is just so used to it. As far as I'm concerned, the entire concept of "fining a company" should be abolished and replaced with the criminal prosecution of those who did the illegal thing.

  • Just to be clear, LLC is supposed to be about limited financial liability, not criminal liability. But we seem to have forgotten that on the way.

  • The buck has to stop somewhere and a human has to be responsible for things.

    • Oh sweet summer child. Companies are frequently structured and created in multiple jurisdictions to obscure beneficial ownership, responsibility, profits and taxes.

  • Limited liability corporations are a relatively new concept and there is certainly scope to change how/when/where they could be created and run, for example.

    • I recently had to submit a copy of my drivers license to the feds, for my LLC. I have heard that they are working on the accountability shield for LLCs.

  • > People are long-lived entities with accountability and you can't just create or destroy them at will.

    This notion is currently being contested

  • Yeah, this dysfunction is not a bug, it's the feature. In some ways, it's useful, because it allows positive risk-taking that could not be taken if anyone was actually held (or even just felt) accountable. But at this point, as a society, we've shifted too far towards enabling accountability-free behavior from corporations.

    I think a good example of the dichotomy here is Starlink. On one hand, it's an incredibly useful service that often has a positive impact. On the other hand, a private corporation is just polluting our low earth orbit with thousands of satellites.

    It's not clear to me where exactly the right balance for something like this should be, but I do think that as of today, we're too far on the laissez-faire side.

    • > I think a good example of the dichotomy here is Starlink. On one hand, it's an incredibly useful service that often has a positive impact. On the other hand, a private corporation is just polluting our low earth orbit with thousands of satellites.

      Seems like a terrible example to me. I'm no fan of Musk, but I don't see how that is "polluting".

      They provide an excellent service. They're a minor hindrance for astronomy, true, but I think it would be hard to make a good case that a few people having a good view of the sky is more important than millions having good communications.

      Then there's the fact that there's nothing really special about Starlink. It's merely one of the first users of cheap rocket launches. It could be somebody else, or 1000 different entities launching smaller numbers; in the end, the effect on astronomy would be the same.

    • Sorry, but I find your example totally wrong. Things like radio frequencies and space launches are tightly regulated by governments; no corporation can launch satellites at will without permission from the government(s).

Cathy O'Neil's "Weapons of Math Destruction" (2016, Penguin Random House) is a good companion to this concept, covering the "accountability sink" from the other side of those constructing or overseeing systems.

Cathy argues that the use of algorithms in some contexts permits a new scale of harmful and unaccountable systems that ought to be reined in.

https://www.penguinrandomhouse.com/books/241363/weapons-of-m...

  • Brings to mind old wisdom:

    "A computer can never be held accountable, therefore a computer must never make a Management Decision." IBM presentation, 1979

    • "A computer can never be held accountable, therefore all Management Decisions shall be made by a computer." - Management, 2 seconds later.

    • Admittedly the context matters: "we are trying to sell to Management, therefore let's butter them up and tell them they make great decisions and won't get automated away", while the next page of the presentation says "we will automate away 50% of the people working for you, saving globs of money for your next bonus".

      IBM in 1979 was not doing anything different from 2024. They were just more relevant.

    • > presentation, 1979

      = Presentation, 21st Century

      A computer is not alive. A computer system is a tool that can do harm. It can be disconnected or unplugged like any tool in a machine shop that begins to do harm or damage. But a tool is not responsible. Only people are responsible. Accountability is anchored in reality by personal cost.

      = Notes

      Management calculates the cost of not unplugging the computer that is doing harm. Management often calculates that it is possible to pay the monetary cost for the harm done.

      People in management will abdicate personal responsibility. People try to avoid paying personal cost.

      We often hold people accountable by forcing them to give back (e.g. community service, monetary fines, return of property), by sacrificing their reputation in one or more domains, by putting them in jail (they pay with their time), or in some societies, by putting them to death ("pay" with their lives).

      Accountability is anchored in reality by personal cost.

  • I want to note here that this is illegal in the EU. Any company that makes decisions algorithmically (EDIT: actually, by an AI, so maybe not entirely applicable here) must give people the ability to escalate to a human, and be able to give the user information for why that decision was made the way it was made.

  • It's much easier to hold an algorithm accountable than an organization of humans. You can reprogram an algorithm. But good luck influencing an organization to change.

  • "Cathy argues that the use of algorithms in some contexts permits a new scale of harmful and unaccountable systems that ought to be reined in."

    Algorithms are used by people. An algorithm only allows "harmful and unaccountable systems" if people, as the agents imposing accountability, choose to not hold the people acting by way of the algorithm accountable on the basis of the use of the algorithm, but...that really has nothing to do with the algorithm. If you swapped in a specially-designated ritual sceptre for the algorithm in that sentence (or, perhaps more familiarly, allowed "status as a police officer" to confer both formal immunity from most civil liability and practical immunity from criminal prosecution for most harms done in that role), it functions exactly the same way: what enables harmful and unaccountable systems is when humans choose not to hold other humans accountable for harms, on whatever basis.

    • Yeah, I think you're conflating the arguments of "Weapons of Math Destruction" and "The Unaccountability Machine" here.

      "The Unaccountability Machine," based on Mandy's summary in the OP, argues that organizations can become "accountability sinks" which make it impossible for anyone to be held accountable for problems those organizations cause. Put another way (from the perspective of their customers), they eliminate any recourse for problems arising from the organization, problems which it ought in theory to be able to address but can't because of its form and function.

      "Weapons of Math Destruction" argues that the scale of algorithmic systems often means that when harms arise, those harms happen to a lot of people. Cathy argues this scale itself necessitates treating these algorithmic systems differently because of their disproportionate possibility for harm.

      Together, you can get big harmful algorithmic systems, able to operate at scale which would be impossible without technology, which exist in organizations that act as accountability sinks. So you get mass harm with no recourse to address it.

      This is what I meant by the two pieces being complementary to each other.

This is a terrible example because it's simply not true:

> Davies gives the example of the case of Dominion Systems vs Fox News, in which Fox News repeatedly spread false stories about the election. No one at Fox seems to have explicitly made a decision to lie about voting machines; rather, there was an implicit understanding that they had to do whatever it took to keep their audience numbers up.

Rupert Murdoch conceded under oath that "Fox endorsed at times this false notion of a stolen election."[1] He knew the claims were false and decided not to direct the network to speak about it otherwise.

Communications from within Fox, by hosts, show they knew what they were saying was false.[2]

These two examples clearly fit the definition of lying [3].

The "External Links" section of Wikipedia gives references to the actual court documents that go into detail of who said what and knew what when [4]. There are many more instances which demonstrate that, indeed, people made explicit decisions to lie.

[1] https://www.npr.org/2023/02/28/1159819849/fox-news-dominion-...

[2] https://www.nbcnews.com/politics/elections/dominion-releases...

[3] https://www.dictionary.com/browse/lie

[4] https://en.m.wikipedia.org/wiki/Dominion_Voting_Systems_v._F...

  • I think the point of the citation is that there wasn't an original decision to lie about it.

    It happened without coordination and later on wasn't stopped by the people in management, either.

    It was number-2 all the way up.

  • Voting machines are hacked every year at DEFCON's Voting Village. They're wildly insecure and no one should trust them. Frankly, any claims of manipulation of voting machines are at worst plausible.

    • Accounts are hacked every second. They are wildly insecure and no one should trust them. Frankly, saying you are just a hacked account is at worst plausible.

      Your logic is flawed at the core. With that train of thought you can infer anything.

      Why trust voting at all, if it can be manipulated?

My suspicion is that one of the major appeals of automation, and especially "app-ification", for management and C-suite types is specifically its ability to break accountability links.

A lot of corporations now seem to have a structure where the org chart contains the following pattern:

- a "management layer" (or several of them) which consists of product managers, software developers, ops people, etc. The main task of this group is to maintain and implement new features for the "software layer", i.e. the company's in-house IT infrastructure.

Working here feels very much like working in a tech company.

- a "software layer": This part is fully automated and consists of a massive software and hardware infrastructure that runs the day-to-day business of the company. The software layer has "interfaces" in the shape of specialized apps or devices that monitor and control the people in the "worker's layer".

- a "worker's layer": This group is fully human again. It consists of low-paid, frequently changing staff who perform most of the actual physical work that the business requires (and that can't be automated away yet) - think Uber drivers, delivery drivers, Amazon warehouse workers, etc.

They have no contact at all with the management layer and little contact, if any, with human higher-ups. They get almost all their instructions through the apps and other interfaces of the software layer. Companies frequently dispute that those people technically belong to the company at all.

Whether or not those people are classified as employees, the important point (from the management's POV) is that the software layer serves as a sort of "accountability firewall" between the other two layers.

Management only gives the high-level goal of how the software should perform, but the actual day-to-day interaction with the workers is exclusively done by the software itself.

The result is that any complaints from the worker's layer cannot go up past the software - and any exploitative behavior towards the workers can be chalked up as an unfortunate software error.

  • In my opinion it's even more complicated, as the "management layer" is also using these tactics against itself. "You must use an iPhone", "You cannot expense this trip with company card", "Your permission request to do XYZ in our cloud was automatically declined", "This tool only works in Google Chrome". Why? "The rules say so" / "The system says so". Who set "the rules"? Who set up "the system"? Nobody seems to know, and digging into it yourself is a herculean effort and usually a waste of time.

  • If you think back to less automated times, management was the programming: you built instructions and procedures that allowed the organisation to scale and improve your end product.

    The only thing that has changed is that instructions and procedures are now oftentimes executed by software and hardware, not by actual human beings. Hence the software engineering wing, in addition to your usual, sorry for the lack of a better word, "meat programmers", aka organisational execs.

    Interestingly, the end result customers get has not changed, despite many people framing it that way. People still get the same cup of coffee or taxi ride, just quicker/cheaper/marginally better. But such incremental improvements were achievable in the business world before the IT era using the same exact means: internal product management and improvement of org procedures, applied to people and processes instead of pieces of software.

    • Yes, in principle nothing has changed since at least Fordian times - back then we had factory workers on one side and owners, managers and engineers on the other side, with the intermediate role perhaps being the foreman or something similar.

      I still think there is a difference in kind, not just degree: a human operational exec at least has to engage with the workers personally, witness the conditions they are working in, and be exposed to complaints. Even the most uncaring foreman is therefore forced into a position where he is subjected to accountability. He also has personal contact with the upper layer and can pass that accountability on to his higher-ups.

      In contrast, a software layer is physically unable to hear complaints and to pass them back up the chain. Because it's not a human, it cannot take accountability itself - however, it can still give higher-ups plausible deniability about "not having known" about problems. (A knock-on effect is also that it will prevent workers from even attempting to communicate the problem, because no one wants to talk to a wall)

      Therefore it creates an accountability sink where there was none in the old structure.

      (None in theory at least, of course there were enough other ways to be shielded from accountability even before computers)

    • Nothing has changed for rich people who didn’t see their employees as people anyway. When you are the one stuck with a computer as your boss then tell me nothing has changed. Good luck getting a reference for a better job!

  • That’s what @vgr observed some time ago - people split into “above AI” and “below AI”, and the AI slowly moves up in the stack.

That's a really thought-provoking article. My thinking is that it highlights the importance of government consumer-protection agencies and laws as the protection against this. For example, when you fly through Europe or use European airlines, there is an EU law that gives you compensation of ~600 EUR if your flight is delayed by more than 3 hours or cancelled. This is good insurance: no matter what BS is thrown at you at the airport by the company, you will get your compensation, and the process of getting the money is reasonably straightforward.

What that gives you is a way to bypass the airline's systems entirely and go straight to the compensation. I also hope it serves as actual motivation for airlines to perform reasonably well, because otherwise they'll pay too much in fines. I think we really need this kind of protection law in order to avoid the situation of a chatbot wall shielding companies from customers.

  • > And the process of getting the money is reasonably straightforward.

    Not always. The airlines often lie.

    • At least in my experience, I asked for compensation 3 times and got the money in every case (with different companies: American Airlines, United and Lufthansa). But I agree the system could be improved further.

I experience this pretty often with the newfangled, automated government e-filing systems.

As a screen-reader-using person who cannot use pen and paper without assistance, I was once quite enamored by them, but I've changed my stance a bit.

The thing about pen and paper is that it accepts anything you put in, and it's up to a human to validate whether what you put in makes any sense. Computers aren't like that: if they tell you that the numbers in your application have to match up, you need to lie to the government to make them match up, even if you're a weird edge case where the numbers should, in fact, be slightly off and "inconsistent" with each other.

I called the local govt office responsible for this specific program, and they essentially told me, in not so many words, to lie to the government. Their system is centrally managed and they have no power to introduce updates to it; they wish they could fix it, but even they aren't empowered to do so.
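The contrast described here (paper accepts anything and a human judges it later; software enforces a hard consistency check with no appeal path) can be sketched in a few lines of Python. Everything below is hypothetical: the field names and the rule are illustrative, not from any real e-filing system.

```python
# Hypothetical e-filing validation: a "hard" check that rejects
# outright, versus a "soft" check that accepts and flags for review.

def validate_hard(form):
    """Hard check: block any filing whose line items don't sum to the total."""
    errors = []
    if sum(form["line_items"]) != form["total"]:
        # The filer is stuck: the only way forward is to change the numbers.
        errors.append("Line items must add up to the total.")
    return errors

def validate_soft(form):
    """Soft check: always accept, but flag inconsistencies for a human."""
    warnings = []
    if sum(form["line_items"]) != form["total"]:
        warnings.append("Line items don't match the total; route to a clerk.")
    return True, warnings  # always accepted, like pen and paper

# A legitimate edge case: rounding rules make the reported total
# differ from the naive sum of the parts.
edge_case = {"line_items": [33, 33, 33], "total": 100}

print(validate_hard(edge_case))      # the hard check blocks this filing
print(validate_soft(edge_case)[0])   # the soft check lets a human decide
```

The soft variant reproduces what pen and paper does: the filing always goes through, and a human decides whether the inconsistency is an error or a legitimate edge case.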

When I was a grad student in STS I was considering doing a project on how software can function as an "agency adjuster" where individuals come to bear the risks of something (generally an economic transaction) and the majority of the profits go to the owner of the software. In many ways Uber & related services are about allowing individuals to take on very low-probability high-acuity downside risk for a small fee.

  • I think this sort of analysis is valid and fruitful in a very general sense. Software as a recently adopted vehicle in a long tradition of liability displacement / diffusion of responsibility / agency modification

    • Yah - it lies outside of the narrowly technical (though technical systems come up a lot), and part of what I would have talked about is: how much of this is a trick and how much is real? Like, is software doing sleight of hand, and really Uber (or whoever) should be taxed on an externality/risk? Or does this electronic machine of software genuinely create a new arrangement of responsibility? My unhelpful understanding is "it depends", and even in the Uber case it's a bit mixed, though on balance I think Uber is more of a scam than a truly new thing (even though there's some new there).

      1 reply →

  • Hmm, the analysis with respect to FOSS could also be interesting. It might make less sense to consider profit/compensation. Might be more useful to think of responsibility flows... (or sources as well as sinks)

    • Yah - I think there's value there too. I am a huge fan of FOSS ofc - and also it's good to look at how FOSS allows companies to avoid hiring developers because they can use FOSS products. The first benefit I think of is trying to come up with FOSS approaches that would convince or coerce companies to contribute back to the projects they use at least a little.

      1 reply →

>The comparisons to AI are obvious, in as much as delegating decisions to an algorithm is a convenient way to construct a sink.

There is a flag on my LinkedIn account that bars me from getting a "follow-me" link on my profile.

No one on their support team knows why. No one knows since when. No one knows when it will change.

We are already living in this world.

Organizations exist to remove moral culpability

Judge, Jury and Executioner Firing Squad Limited Liability Organisation

Humans like to sleep at night. An emergent property of our rule of law is that it exists in a way that reduces the moral culpability of any individual. A policeman, a jury member, a judge, an inspector, an executioner, a jailer: they all exist in very neat boxes. These boxes allow them to sleep at night. Surely the judge has few qualms going by the recommended mandatory minimum, after the jury, who is assured the judge will provide a fair sentence; and the executioner doubly so, who, facing double the potential moral hazard, is certain that at least two other parties have done their due diligence.

These systems prevent a single actor from acting alone. More precisely, they allow a series of hand-offs, so that by the time the jailer is slamming the doors shut, they are bereft of any investment in the morality of the outcome.

The firing squad: seven guns, all lined up, with just one loaded; the rest are blanks. Each man can sleep at night, regardless of whether the condemned man was truly deserving of death.

Large institutions, organizations, and objects at scale are fully inhumane.

I would rather have my jailer be my judge and my executioner be each man or woman on the jury. Isolating each of these things allows the individuals an almost powerless notion of 'completing our task', as if all tasks completed would add up to a moral outcome.

Should juries be formed to perform the whipping of an individual or the institutionalization of someone in their own home, should the judge be forced to starve a prisoner in his cell, I find the outcomes would be different.

  • The right to a trial by jury was specifically meant to prevent personal bias from the judge affecting the outcome of the trial.

    Letting juries perform executions and judges be responsible for the imprisonment of the guilty just creates a massive perverse incentive for sadistic individuals.

    You wrongly assume that either judge or jury would be more empathetic if they were burdened with the weight of these things. Instead, you'll get the people you least want in charge of these things doing them.

    Aside from the method of using blanks at executions, everything else about the system protects the convicted, not the people in the system itself.

    • not just the moral piece, but the passive nature. Once I do this, then it goes to the next person. I don't need to wait for the taxpayer to come by and demand ransom for my work to send money off to the capital so they can fund wars in far away lands, it just comes right out of my paycheque

  • I agree with the description, but not with the judgment of its morality or the pretense of having a better system; there's no shred of humility in that.

    Of course you can run a political party suggesting that this division of powers thing was a step in the wrong direction, or better yet, take a trip to any of the dozens of countries with superdictators.

  • “The blood of the First Men still flows in the veins of the Starks, and we hold to the belief that the man who passes the sentence should swing the sword. If you would take a man's life, you owe it to him to look into his eyes and hear his final words. And if you cannot bear to do that, then perhaps the man does not deserve to die.”

  • Our meat is bought from the butcher, delivered to the chef so it comes not as an animal, but part of a tasty dish.

    If we eat meat, we should kill it ourselves.

    • I get a lot of shit for saying this, but I agree completely!

      I don’t think there’s anything inherently wrong with eating animals. But I have a particularly carnivorous friend who thinks hunting is for sociopaths, because he “loves animals”.

      If I wouldn’t harvest it, I won’t eat it. And I definitely would be too timid to slaughter a freaking cow lol

      4 replies →

  • > Each man can sleep at night, regardless if the murdered man was surely deserving of death

    Surely this line of reasoning requires the presence of omniscient judgement (i.e. the Abrahamic god) to make sense. Otherwise all gunmen would (and should) assume practical responsibility.

I was thinking about something similar today. Sometimes accountability can be a blocker, for example for hiring.

If you have 1 candidate, it's an easy call; if you have 3 candidates, you evaluate in less than a week. If you have 200 candidates, you need to hire somebody to sift through the resumes, have like 5 rounds of interviews with everybody chiming in, and whoever pulls the trigger or recommends someone is now on the hook for their performance.

You can't evaluate all the information and make an informed decision; the optimal strategy is to roll a 100-sided die, but no one is going to be on the hook for that.
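For what it's worth, the "roll a 100-sided die" strategy is trivial to implement; the hard part is that nobody will put their name on it. A hypothetical sketch (the candidate names are invented):

```python
import random

# Hypothetical pool: 200 resumes that nobody can meaningfully rank.
candidates = [f"candidate_{i:03d}" for i in range(200)]

random.seed(42)                     # seeded only so the example is reproducible
hired = random.choice(candidates)   # the "die roll": each candidate has odds 1/200

print(hired)
```

The pick is unbiased and costs nothing to run; the missing ingredient is someone willing to be accountable for it.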

  • > If you have 200 candidates, you need to hire somebody to sift through the resumes, have like 5 rounds on interview and everybody chiming in, whoever pulls the trigger or recommends someone is now on the hook for their performance.

    That's not how accountability works, in the traditional sense.

    What you described is Person A (accountable for hiring) hiring person B (responsible for screening and evaluating candidates). Person A is still accountable for the results of Person B. If Person B hired a sh*t candidate, it still lands on Person A for not setting up an adequate hiring system.

    Being accountable for something doesn't forbid you from delegating to other people. It is very common for 1 person to be accountable for multiple people's work.

    • heh never works that way. an experienced bureaucrat like you describe always has a shit-deflecting canopy. so whatever decisions he personally took are never attributable to him personally.

      it just so happened.

      4 replies →

  • You can still be on the hook for rolling a 100-sided die. And in some cases that's effectively all you can do. At the end of the day it's a trolley problem (the real one: not deciding between two bad things, but looking at how people typically define responsibility)

    One way or another you gotta own the decisions you make and deal with it. Even if the decision is to let someone else make the decision.

    The issue is that, yes, absolving yourself of accountability does free you to scale in ways previously thought unimaginable, but it doesn't mean you absolve yourself of responsibility. The cure is keeping accountability rather than chasing scale, which means a much smaller scale to everything we have been doing.

    Another way to think about it: if you said you would give me 1 million dollars but I had to fully own up to what 1000 random people do in the next 24 hours, I'd say that's a pretty raw deal. There's basically no chance that a million will cover the chaos that a few of those 1000 people could cause. What some people do is take the million and then figure out how to rid themselves of the responsibility.

    • > You can still be on the hook for rolling a 100 sided die. And in some cases that's effectively all you can do.

      Sure. And the article allows for that. You need to have "an account" that acknowledges that at the time you didn't and couldn't have enough information to completely de-risk the decision, but that you'd discussed and agreed that the 1/100 (or 1/5 or 1/10,000) risk of the bad outcome was a known and acceptable risk.

      "where an account is something that you tell. How did something happen, what were the conditions that led to it happening, what made the decision seem like a good one at the time? Who were all of the people involved in the decision or event?"

Too focused on the bottom level. If a given business process results in employee A doing their job correctly according to the process, passing work to employee B doing their job correctly according to the process, passing work to employee C doing their job correctly according to the process, and the end result is shit, then the person who is accountable for the end result being shit is the manager who is responsible for the process itself. As more and more employees are involved, and the processes get more and more hierarchical (rather than "employee A", you have "middle-manager M"), then the person with accountability is higher and higher up the hierarchy, who also has more and more power and responsibility to fix it.

The idea of "unaccountable" failures only makes sense if all of the following hold: (a) the problem is so systemic that actually an executive is accountable, (b) the executive is so far removed in the hierarchy from the line employees doing the work that nobody knows each other or sometimes even sits on the same campus, and (c) the levers available to the executive to fix the problem are insufficient for fixing it, e.g. the underlying root cause is a culture problem, but culture is determined by who you hire, fire, and promote, while hiring and firing are handled by "outside" HR who are unaccountable to the executive who is supposedly accountable. But really this is another way of saying that accountability is simply another level higher, i.e. it is the CEO who is accountable, since both the executive and HR are accountable to the CEO.

No, you have to have an astoundingly large organization (like government) to really have unaccountability sinks. Congress passes laws with explicit intent for some desired outcome, but after passing through 14 committees and working groups the real-language policy has been distorted to produce the exact opposite effect, like a great big game of telephone, one defined by everyone trying to de-risk, because the only genuine shared culture across large organizations is de-risking. And it is simply not possible to put in place both the policy and the real-life changes to hiring, firing, and promotion practices in the public sector needed to start taking more risks, because at the end of the day even the politicians in Congress are trying to de-risk. Civil servants burning taxpayer money on riskier schemes is not politically popular, though maybe it should be, considering the costs of a de-risked culture.

  • > Too focused on the bottom level. If a given business process results in employee A doing their job correctly according to the process... then the person who is accountable for the end result being shit is the manager who is responsible for the process itself.

    The book's point is that while this _should_ be the case, all too often it's not. AFAIK, nobody has been charged with forging documents in the Wells Fargo cross-selling case. Not the counter clerks who directly responded to incentives and management pressure, nor the executives who built that system.

    • > nor the executives who built that system

      This is exactly why being an executive of a large organization is so incredibly difficult to pull off well. Sure, you can let your assistant fill your calendar with a bunch of meetings you don't want to be in to spend 95% of the meeting listening, 4% being the arbiter who tells people what they already knew they needed to do but refused to do it until asked by someone in authority, and 1% saying you'll take it further up the ladder. You will also fail hard because you will be constantly blindsided by people either fucking up (at best) or gaming (at worst) the processes for which you are responsible. Small example litmus test: in organizations that use Jira, whether the executives are comfortable with JQL and building their own dashboards to tell them what they need to know, or whether they expect their direct reports to present their work. If it's the latter, how can an executive be surprised that their reports are always coming in with sunny faces and graphs going up and to the right?

      That too many companies are not willing to hold executives accountable for processes that they are, in theory, supposed to be accountable for is an entirely different problem. The law proscribes, the officer arrests, and the judge presides, but all rests upon the jury to convict. If a company's "jury" is not willing to "convict", because the crime is one of negligence and not treason, then the company has larger problems and I'd like to short their stock, please.

    • Also, usually these situations involve intentionally (or not, depending on how charitable one is being) passing back and forth between different divisions/groups.

      So the only one with consistent power over all groups is an executive so high up the food chain (in some cases not even the CEO!) that they can plausibly claim ignorance.

  • I think "accountability" here was the wrong word to begin with. I believe they are more talking about "ability for feedback" or even better "just in time corrections". Feedback exists, but from my experience nobody reads those form submissions - maybe an AI these days that will create a summary... The latter is purposefully removed from all processes :(

This article seems to redefine the word "accountability". In the first sentence:

> In The Unaccountability Machine, Dan Davies argues that organizations form “accountability sinks,” structures that absorb or obscure the consequences of a decision such that no one can be held directly accountable for it.

Why not just call it "no-consequence sinks"?

It's somewhat of an oxymoron to say "accountability" isn't working because there's no consequence. Without any consequence there is no accountability. So why call it accountability in the first place?

This article is describing something along the lines of "shared accountability" which, in project management, is a well known phenomenon: if multiple people are accountable for something, then no one is accountable.

If someone is accountable for something that they can't do fully themselves, they are still accountable for setting up systems (maybe even people to help) to scale their ability to remain accountable for the thing.

  • I think it’s that the accountability falls into the sink and doesn’t reach the decision maker. I still find accountability poorly defined, even after the effort. Clicking through to the definition helps.

    It’s all kinda mushy. Being accountable is hearing and knowing a story. I don’t see why that has to correlate with decision power.

    The point of the article could be made much more clearly by talking about systems that leave decision makers not aware of the consequences of their decisions. All the anecdotes in the article fit that pattern.

    I think people don’t use the language of decision-consequences because it doesn’t capture an emotional aspect they’d rather not say out loud. They want the decision maker to feel their pain, they want the decision maker to hurt.

    Decision makers can be aware of how many unready rooms are caused by fewer cleaning staff, how many flights they're cancelling. I'd actually bet they are. But that's not enough; the harmed person wants to tell their story.

    • In the article, there are human agents involved at all times; sometimes people create accountability sinks even without humans.

      You're a neolithic farmer, and plant your barley, but that year there's a drought; you suffer the consequences, but who (or what) do you hold accountable?

  • Sounds like you perfectly understood the article. I don't get what you're complaining about. You agree but don't like the language?

  • The author is describing a specific phenomenon, different from shared accountability.

    • I disagree with the author's definition of accountability:

      > The fundamental law of accountability: the extent to which you are able to change a decision is precisely the extent to which you can be accountable for it, and vice versa.

      No.

      You can absolutely be accountable for something that you can’t change a decision about. Simple example: You’re a branding agency and you decide to rename X to Y. (No pun intended). The rebrand to Y fails. You’re accountable for the failure, but likely don’t have the ability to change anything by the time you know the results of your decision.

      Edit: ok, fair, I agree. Bad example. A simpler example would be the person in the article continuing to point to the boss above them until there's no one left. The chain would break somewhere along the way, but the broken chain is one of communication rather than one of accountability.

      The information may not reach the person able to make a change. But that doesn't make them not accountable. If that person is unable to make a change because they're on vacation for a month without anyone filling in, that person is accountable both for the results AND for future results caused by not having someone monitor/reroute their accountability.

      5 replies →

Douglas Adams was here in 1982 with the invention of the SEP field

‘An SEP is something we can't see, or don't see, or our brain doesn't let us see, because we think that it's somebody else's problem. That’s what SEP means. Somebody Else’s Problem. The brain just edits it out, it's like a blind spot.

The narration then explains:

The Somebody Else's Problem field... relies on people's natural predisposition not to see anything they don't want to, weren't expecting, or can't explain. If Effrafax had painted the mountain pink and erected a cheap and simple Somebody Else’s Problem field on it, then people would have walked past the mountain, round it, even over it, and simply never have noticed that the thing was there.’

https://en.wikipedia.org/wiki/Somebody_else's_problem

I rather think accountability has improved a lot, esp. with the decline of bureaucratic walls.

Accountability always flowed down. Back in aristocracy you were never allowed to ask for support. Only in modern civilisation has this improved. Middle management, the clueless in the Gervais principle, need their walls.

Don't be fooled by the decline of customer support in big orgs, like Google, Apple, or Amazon. They believe that support cannot scale, or if it's really needed, it needs to be outsourced to India or East Asia.

  • > They believe that support cannot scale, or if it's really needed, it needs to be outsourced to India or East Asia.

    I disagree. They believe that support shouldn't scale with the size of the business, and should instead provide economies of scale.

    • Agree.

      It can scale - I worked in two huge companies dominating the world markets, and we did fine with global support - but they say this is not their business model. Well, without competition they can do what they want, but customers prefer support and accountability. That's why most countries eventually came up with anti-trust legislation.

I feel like the article, or perhaps just the example, is missing the point.

>> a higher up at a hospitality company decides to reduce the size of its cleaning staff, because it improves the numbers on a balance sheet somewhere. Later, you are trying to check into a room, but it’s not ready and the clerk can’t tell you when it will be; they can offer a voucher, but what you need is a room.

This reads from the perspective of a person checking in. But it should read from the perspective of the person who made the decision.

The decision was made like this: On most days we have too many cleaners. If we reduce the cleaners, we reduce expenses by x.

On some days some customers will need to wait to check-in. Let's move checkin time from 1pm to 2pm (now in some cases to 4pm) to compensate. n% of customers arrive after 4pm anyway. We start cleaning early, so chances are we can accommodate early checkin where necessary.

Where there's no room available before 4pm, some % will complain. Most of those will be placated with a voucher [1], which costs us nothing.

Some small fraction will declare "they'll never use us again". Some will anyway (for reasons), but we'll lose a few.

But the savings outweigh the lost business. Put some of the savings into marketing and sales will go up. Costs remain lower. More profit.

There is perfect accountability of this plan - the board watches to see if profits go up. They don't care about an individual guest with individual problems. The goal of the business is not to "make everyone happy". It's to "make enough people happy" to keep profits.

[1] the existence of the voucher proves this possibility was accounted for.

So accountability in this case is working - except for the customer who didn't get what they want. The customer feels frustrated, so from their perspective there's a failure. But there are other perspectives in play. And they are working as designed.
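The plan described above is just expected-value arithmetic. A minimal sketch, with every figure invented for illustration (none of these numbers come from the article or this thread):

```python
# Hypothetical back-of-the-envelope version of the hotel's decision.
daily_cleaning_savings = 500.0   # payroll saved by cutting cleaning staff
guests_per_day = 100
delay_rate = 0.05                # fraction of guests whose room isn't ready
voucher_cost = 2.0               # marginal cost of a placating voucher
churn_rate = 0.10                # fraction of delayed guests lost for good
lost_value_per_churn = 300.0     # future revenue lost per churned guest

delayed_guests = guests_per_day * delay_rate                     # 5.0 per day
voucher_spend = delayed_guests * voucher_cost                    # 10.0 per day
churn_loss = delayed_guests * churn_rate * lost_value_per_churn  # 150.0 per day

net_daily_gain = daily_cleaning_savings - voucher_spend - churn_loss
print(net_daily_gain)  # 340.0: positive, so the spreadsheet calls the plan a success
```

As long as `net_daily_gain` stays positive, the feedback loop the board watches (profit) reports success; the frustrated guest at the front desk never appears as a term in the formula.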

  • The economic calculation is often an accountability sink too. We can say that the economy has spoken, profit was made, case closed.

    But we can also look for accountability in the political system. Maybe the hotel should be obliged by the law to pay real money instead of a voucher?

  • Came here to say this.

    And even in the case where the company's decision is arguably just "bad," it still might not be a problem from the company's point of view.

    Companies (including start-ups) create buggy products all the time and don't care, and aren't very responsive to requests for support, as long as money is coming in. I don't think they are using special accountability-flushing techniques. It takes real work, intention, experience, and power in a company to create feedback channels, and use them, and ensure that the customer has an experience of quality. It doesn't happen by magic or by default.

In my country they enacted this system for student management that is national.

It handles signups, meal and housing services, grades, everything.

One example is that the grades are entered by professors and mistakes happen all the time, for everyone, due to the insane server load.

There's no one to complain to, because the excuse is always "it's the system, not us"

Tom Schelling's 'The Strategy of Conflict' touches on similar themes, but mostly in a more positive light.

One of his examples is that you should make yourself unavailable for contact, when you suspect someone is trying to blackmail you.

That's exactly the same severing of a link as described in the article.

  • > you should make yourself unavailable for contact, when you suspect someone is trying to blackmail you.

    Maybe I'm missing something, but how often does blackmail happen that it rises to the level of needing strategic advice like "make yourself unavailable" ?

    Who is Tom Schelling's audience?

    • > Who is Tom Schelling's audience?

      Politicians setting policies for use of nuclear weapons during the cold war, IIRC. Among others, at least.

      I read parts of that book many years ago, I recall the major theme is that voluntarily sacrificing control over the situation can be a powerful way to force the other party to do what you want. Like if you and me are playing "chicken", speeding towards each other and wanting the other to turn away first, you ripping out your steering wheel and throwing it out for me to see is a guaranteed way to force me to turn first and lose. This kind of stuff.

      I guess it ties into the larger topic here in that you can avoid being held accountable if you remove the ability to make any choices yourself.

      2 replies →

    • > Who is Tom Schelling's audience?

      Parents, of course.

      You might think I'm joking, but dealing with toddlers throwing tantrums is a prime example in some of his books.

Interesting. Wonder sometimes how much of the consulting business is motivated by accountability avoidance - "accountability sinks" for hire

  • Consultant here: A lot of it.

    Ideally, a consultant is hired for their specialist skills, rare experience, sage advice on niche topics, etc...

    In practice, about half the work I do is to act as a lightning rod so that the guy with the power to sign the cheque for my time doesn't get fired if things go sideways. Instead, they can just blame me, shrug their shoulders, and hire another consultant.

    I've had a customer where I got "fired" for an "error". My coworker replaced me. Then he was fired, and I replaced him. We alternated like this for years. Upper management just saw the "bad" consultants get fired for their incompetence, they never noticed that we were the same two guys over and over.

  • I don’t know how much it works with consulting because someone has to approve paying them. If they do a bad job you can blame whoever brought them in. It does make it easier to do something you were going to do anyways though.

>> In The Unaccountability Machine, Dan Davies argues that organizations form “accountability sinks,” structures that absorb or obscure the consequences of a decision such that no one can be held directly accountable for it.

Government and civil servants are the biggest example. I guess it's time to re-watch "Yes Minister".

  • That is a problem of organisation size, not so much whether the organisation is private or public.

    • I don't believe the largest company on planet Earth by market cap has the problem as badly as government, even in a smallish country.

      4 replies →

Taleb's Skin in the Game seems related to this, but from a different angle. Goodhart's Law is also mentioned, but is not the core argument. In the end, it's about agency, who has it, and the system dynamics for getting rid of responsibility.

  • True.

    The modern company is a very very limited liability company:

    - Cut corners so your jets crash and kill people (Boeing)?

    - Cheat on emissions testing so your product kills people (VW)?

    - Hush up drug trial results so you kill people (Pfizer)?

    - Sloppy security leads to hundreds of millions of people's personal data being leaked (too many to mention)?

    What happens to those in charge? Nothing. Perhaps they leave with a big golden handshake. If it's really bad, they get a don't do it again agreement with the Feds.

    No accountability means no feedback/skin in the game. So nothing gets better.

There was a leaked memo essentially instructing to form a committee when you make an illegal decision so that one person cannot be sent to jail. Does anyone remember this? I've had a hard time finding it

You can't have accountability in a world where you can be a good family member and work for a company that manufactures bombs that kill families. This doesn't compute in humans. They can't deal with that. If you can't handle accountability for that ... who cares about you getting into your hotel room a little later, or missing a flight, or not getting health insurance ... no biggie.

what you can have is a discussion about this or a blog post that is read by people and maybe some new subscribers, so no worries - all is not lost. :)

  • That suddenly reminded me of a Redditor talking about how some acquaintance of theirs works for a US military technology company, but is only involved in non-weapons research that has civilian applications.

    When pressed, the Redditor said that their friend was working on the mathematical theory for computers to control planes that have no power, such as under emergency landing conditions. I.e.: if the engine dies, the autopilot can help steer the plane onto a runway.

    "No, your friend is working on precision glide bombs. The emergency landing thing is just marketing to make it palatable. She might not even know, but that's definitely what she's doing."

    That stuck with me: someone could be working on bomb technology and not even know it.

    Talk about an accountability sink!

Accountability sinks sound a lot like the inverse of the Toyota factory story, where, on the contrary, every employee in the factory could pull the 'stop' lever if they thought there was a quality problem. Which of course drastically increased quality and feedback, because the process is interrupted and stops.

But I don't think it is quite so black and white in the world. Because the legal system is also a way to give feedback to companies. And it can stop them in their tracks.

I liked this article a lot - it made me think about the ways large companies operate from a different viewpoint.

At the same time, though, I think it's a mistake to leave out the fact that, in many ways, modern society is just so fundamentally complex that we (as a society at large) deliberately forego demanding accountability because we believe the system is so complex that it's impossible to assign blame to a single person.

For example, given this is HN and many of us are software developers, how many times have we collectively supported "blameless cultures" when it comes to identifying and fixing software defects. We do this because we believe that software is so complex, and "to err is human", that it would be a disservice to assign blame to an individual - we say instead that the process should assume mistakes are inevitable, and then improve the process to find those mistakes earlier in the software lifecycle.

But while I believe a "blameless culture" is valuable, I think a lot of times you can identify who was at fault. I mean, somebody at CrowdStrike decided to push a data update, without verifying it first, that bluescreened a good portion of the world's Windows machines and caused billions in damages.

I just think that if you believe "accountability sinks" are always a bad thing, don't forget the flip side: would things always be better if we could always assign "root cause blame" to a specific individual?

  • I think you are conflating accountability and blame when I don't think those terms can be used interchangeably here. Accountability can be used as a way of assigning blame, but that isn't all that it is good for.

    Accountability, at least as presented here, is about feedback between those affected by a decision and those making it. In a "blameless culture", people are still held to account for their decisions and actions but are not blamed for their results.

    I would argue that a blameless culture actually makes accountability sinks less likely to develop. In blameful cultures, avoiding accountability avoids blame, but that is not needed in a blameless culture.

    • Thanks, I found your response really helpful, and it helped identify some of the mistakes in my thinking. "In blameful cultures, avoiding accountability avoids blame, but that is not needed in a blameless culture." - that really made a lot of sense to me.

      1 reply →

    • A blameless postmortem culture says that when a human error is identified in the causal chain leading to an incident, there will be no consequences for the individual. In a sense it embraces blame but eschews accountability.

      1 reply →

Systems Theory would describe this as "intrinsic responsibility".

From Donella Meadows: “Intrinsic responsibility” means that the system is designed to send feedback about the consequences of decision-making directly and quickly and compellingly to the decision-makers.

  • Super interesting concept, because these complaints tend to end up at the bottom of top management's backlog due to the amount of time and attention required to analyse them. In real life it just does not work unless your org is in the low hundreds of customers.

    This could change if technology could solve aggregation and analysis problem, making ready-made decision propositions to management. High risk of this mechanism just becoming another accountability sink, though.

    Another solution is to build large organisations out of federated micro-orgs, where such intrinsic responsibility is feasible.

In The 400 Blows, Truffaut plays a schoolteacher whose disciplinary methods in the classroom only accentuate the rebelliousness of a boy. At home, this trickles down and he decides to run away and drink milk. At the end, we find him on a beach and the film ends.

Enforcing copyright law through an honest projection of 35mm film footage is a philanthropic endeavour. Making sure that every member of the production team, even the gaffers and stage hands, takes part in the exclusivity of re-capitalisation efforts, like the Fox complaint, is pure legalistic sleight of hand.

I think this is related to the concept of “aligned incentives” in a way.

The chain breaks when incentives aren't aligned and a cascade of crap rolls downhill in what seem like bad decisions. When, in fact, as the article points out, the decisions made didn't take knock-on effects into consideration.

I’ve learned that seemingly poor or even terrible decisions almost always make sense in the context of when and where the decision was made.

What the article describes seems like a parallel concept (and an important one.) I wouldn't call them Accountability Sinks, though, as they seem more like Accountability Avoiders. Here are things we might think of as sinks in the real world:

- "Sin Eaters"

- Corporations, especially companies that are spun off and take on all the debt of the original company

- Voluntary stool pigeons (in criminal organizations, etc.)

- Certain religious martyrs

This is a small part of business reality. It's not clear how calling out a specific aspect really helps; indeed, it may hurt when business leaders learn how to do this more effectively. It's not at all clear that calling people irresponsible or unaccountable is actually effective at changing the transaction features; it likely makes things worse.

Transaction cost economics since the 1960s has been enumerating aspects like these, and showing that they in fact determine the shape of business organizations and markets. Exported costs (implied in "accountability sink") are mostly the rule rather than the exception.

What to do with them? A primary TCE finding is that if there were no transaction costs to adjudicating liability, it wouldn't matter from the social cost perspective where the liability lay (with the perpetrator or the victim) because they would adjudicate it down to their mitigation costs. As a result, the main policy goal for assigning liability (if you want to minimize the total cost to society) is actually to minimize and correct for adjudication transaction costs. (hence, no-fault divorce and car insurance)

The same dynamics are at play in the market and within organizations.

As a participant if your goal is your own profit, you can gain by making it harder to adjudicate and reducing the benefits thereof (hence binding arbitration, waivers, and lack of effective feedback). Doing so is becoming much simpler as virtual transaction interfaces and remote (even foreign) support afforded by software replace face-to-face interactions bound by social convention.

And who wouldn't want to? If you're head of customer support or developer relations, would you document your bugs or face the wrath of customers for things which can't change fast enough? You'd want to protect yourself and your staff from all the negativity. Indeed, with fixed salaries, your only way of improving your lot is to make your job easier.

To me the solution is to identify when incorporating the feedback actually benefits the participants. There, too, the scalability of virtualized software interfaces can help, e.g., the phone tree that automates simple stuff most people need and vectors complex questions to real people who aren't so harried, or the departing-customer survey querying whether it was price, quality, or service that drove one away.

You have to make accountability profitable.

I've observed a phenomenon in corporate accountability resembling quantum behavior:

1. Macro level: Departments claim broad accountability.

2. Micro level: Pinpointing task ownership causes accountability to vanish.

3. Indefinite states: Ambiguous tasks linger without resolution.

4. Entanglement: Dependent tasks inherit this ambiguity.

This creates a system where responsibility exists in superposition, tasks remain unresolved, and accountability becomes increasingly delocalized.

When stock you've held for years rockets to the moon, your major banking institution with 100 billion dollars in annual revenue experiences "technical difficulties" at the login page. You can't sell your meagre crumbs until the spaceship has completed its orbit, and you're left with no doubt about who the system doesn't work for.

This hit home with some very recent experiences. It was even in my little notebook of ideas to write about... seems it's already been done. Nice job... and you can almost paint all corps. with this brush, esp. decisions made by upper management that's incentivized on financial metrics that can be gamed....

It's a reasonable thing to do. An eye for an eye leaves the whole world blind. If you could demand responsibility for any grievance you have from anyone, they could demand responsibility from you for the distress your grievance caused them.

My theory is that all developed societies converge on having no accountability in the governing positions. Of course, I may be, and most likely am, wrong, but if you look at, say, politics, you must at least consider this being a real possibility.

There has been a lot of talk of silos in the company I work for and the need to break them down. This looks like it could be a big part of why they have been so hard to tackle.

Ahem, at least in the Western world, corporations were invented by the Church, an entity well known to be not accountable... look at the 94 apologies and the Pope's words before and after each apology....

By the way this is what bitcoin was set up to solve...notice it not being solved.

Yeah, but when you post the linkedin profile of the person doing the thing, people call you a scumbag. It's a hard social norm to move the dial on.

I find that the word "accountability" almost always obscures what's being talked about. If we remove it, we can instead talk about understanding and feedback:

As organizations become more complex, it's difficult to understand the consequences of many high-level decisions. Unless great effort is made to gather feedback, it won't happen.

Not only that: the lack of immediate, human communication results in one-way feedback mechanisms, like suggestion boxes and surveys. Many companies clearly want to make this work, because we're constantly prompted and sometimes paid to fill out surveys. But the result is survey fatigue.

The person giving feedback needs to be reassured (by people, not machines) that their feedback matters, or they won't be bothered to do it. Often, it's socially awkward to give negative feedback, so people don't. And often, the employees directly on the scene have incentive to encourage customers to avoid negativity when they fill out surveys.

One way to show that feedback matters is to respond to complaints with some sort of assistance. In the example in the article, that's a voucher. Perhaps somewhere in the organization, that voucher counts as a cost, but it's pretty unsatisfying.

In some organizations, managers are encouraged to work at the support desk occasionally as a more immediate way to understand what's going on. (I remember reading about how Craig Newmark would do this for his website.)

  • You're really good at bullshitting and saying nothing at the same time. Feedback and communication are pointless without accountability. I'm sure this sounded smart when you were typing it out, but man, this is dumb. Like, this is literally what lack of accountability sounds like.