
Comment by kragen

2 days ago

Basically this happens because the DVLA and the stock market don't have any competition. Customers in a competitive market won't be angry when you need to be offline for 12 hours every Sunday morning; they'll just switch to your competitor some Sunday, because the competitor is providing them something they value that you don't provide.

The stock markets definitely have competition. For instance, Frankfurt, London, Paris, and Amsterdam very much compete with each other to offer desirable conditions for investors, and companies will move their trading from one to another if it is in their interest. I think the fact that they close at night is a self-preservation mechanism: traders would go insane if they had to worry about their positions 24/7.

  • There's a very strong network effect, and most stocks are only listed on a single stock exchange, so in most contexts the competition is very minimal.

Maybe they should regulate Sunday trading hours, or unionized sysadmins should negotiate an end to on-call hours.

The Red Queen's race that you describe for ever-greater scale and ever-greater availability is an example of the tragedy of the commons. Think how much money and how many human minds have been wasted trying to squeeze out that last 0.0001% in pursuit of "zero downtime" when they could have been creating something new.

"Keep doing the same thing, but more of it, harder" is a recipe for a barren world of monoculture.

  • Bergen County, NJ has blue laws that require non-grocery stores to be closed on Sundays. Maybe there’s some value in structuring a time when everybody is off?

    It’s the same at my work: the only time I really get off is when all of my customers are off. It’s nice when the industry sorta shuts off for a week or so around Christmas.

  • Something like that might plausibly be correct, though you've exaggerated it to a level where it's clearly false.

    If we steelman it to its most defensible essence, I think what you're saying is that the cost of the human effort needed to provide these higher uptimes exceeds the consumer benefit (say, the value of being able to buy a camera on a Saturday). You could imagine, for example, that each incremental improvement in uptime wins over a proportion of the customer base, providing a value that vastly exceeds its cost — but only until your competitors improve their own offering to match, so all the surplus from all this uptime improvement ultimately goes to the consumers, not the producers.

    There are two related holes in this idea.

    The first is that producing consumer surplus is what the economy is for, in a moral sense. The reason producing goods and services is a good thing to do is so that someone will benefit from using them! So if all the effort that sysadmins make goes into making services better for users, that's a good thing, not a bad thing.

    The second is that nothing is stopping a new entrant from offering a new, low-cost service that isn't as reliable. If the cost of providing all that extra reliability (bundled into the incumbents' pricing scheme) is higher than the actual benefit to users, the users will switch to the lower-cost, less-reliable service. This has happened many times, in fact: less-reliable minicomputers stole business from mainframes, less-reliable VoIP stole business from ATM and SONET and SDH, all kinds of less-reliable plastic goods have stolen business from all-metal versions, and now solar panels are stealing business from coal power plants even though solar panel "uptime" is like 30%.

    So the particular market dynamics we're talking about actually sensitively optimize the amount of effort given to uptime to the economic optimum. There do exist lots of market failures, but the particular dynamic we're discussing is the opposite extreme from something like a dollar auction.

  • Who is trying to achieve zero downtime? Facebook has degraded service regularly; it's just close enough to 99.9% that nobody cares.

    If loading my messages times out, I just move on to something else and come back a few minutes later.

    Surely they have metrics measuring that and don't think it's worth the engineering effort to improve it.

    • One of the interesting things that came out of Google's "SRE" system is that they deliberately add outages if they don't have enough. They learned years ago that if you build a service that promises 99% uptime and deliver 99.99% uptime, other people in the company will come to depend on that 99.99% uptime unintentionally. So they chaos-monkey it to ensure that the inevitable failures aren't catastrophic.
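The error-budget idea described above can be sketched as a request wrapper that deliberately fails a small fraction of calls so the observed availability never exceeds the promised SLO. This is a minimal illustrative sketch, not Google's actual tooling; the names (`with_error_budget`, `SyntheticOutageError`) and the simple per-request coin flip are assumptions for illustration:

```python
import random

PROMISED_AVAILABILITY = 0.99  # the SLO the service actually advertises


class SyntheticOutageError(Exception):
    """Raised for a deliberately injected failure (spending the error budget)."""


def with_error_budget(handler, promised=PROMISED_AVAILABILITY, rng=random.random):
    """Wrap a request handler so its observed success rate tracks the SLO.

    A fraction (1 - promised) of requests fails on purpose, so callers
    can never come to depend on unpromised extra reliability.
    """
    def wrapped(*args, **kwargs):
        if rng() > promised:  # e.g. ~1% of the time for a 99% SLO
            raise SyntheticOutageError("deliberate outage: spending error budget")
        return handler(*args, **kwargs)
    return wrapped


# Hypothetical usage: callers are forced to handle failure from day one.
lookup = with_error_budget(lambda key: {"a": 1}.get(key))
```

A real system would track the budget over a rolling window rather than flipping a coin per request, but the effect on dependents is the same: failures arrive routinely, so the inevitable ones aren't catastrophic.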