Comment by vrighter

3 days ago

Don't forget the law of large numbers. A 5% performance hit on one system is one thing; 5% across almost all of the current computing landscape is still a pretty huge amount.
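
A rough back-of-the-envelope version of that aggregation (the global spend figure below is a purely illustrative placeholder, not a sourced estimate):

    # Illustrative only: how a small per-system overhead scales with the installed base.
    # The spend figure is a hypothetical placeholder, not a sourced estimate.
    global_compute_spend_usd = 1.0e12  # assume ~$1T/year spent running software (hypothetical)
    overhead_fraction = 0.05           # the 5% performance hit under discussion

    wasted_per_year = global_compute_spend_usd * overhead_fraction
    print(f"~${wasted_per_year / 1e9:.0f}B per year at a 5% overhead")  # -> ~$50B per year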

It's about 5%.

Cost of cyberattacks globally[1]: O($trillions)

Cost of average data breach[2][3]: ~$4 million

Cost of lost developer productivity: unknown

We're really bad at measuring the secondary effects of our short-sightedness.

[1] https://iotsecurityfoundation.org/time-to-fix-our-digital-fo...

[2] https://www.internetsociety.org/resources/doc/2023/how-to-ta...

[3] https://www.ibm.com/reports/data-breach

  • > Cost of cyberattacks globally[1]: O($trillions)

    That's a fairly worthless metric. What you want is "Cost of cyberattacks / Revenue from attacked systems" (a rough version of that ratio is sketched at the end of this comment).

    > We're really bad at measuring the secondary effects of our short-sightedness.

    We're really good at it. There's an entire industry that makes this its core competency... insurance. Which is great because it means you can rationalize risk. Which is also scary because it means you can rationalize risk.
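
    A minimal sketch of that normalized metric, with purely hypothetical figures for both the numerator and the denominator:

        # Hypothetical numbers only; the point is the ratio, not the absolute values.
        attack_cost_usd = 3.0e12        # "O($trillions)" of losses, taken as ~$3T (illustrative)
        attacked_revenue_usd = 90.0e12  # revenue generated by the attacked systems (assumed)

        loss_rate = attack_cost_usd / attacked_revenue_usd
        print(f"losses are ~{loss_rate:.1%} of the revenue those systems generate")  # -> ~3.3%

    With assumptions like these, the headline trillions turn into a single-digit loss rate that can be weighed directly against the 5% performance cost.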

But that 5% isn't free for the taking. The point is that we get more than that 5%'s worth in exchange for giving it up. So sure, we'd capture significant value "if software optimization was truly a priority", but we get even more value by making other things a priority.

Saying "if we did X we'd get a lot in return" is similar to the fallacy of inverting logical implication. The question isn't, will doing something have significant value, but rather, to get the most value, what is the thing we should do? The answer may well be not to make optimisation a priority even if optimisation has a lot of value.

  • Depends on whether we'll ever accept the fact that software can be finished. If you're constantly redeveloping the same thing to "optimize and streamline my experience" (please don't), then yes, the advantage is dubious. But if not, then the saved value in operating costs keeps increasing as time goes on. It won't make much difference in my homelab, but at datacenter scale it does.

    • Even the fact that the value keeps increasing doesn't mean it's a good idea. It's a good idea only if it keeps increasing more than other value does. If a piece of software is more robust against attacks, then that value also keeps increasing over time, possibly by more than the cost in hardware. If a piece of software is easier to add features to, then that value also keeps increasing over time.

      If what we're asking is whether value => X, i.e. whether to get the most value we should do X, you cannot answer it in the affirmative by proving X => value. If optimising something is worth a gazillion dollars, you still should not do it if doing something else is worth two gazillion dollars.
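
      A tiny sketch of that decision rule, with entirely made-up option names and values: every option "has value" (each X => value), but only comparing them answers "what should we do?"

          # All values are invented for illustration; the point is the comparison, not the numbers.
          options = {
              "optimise for speed": 1.0,         # "a gazillion"
              "harden against attacks": 2.0,     # "two gazillion"
              "make features cheaper to add": 1.5,
          }

          # Every option has positive value, so X => value holds for each of them...
          assert all(v > 0 for v in options.values())

          # ...but "which gets the most value?" is answered by comparing, not by any single proof.
          best = max(options, key=options.get)
          print(best)  # -> "harden against attacks"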