Comment by shoo
3 days ago
There's often a tension between efficiency (say, maximising throughput or minimising latency) and robustness (being able to cope with input shortages, demand shocks, or failures). The world got to experience a bunch of logistical examples of this around COVID-19, but there are examples everywhere. Having a whole second engine on a passenger plane seems wasteful, until the first engine fails.
When attempting to apply a process-optimisation perspective from supply chains or manufacturing to software delivery, one key way the problem space differs is that software delivery doesn't produce a stream of identical units that are independent of each other.
If we abstract the software situation, we can tell ourselves that it is a repeatable process producing an endless stream of independent features or fixes (measured in "story points", say) that get shipped to production. This mental model may work some of the time, until it doesn't.
In reality, each software change is often a bespoke, one-off modification or addition to an existing system. Work to deliver different features or fixes is not fungible, and the work items may not be independent: changes can interfere with each other by touching overlapping components of the existing system and modifying them in incompatible ways. A more realistic mental model needs to acknowledge that there's a system there, that its existing architecture and accumulated cruft may heavily constrain what can be done, and that the system is often a one-off thing being changed in bespoke ways with each item of work that ships.
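The gap between the two mental models can be sketched as a toy simulation (everything here — component counts, effort units, the rework penalty — is an invented illustration, not a claim about real delivery data). Each work item touches a few components of a shared system; when it overlaps with components already modified by earlier items, it pays extra rework effort. The "independent units" model is the same simulation with the interference term switched off.

```python
import random

# Toy model (all parameters invented for illustration): each work item
# touches a random subset of components in a shared system. An item that
# overlaps with previously shipped items incurs extra rework effort.

random.seed(0)

N_COMPONENTS = 10     # size of the existing system (assumption)
N_ITEMS = 100         # number of work items to deliver (assumption)
BASE_EFFORT = 1.0     # effort per item in the "independent units" model
REWORK_PENALTY = 0.5  # extra effort per overlapping component (assumption)

def simulate(coupled: bool) -> float:
    """Total effort to ship N_ITEMS, with or without interference."""
    touched = set()  # components modified by previously shipped items
    total = 0.0
    for _ in range(N_ITEMS):
        item = set(random.sample(range(N_COMPONENTS), k=3))
        effort = BASE_EFFORT
        if coupled:
            # Later items pay for incompatible overlap with earlier work.
            effort += REWORK_PENALTY * len(item & touched)
        touched |= item
        total += effort
    return total

print(simulate(coupled=False))  # the "stream of independent units" model
print(simulate(coupled=True))   # items interfere via shared components
```

The uncoupled run always costs exactly N_ITEMS × BASE_EFFORT, while the coupled run costs strictly more — and the gap grows as the system accumulates touched components, which is the cruft-constrains-future-work dynamic described above.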