
Comment by nialse

3 days ago

The trade-off between utilization and latency is rarely understood in organizations. Little's law should be mandatory (management) reading. Unused capacity is not waste but a buffer that absorbs variability and thus keeps latency down.
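A toy illustration of that point, with made-up numbers: Little's law says L = λW (items in the system = arrival rate × time each item spends in the system), so in steady state, cycle time W = WIP / throughput, and at fixed throughput latency is directly proportional to work in progress:

    # Little's law: L = lambda * W, rearranged as W = L / lambda.
    # Throughput and WIP figures below are invented for illustration.
    throughput = 10.0            # tasks the team finishes per week (assumed)
    for wip in (10, 30, 50):     # tasks in flight at once
        latency = wip / throughput
        print(f"WIP {wip:2d} -> average cycle time {latency:.1f} weeks")

Cutting WIP from 30 to 10 cuts average cycle time from 3 weeks to 1 without anyone working faster, which is the sense in which slack is not waste.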

This is the exact mental model I was looking for.

It reminds me of Kingman's formula in queueing theory: expected wait grows roughly like ρ/(1−ρ), so as server utilization approaches 100%, the wait time approaches infinity.
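A quick sketch of that blow-up, using Kingman's G/G/1 approximation (the coefficient-of-variation values here are illustrative assumptions, not measurements):

    # Kingman's approximation for mean queueing delay in a G/G/1 queue:
    #   E[Wq] ~ rho / (1 - rho) * (ca2 + cs2) / 2 * service_time
    # ca2, cs2 = squared coefficients of variation of interarrival and
    # service times; 1.0 for both is an assumed, exponential-like case.

    def kingman_wait(rho: float, ca2: float = 1.0, cs2: float = 1.0,
                     service_time: float = 1.0) -> float:
        """Approximate mean wait in queue, in units of service time."""
        return rho / (1.0 - rho) * (ca2 + cs2) / 2.0 * service_time

    for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
        print(f"utilization {rho:.0%}: wait ~ {kingman_wait(rho):5.1f}x service time")

Going from 50% to 80% busy already quadruples the wait (1x to 4x the service time); at 99% utilization, jobs wait roughly 99 times longer than the work itself takes.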

We intuitively understand this for servers (you never run a CPU at 99% if you want responsiveness), yet for some reason, we decided that a human brain—which is infinitely more complex—should run at 99% capacity and still be expected to handle urgent interruptions without crashing.