Comment by adrianratnapala
8 years ago
Programs are fast when they don't force the computer to do much work, so speed overlaps (albeit imperfectly) with simplicity of code. This mostly explains why programs start fast and then slow down as they cover more use-cases. But some optimisations also amount to simplifying an existing system.
For example: as you get to know your use-cases better, you might simplify your code by sacrificing unwanted flexibility. Or you might replace a general-purpose data structure with a special-purpose one that is not just faster, but concretely embodies the semantics you desire.
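A minimal sketch of that second point, with hypothetical names: a dict happily counts any hashable key, but if you know the keys are small non-negative integers, a flat list is both faster and encodes that constraint directly.

```python
def count_general(events):
    # General-purpose: works for any hashable key, imposes no semantics.
    counts = {}
    for e in events:
        counts[e] = counts.get(e, 0) + 1
    return counts

def count_specialised(events, n_kinds):
    # Special-purpose: embodies "event kinds are 0 .. n_kinds-1".
    # Faster (no hashing), and an out-of-range kind fails loudly.
    counts = [0] * n_kinds
    for e in events:
        counts[e] += 1
    return counts
```

The specialised version has lost the flexibility of the dict, but that flexibility was exactly the "unwanted" part.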
A case that is not quite a simplification is removing code re-use. Instead of using one function in three different ways, you use three separate optimised functions. Now changes to one use case don't cause bugs in the others. That's the kind of thing quotemstr meant by "making logic orthogonal".
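A toy illustration of that split, with made-up names: one function serving three use-cases through a mode flag, versus three independent functions where a change to one path cannot break the others.

```python
def render(value, mode):
    # One function, three use-cases: every caller shares this body,
    # so an edit for one mode risks the other two.
    if mode == "plain":
        return str(value)
    elif mode == "hex":
        return hex(value)
    elif mode == "padded":
        return f"{value:08d}"
    raise ValueError(mode)

# The "orthogonal" version: three separate functions, each free to be
# optimised (or changed) without touching the others.
def render_plain(value):
    return str(value)

def render_hex(value):
    return hex(value)

def render_padded(value):
    return f"{value:08d}"
```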
> A case that is not quite a simplification is removing code re-use. Instead of using one function in three different ways, you use three separate optimised functions. Now changes to one use case don't cause bugs in the others. That's the kind of thing quotemstr meant by "making logic orthogonal".
Which is exactly what I mean by making systems more brittle, less portable, and less maintainable.
You find a corner case that the original function didn't cover; now instead of fixing it in one place you need to fix it in three places, with all the headaches that causes.
So the next maintainer thinks: "Gee I can fix this by bringing all these functions together".
So if they're using an OO language they make an abstract base class from which the behaviour is inherited, or a function factory otherwise.
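A sketch of that consolidation step, with hypothetical names: the shared skeleton moves into a base class, and each formerly independent function becomes a subclass overriding only the part that differed. The corner-case fix now lives in one place again, at the cost of indirection.

```python
from abc import ABC, abstractmethod

class Renderer(ABC):
    def render(self, value):
        # Shared skeleton: a bug fix here reaches every subclass at once.
        return self.prefix() + self.body(value)

    def prefix(self):
        return ""

    @abstractmethod
    def body(self, value):
        ...

class HexRenderer(Renderer):
    def prefix(self):
        return "0x"

    def body(self, value):
        return format(value, "x")

class PaddedRenderer(Renderer):
    def body(self, value):
        return format(value, "08d")
```

Every call now goes through a dynamic dispatch and an extra method hop, which is the "even more overhead" below.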
So now you're back to a slow function with even more overhead, that's even harder to debug.
So the next maintainer comes around and thinks: "Gee I can speed this up if I break out the two functions that are causing 90% of the bottleneck".
Now you have four completely independent functions to keep track of.
Repeat ad nauseam.