Comment by beaconstudios
5 years ago
This take is probably going to be controversial here, but I suspect that most metrics don't accomplish anything beyond giving control freak managers a sense of control or insight.
Most complex processes can't be reduced to a handful of simple variables - it's oversimplification at its worst. The best you can do is use metrics as a jumping-off point for where something /might/ be going wrong, and then start engaging with actual humans (or reading code/logs/some other source of feedback). Too often I've had to deal with management who go straight from metrics to decisions and end up making bad ones (or wasting everyone's time shuffling paper to generate good-looking metrics).
> This take is probably going to be controversial here, but...
You then state what seems to be the mainstream view on HN. Certainly I don't see it as controversial, just kind of obvious.
I figured it would be controversial because I see the HN crowd as leaning more towards maths/reductionism/measuring than intuition/holism/feedback, and while I'm specifically levying the anti-quantification argument against managers in this case, it also applies to that approach in general.
A large percentage of HN are software developers, and no developer wants to be held to some metric by some non-developer boss.
Yeah, but OTOH we're mostly software people who have seen first-hand what happens when you try to apply naive metrics to software. In other fields you're more likely to be right.
It gets even worse: the metric can actively harm. As suggested in the OP, if the number of lines of code you wrote were used to show your productivity, programmers would start optimizing for maximizing lines of code, which would make their code worse.
Goodhart's law rephrased by Marilyn Strathern: "When a measure becomes a target, it ceases to be a good measure" https://en.wikipedia.org/wiki/Goodhart%27s_law
Campbell's law: "The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor" https://en.wikipedia.org/wiki/Campbell%27s_law
Yes, precisely this: you close your feedback loop not over the actual result you wish to achieve, but over a crude numerical reduction which probably won't correct your actions effectively (as per your example, optimising for lines written rather than features shipped).
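A toy sketch of that failure mode (the approaches and numbers below are made up for illustration): when the feedback loop closes over a proxy metric instead of the real goal, the "best" option by the proxy can be the worst by the actual outcome.

```python
# Hypothetical example: two ways to build the same feature set.
# Closing the loop over lines of code rewards the wrong approach.
candidates = [
    {"approach": "copy-paste everywhere", "lines_of_code": 5000, "features_shipped": 1},
    {"approach": "reuse a library",       "lines_of_code": 200,  "features_shipped": 3},
]

by_metric = max(candidates, key=lambda c: c["lines_of_code"])      # the proxy
by_goal = max(candidates, key=lambda c: c["features_shipped"])     # the real goal

print(by_metric["approach"])  # "copy-paste everywhere" wins on the proxy
print(by_goal["approach"])    # "reuse a library" wins on the outcome
```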
Good metrics were one of the bases of the industrial revolution: Ford very strictly and methodically decreased the cost of manufacturing cars - especially the hours of work required per car and the number of incidents - without decreasing the quality of the product. I think his book, where he goes into the details, is awesome.
The problem is that the number of lines of code is a good metric only if it is decreased without decreasing code quality.
Creative work is nothing like industrial work and that's the problem. Programmers and designers aren't stamping parts repetitively all day so the system doesn't linearize in that manner.
Both are engineering and automation work, actually; the only difference I see is the initial capital required to run an experiment in hardware vs software, and the marginal cost of distribution.
The smartest people who are stamping parts all day were the people who Ford promoted for higher positions to make the work more efficient. He wrote that most people were happy with the repeated work, but there were a few who were better as leaders or engineers.
Tesla’s growth curve is actually very similar to what Ford’s was at the start.
5 replies →
In order to get any insight into whether a chosen course of action is working or not, you need to be able to perform some type of measurement. All of these measurements are called metrics. The default single metric that every company has available to them is revenue, but really you want to have feedback loops that provide insight prior to performing a measurement to determine whether you're bankrupt or not. The more precisely you try to measure something, the more uncertainty and error you're going to introduce to your measurements, something that's true for all forms of empirical measurement.
If your point was that companies are generally bad at doing this, or that they often measure the wrong things, or that the process can be abused, or that you should not attempt to measure something beyond a certain level of precision, then I'd agree with you. But writing the entire process off as useless is just as unproductive as the problematic situation you're criticizing.
There are measurements (say, the increase in customer retention as a % after a new feature is deployed) and then there are heuristics (discussing the feature with customers to gauge sentiment, being careful not to fall prey to bias or lead the customer's answers).
My point is that an obsession with empiricism can make you think that only the former counts as valid evidence, and thus use it for qualitative analysis where it should not be used.
Only using metrics for feedback is giving yourself tunnel vision.
I don't disagree that metrics can cause problems - but they could also be helpful when working on difficult problems. I don't know of one that exists, but there are times when a really tough nut lands on my desk and I can't bring back a solution for a few weeks. A good metric would highlight the fact that, a week in, while I may have no solution to the problem, progress is being made.
Right now our metric is basically - talk to the developer and try and see if he's BSing you and goofing off, that's super subjective and very vulnerable to personal biases, but, it is a metric - it's just not an objective metric.
I don't know what it is - I've never seen evidence of a good one out there - but I don't begrudge managers trying to find new objective measures for productivity. I'd be quite excited to see one myself.
The mistake is in trying to quantify a qualitative issue - trying to reduce progress on building a program to a number of lines and such. It inherently doesn't make sense, and it's not possible to accurately represent such things as a number or collection of numbers without losing all the detail (and thus being wrong).
The idea that only truths expressible in abstract equations are objective and thus true is exactly the kind of false belief that gets us in trouble.
> Right now our metric is basically - talk to the developer and try and see if he's BSing you and goofing off, that's super subjective and very vulnerable to personal biases, but, it is a metric - it's just not an objective metric.
That isn't a metric. Metric, having the same root as metre, is about measuring. What you're talking about there is a heuristic, and they're much more effective for tracking qualitative issues.
So, what would you do differently? Say you run an organization with 200 engineers all with different levels of skill. You have a budget, maybe a year of runway, and a set of deliverables.
How would you, as a leader, keep track of how your organization is running?
The Streetlight effect [0]. Just because it's the only place where you can see anything doesn't mean it's meaningful to look there. A number with high precision won't necessarily tell you anything meaningful. Some problems just don't have easy solutions.
So in your example, you just have to rely on the judgement of all your professional project leaders and architects and what they tell you.
[0] https://en.wikipedia.org/wiki/Streetlight_effect
By implementing a systems solution similar to Stafford Beer's VSM (https://en.m.wikipedia.org/wiki/Viable_system_model). Or to oversimplify the idea: self-managing teams which integrate with their environment for feedback, and management for direction (which I believe is the agile/lean practices done properly).
The specific approach to metrics I was referencing as being better is known in cybernetics as an algedonic alert. It doesn't seek or claim to provide information, it only rings the bell of "investigate this area", like a CloudWatch alert for your organisation.
Using metrics to make decisions is the mistake in my mind.
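That alert-only role can be sketched in a few lines; the metric names and threshold below are made up for illustration. The point is that the alert carries no diagnosis - it only says where a human should go look.

```python
# Hypothetical sketch of an "algedonic alert" in the VSM sense:
# a metric that only rings a bell ("investigate this area"),
# never feeding directly into a decision.

def algedonic_check(readings, threshold):
    """Return the names of areas whose reading crossed the threshold."""
    return [area for area, value in readings.items() if value > threshold]

readings = {
    "deploy_failure_rate": 0.02,
    "support_ticket_backlog": 0.4,
    "sprint_carryover": 0.15,
}

for area in algedonic_check(readings, threshold=0.25):
    print(f"ALERT: investigate {area}")  # a human takes it from here
```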
Unfortunately, work from home situations make some managers more eager to have pre-defined itemized metrics that they can understand.
In my work, we shifted to online project management tools (without training, dare I say), which is just additional work on top of actually getting things done (and balancing with increased household maintenance, which nobody talks about).
Worse, we also wasted meeting hours (everyone's time) just defining how to define our progress.
I'm not sure why you would think this is controversial. For example, Goodhart's law is commonly cited around here, and it says roughly the same thing.
Goodhart's law applies to using a metric as a target. I'm talking about metrics being bad for measuring because they inherently overgeneralise.
Metrics are not an end, but they can be a start... and they are a necessary part of the feedback loop.
Metrics are not the only sensory organ of the organisation. This is exactly the category error I'm pointing at.