Comment by vodou
1 day ago
Some thoughts regarding this:
1. It is partly because the typical metrics used for software development in big corporations (test coverage, cyclomatic complexity, etc.) are such snake oil. They are constantly misused and/or misinterpreted by management, which causes developers a lot of frustration.
2. Some developers see their craft as a form of art, or at least an activity for "expressing themselves" in an almost literary way. You can laugh at this, but I think it is a very humane way of thinking. We want to feel a deeper meaning and purpose in what we do. Antirez of Redis fame has expressed something like this. [0]
3. Many of these programmers are working with games and graphics and they have a very distinct metric: FPS.
1. Totally agree that the field of software metrics is dominated by clueless or outright bad actors. I can say with complete certainty that I do not know the right way to measure software quality. All I know is that quality is handled as a metric in most hardware companies, not as an abstract concept. When it’s talked about as such an ephemeral thing by software people, it strikes me as a bit disconnected from reality. (If I were going to try, I’d probably shoot for bugs per release version, or time from first spec to feature release.)
2. With respect: that’s a bit of an exceptionalist mindset. There’s nothing precious about software’s value to a business. It’s a vehicle to make money. That’s not to say craft isn’t important - it is, and it has a tangible impact on the work. The point I’m making is that my boss would laugh me out of the room if I told him “You can’t measure the quality of my electronics designs or my delivery process; it’s art.”
3. I’ve never heard of FPS, but I’m very interested in learning more. Thanks for sharing the link.
Edit: oh ok duh yeah of course you could measure the frame rate of your graphics stack and get a metric for code quality. D’oh. Whoops. XD
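To make that concrete, here is a minimal sketch of my own (not from the original comment; render_frame and the 2-second window are placeholder assumptions): time the frame loop over a fixed wall-clock window, report frames per second, and track that number per build.

```python
import time

def render_frame():
    # Stand-in for real rendering work; replace with your engine's update/draw call.
    total = 0
    for i in range(100_000):
        total += i * i
    return total

def measure_fps(seconds: float = 2.0) -> float:
    """Run the frame loop for a fixed wall-clock window and return frames per second."""
    frames = 0
    start = time.perf_counter()
    while time.perf_counter() - start < seconds:
        render_frame()
        frames += 1
    elapsed = time.perf_counter() - start
    return frames / elapsed

if __name__ == "__main__":
    fps = measure_fps()
    print(f"{fps:.1f} FPS")  # Track this per build/commit to spot regressions.
```

Swap render_frame for whatever your graphics stack actually does; the point is only that the metric falls out of a few lines of timing code.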
> The point I’m making is that my boss would laugh me out of the room if I told him “You can’t measure the quality of my electronics designs or my delivery process; it’s art.”
You can find some kind of objective metric, e.g. bug count or time spent developing new features. That alone is super hard to get right, but even if you could get it right, it wouldn't necessarily tell you which techniques lead to a better result. People have tried studying such things (e.g. whether static types help), and the studies rarely find any clear effect.
I don't think that's necessarily because these things have no effect individually, but because there are humans involved: personal ways of thinking probably play an outsized role, so technique X might be a very good fit for person A but not for person B.