
Comment by hansvm

9 months ago

> sheer rigidity

That looks more like a communication-style difference than anything else. Uncle Bob's talks and writing are prescriptive -- a style literally beaten into me back in grade school, on the theory that the fact that it's you doing the speaking already implies you're only describing your opinions, and that any additional hedging language weakens your position further than you actually intend.

If you listen to him in interviews and other contexts where he's explicitly asked about dogmatism -- in general or about this or that specific concept -- he's very open to pragmatism and rarely needs much convincing in the face of even halfway decent examples.

> animus toward comments

Speaking as someone happy to drop mini-novels into the tricky parts of my code, I'd still defend this animus as directionally correct advice (so long as the engineer employing it is open to pragmatism).

For a recent $WORK example, I was writing some parsing code and had a `populate` method to generate an object/struct/POCO/POJO/dataclass/whatever-it-is-in-your-language, and as it grew in length I started writing some comments describing the sections, which for simplicity's sake we'll just say were "populate at just this level" and "recurse."

If you take that animus toward comments literally, you'll simply look at those comments and say they have to be removed. I try to be pragmatic, and I took it as an opportunity to check if there was some way to make the code more self-evident. As luck would have it, simply breaking that initial section into a `populate_no_recurse` method created exactly the documentation I was looking for and also wound up being helpful as a meaningful name for an action I actually wanted to perform in a few places.
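For concreteness, here's roughly the shape of that refactor as a Python sketch -- the `Node` dataclass and its fields are invented stand-ins for the real $WORK types; the only part that mirrors the actual change is the `populate`/`populate_no_recurse` split:

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    name: str = ""
    value: str = ""
    children: list["Node"] = field(default_factory=list)


def populate_no_recurse(node: Node, raw: dict) -> None:
    """Fill in just this level's fields, ignoring children."""
    # This used to be the "populate at just this level" section of populate().
    node.name = raw.get("name", "")
    node.value = raw.get("value", "")


def populate(node: Node, raw: dict) -> None:
    """Fill in this level, then recurse into the children."""
    populate_no_recurse(node, raw)
    # This loop used to sit under the "recurse" comment.
    for child_raw in raw.get("children", []):
        child = Node()
        populate(child, child_raw)
        node.children.append(child)
```

The section comments turned into a method name, and `populate_no_recurse` turned out to be useful on its own at a few other call sites.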

That particular pattern (breaking a long method into a sequence of named intermediate parts) has failure modes, especially in the hot path in poorly optimized runtimes (C#, Java, ..., Python, ...), and definitely in future readability if employed indiscriminately, but I have more than enough experience to be confident it was a good choice here. The presence in my mind of some of Uncle Bob's directionally correct advice coloured how I thought about my partial solution and made it better.

> other animus

- Stylistic refactors that induce performance regressions can be worth it. As humans we're predisposed to risk avoidance, so consider the opposite action with the opposite effect: how often are you willing to slow down feature velocity AND make the code harder to maintain just to squeeze out some performance? (For a concrete example, suppose some operation has space/time/bandwidth tradeoffs that imply you should write a nasty recursive CTE in your database to compute something like popcount over billion-bit masks, or, even better, just rewrite that portion of the storage layer.) My job is 80% making shit faster and 10% teaching other people how to make shit faster, but there are only so many hours in the day. I absolutely still trade performance for code velocity and stability from time to time, and all of those fledgling startups with <1M QPS should probably be making that trade more often than I do (assuming it's an actual trade and not just an excuse for deploying garbage to prod).

- The "tortured method names" problem is the one I'm most on the fence about. Certainly you shouldn't torture a long name out of the ether if it doesn't fit well enough to actually give you the benefits of long names (knowing what it does from its name, searchability), but what about long names which do fit? For large enough codebases I think long names are still worth the other costs. It's invaluable to be able to go from some buggy HTML on some specific Android device straight to the one line in a billion creating the bug, especially after a couple hiring/firing sessions and not having anybody left who knows exactly how that subsystem works. I think that cutover point is pretty high though. In the 100k-1M lines range there just aren't enough similar concepts for searchability to benefit much from truly unique names, so the only real benefit is knowing what a thing does just from its name. The cost for long names is in information density, and when it's clear from context (and probably a comment or three) I'm fine writing a numeric routine with single-letter variable names, since to do otherwise would risk masking the real logic and preventing the pattern-recognition part of your brain from being able to help with matters. HOWEVER, names which properly tell you what a thing does are still helpful (the difference between calling `.resetRetainingCapacity()` and `.reset()` -- the latter you still have to check the source to see if it's the method you want, slowing down development if you're not intimately familiar with that data structure). I still handle this piece of advice on a case-by-case basis, and I won't necessarily agree with my past self from yesterday.

> "Uncle Bob devotees" vs "Uncle Bob"

This is maybe the core of your complaint? I _have_ met a lot of people who like his advice and aren't very pragmatic with it. Most, IME, are early-career and just trying to figure out how to go from "I can code" to "I can code well," and can therefore be coached if you have well-reasoned counter-examples. Most of the rest, IME, like Uncle Bob's advice but don't code much, so their opinions are about as valuable as any other uninformed opinion, and I'm not sure I'd waste too much time lamenting that misinformation. For the rest of the rest? I haven't interacted with a large enough sample to say anything useful, but unrelenting dogmatism is pretty bad, and people like that certainly exist.

Thanks for the thoughtful response. I generally don't want to get into the specifics of what Martin advocates for. Whether to prefer or eschew comments, give methods a particular kind of name, or accept a performance penalty for a refactor--those are all things that are good or bad in context.

I think a lot of engineers hear "there's a time and a place" or "in context" and assume that I'm saying the approach to coding can or should differ with every contribution to a codebase. Not so! It's very important to have default approaches to things like comments, method length, coupling, naming, etc. The default approach that makes the most sense is, however, bounded by context, not Famous Author's One True Gospel Truth (or, in many cases, Change-Averse Senior Project Architect's One True Gospel Truth). The "context boundary" for a set of conventions/best practices is usually a codebase/team. Sometimes it's a sub-area within a codebase. More rarely, it's a type of code being worked on (e.g. payment processing code merits a different approach from kleenex/one-off scripts). Within those context boundaries, it's absolutely appropriate to question when contributors deviate from an agreed-upon set of best practices--they just might not be Martin's best practices.

Rather, the core of my critique is that Martin's approach lacks perspective. Perspective/pragmatism--not some abstract notion of "skill level in creating well-factored code according to a set of rules"--is the scarce commodity among the intermediate-seeking-senior engineers that Martin's work is primarily marketed toward and valued by.

From there, I see two things wrong with Martin's stance in the Ousterhout transcript:

"Out of touch" was not an arbitrarily chosen ad-hominem. When Osterhout pressed Martin to improve and work on some code, Martin's output and his defense of it were really low-quality. I can tell they're really low quality because, in spite of differing specific opinions on things like method length/naming/SRP, almost everyone here and to whom I've showed that transcript finds something seriously wrong with Martin's version, while the most stringent critique of Osterhout's code I've seen mustered is "eh, it's fine, could be better". That, and Martin's statements around the "why" of his refactors, indicate that the applicability of his advice for material code quality improvements in 2025 (as opposed to, say, un-spaghettification of 2005 PHP 5000-line god-object monstrosities) is in doubt. On its own, that in-applicability wouldn't be a massive problem, which brings me to...

Second, Martin is a teacher. When you mention '"Uncle Bob devotees" vs "Uncle Bob"' and I talk about the rigidity I see among people who like Martin, I'm talking about him as a teacher. This isn't a Torvalds or Antirez or Fabrice Bellard-type legendary contributor discussing methodological approaches that worked for them while building important software. Martin is first and foremost (and perhaps solely) a teacher: that's how he markets himself and what people value him for. And that's OK! Teachers do not have to be contributors/builders to be great teachers. However, it does mean that we get to evaluate Martin on the quality of his pedagogical approach rather than judging the ideas he teaches on their merits alone. Put another way, teachers say half-right things all the time as a way of shielding students from things they're not ready for, and we don't excoriate them for that--not so long as the goal of preparing students to understand the material in general (even if some introductory shortcuts have to be unlearned later) is upheld.

I think Martin has a really poor showing as a teacher. The people his work resonates with most strongly are the people who take it to the most rigid, unhealthy extremes. His instructional tone is absolute, interspersed with a few "...but only do this pragmatically, of course" interjections that he himself doesn't really seem to believe. In high-performing engineering departments, his material is often treated as something leaders have to guard against being taken too far rather than something they're happy to have juniors studying. Those things speak to failure as a teacher.

Sure, software engineers are often binary thinkers prone to taking things to extremes--which means that a widely regarded teacher of that crowd is obligated to take those tendencies into account. Martin does not do this well: he proposes dated practices that are inappropriate in many cases, while modeling a stubborn, absolutist tone in his instruction and in his responses to criticism. Even if I were to give his specific technical proposals the greatest possible benefit of the doubt, this is still bad pedagogy.