Clean Code vs. A Philosophy Of Software Design


It still blows my mind how dogmatic some people can be about things like this. I don't understand why anyone takes these things as gospel.

Who else has had to deal with idiots who froth at the mouth when you exceed an 80-character line margin?

And it's not just programming styles, patterns and idioms. It's arguably even worse when it comes to tech stacks and solution architecture.

It's super-frustrating when I'm dealing with people in a professional setting and they're quick to point out something they read in a book, or even worse - a blog - with very little else to add.

This was especially bad during the NoSQL and Microservice hype. Still somewhat feeling it with PaaS/SaaS and containerization. We have so many really really basic things running as Function Apps or lambdas, or simple transformations running in ADF or Talend that add zero value and only add to the support and maintenance overhead.

Always keep in mind that sometimes the only difference between yourself and the person writing the book/blog/article is that they actually wrote it. And their opinions being written down doesn't make them fact. Apply your own mind and experience.

  • I cringe thinking about PR comments I left early in my career.

    "akshually this should try to follow more SOLID principles"

    But, coming from a formal engineering background, I thought this is what it meant to be a professional software engineer. Little did I know these "principles" were just the musings of a consultant lol. Turns out most folks have good intentions and want a standardized way to write code, but for some reason it always results in code that looks like the Enterprise FizzBuzz meme repo.

    • For some reason in software there seems to be an incredibly large space for non-evidence based thinking and belief systems.

      I wonder if that's because in a lot of cases (depending on the domain) the space of possible valid/working solutions is near infinite, and if you don't have hard requirements that are backed up by measurements you're free to conceive of any valid system structure and justify it as 'better' without that ever being something that can be observed and measured.

      4 replies →

    • The mark of a good engineer is knowing when this sort of handwaving is actually meaningful and helpful. Formality for its own sake is an anti-pattern, but who am I telling?

  •     > It still blows my mind how dogmatic some people can be about things like this. I don't understand why anyone takes these things as gospel.
    

    IMO, this is one of the key differences between the two books. CC has a vibe of hard and fast opinion-based rules that you must obey, whereas APoSD feels more like empirically-derived principles or guidelines.

    • APoSD is written by a highly respected computer scientist with a tremendous list of technical achievements and also a strong teaching history as a professor, while CC was written by someone whose resume is primarily related to writing about software, not writing software.

      1 reply →

  • I think it stems from fundamental misunderstandings about what it is one is actually trying to do when writing code.

    Coding is about building a computable model of some facet of existence, usually for some business. When it comes to model building, comprehension and communication are paramount. Performance and other considerations are also important but these are arguably accidental features of machines and, in an ideal world, would not actually affect our model.

    Similarly, in an ideal world, we wouldn't even need programming languages. We'd be able to devise and explain computational systems in some kind of perfect abstract language and not need to worry about their realization as programs.

    I think a lot of these blanket philosophies confuse people by not emphasizing the higher level aspects of the activity enough. Instead people get hung up on particular patterns in particular paradigms/languages and forget that the real goal is to build a system that is comprehensible to the community of maintainers that need to work with it.

    • It seems that each software design/development system, ideology, and practice has a good reason it was created, and has certain inherent benefits. Each may solve (or at least help with) some common problem.

      For instance, abstraction is good and short methods are good to some extent (who wants to read a 2000-line function?), but as John points out in the article, these can be taken too far, where they create new and perhaps worse problems.

      It seems there's a pendulum that swings back and forth. We go from big up front design, to Extreme Programming, to a pervasive object-oriented design culture, back to other paradigms.

  • Yes, Uncle Bob is certainly capable of being pedantic. A friend of mine, a Smalltalk Consultant, partnered with him for a while. "With Uncle Bob, it's his way or the highway."

    His clean code work is certainly pretty dogmatic. As I recall, he says that Java is not object oriented.

    But if my memory serves me correctly, his book about C++ (Designing Object-Oriented C++ Applications Using the Booch Method) has some excellent parts. His description of the difference between a class and an instance is one of the better ones.

    Then there is the famous Sudoku puzzle incident, in which a student trying test-driven development can't get the solution. It is a very instructive incident which illustrates that TDD is unlikely to help you solve problems that are beyond incremental changes. Peter Norvig's solution makes that very clear. Uncle Bob does not seem to realize that.

    > Who else has had to deal with idiots who froth at the mouth when you exceed an 80-character line margin?

    But I admit in my youth, I was pretty dogmatic about languages and development practices, so I've been that guy.

    • >> ... TDD is unlikely to help you solve problems that are beyond incremental changes.

      Thank you for expressing this niggling problem with TDD. Personally, I just cannot use it for "new stuff"; I need to explore and create directly with "real" code for anything non-obvious.

      4 replies →

    • > Java is not object oriented.

      Java technically isn't OO in the strictest sense (Smalltalk, Ruby). It is OO in the modern sense (where modern >= 1980s, C++). Though I am not sure if this is what Bob is referring to - I don't have any respect for the man or his ideas, so my biased guess is his definition of OO is shared only between him and his fans.

    • >But if my memory serves me correctly, his book about C++ (Designing Object-Oriented C++ Applications Using the Booch Method) has some excellent parts.

      If my memory serves me correctly, Grady Booch himself had a book with roughly the same title, except that his name would not be in the title, of course, but would be there as the author. I think I read a good amount of it long ago, and liked it.

      Edit: I googled, and the book is mentioned here under the section "Booch method":

      https://en.m.wikipedia.org/wiki/Grady_Booch

    • Bob's had a long life with too much success. He really believes in himself. But, I have to say that the other guy was aggressive and bad even though I am more inclined to agree with him. He willfully misrepresented Bob's ideas. I thought he presented more misguided certainty than Bob. No Bueno.

      2 replies →

    • Ah, yes. The famous Sudoku solver controversy.

      In 2006 Ron Jeffries wrote four blogs about solving Sudoku in Ruby with TDD. His blogging effort ended before he completed the solution. I think he got interested in something else and left the whole thing hanging.

      That same year Peter Norvig wrote a Sudoku solver in Python using a constraint based approach. You can see their respective documents here. https://ronjeffries.com/categories/sudoku/… https://norvig.com/sudoku.html

      The anti-TDD lobby, at the time, hailed the two documents as proof that TDD didn't work. Ha Ha, Nya Nya Boo Boo.

      I was aware of this silliness, but never bothered to study it. I had better things to do. Until March of 2020. Then I thought I'd use Sudoku as a case study for Episode 62 in http://cleancoders.com.

      I had not read either of the previous documents and decided to maintain that ignorance while writing the solver in Clojure using TDD. It turned out to be a rather trivial problem to solve. You can see my solution in http://github.com/unclebob/sudoku

      I don't know why Ron stopped blogging his Sudoku solver in 2006; but he picked it up again in 2024 and has written much more about it.

      The trick I used to solve Sudoku with TDD was to consider the degenerate cases. Sudoku is usually a 3x3x3x3 grid. Let's call this a rank-3 problem. I started with a rank 1 problem which is trivial to solve. Then I moved on to a rank 2 problem which was relatively simple to solve; but was also very close to a general solution. After that I could solve rank N problems.

      The TDD strategy of starting with the most degenerate case (rank 1) and then gradually adding complexity may not have been well known in 2006. TDD was pretty new back then. If you explore Ron's first four blogs you can see that he briefly considered rank 2 but opted to go straight into the rank 3 case. The sheer number of variables for each test (81) may have played a role in his loss of interest. In my case (rank 2) I had far fewer variables to deal with.
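
      To make the degenerate-case idea concrete, here is a rough sketch in Python rather than Clojure (a plain backtracking solver; the function names and tests are invented for illustration and are not taken from the repo linked above):

          def solve(board, rank=1):
              """Solve a rank-N Sudoku given as a flat list; 0 marks an empty cell."""
              side = rank * rank
              try:
                  empty = board.index(0)
              except ValueError:
                  return board  # no empty cells left: solved
              row, col = divmod(empty, side)
              for candidate in range(1, side + 1):
                  if _legal(board, rank, row, col, candidate):
                      attempt = solve(board[:empty] + [candidate] + board[empty + 1:], rank)
                      if attempt is not None:
                          return attempt
              return None  # dead end: caller backtracks

          def _legal(board, rank, row, col, value):
              side = rank * rank
              row_vals = board[row * side:(row + 1) * side]
              col_vals = board[col::side]
              box_row, box_col = (row // rank) * rank, (col // rank) * rank
              box_vals = [board[(box_row + r) * side + box_col + c]
                          for r in range(rank) for c in range(rank)]
              return value not in row_vals + col_vals + box_vals

          # Rank 1: the degenerate case. The only cell must be 1.
          assert solve([0], rank=1) == [1]

          # Rank 2: a 4x4 board, already very close to the general problem.
          puzzle = [1, 0, 0, 4,
                    0, 0, 1, 0,
                    0, 1, 0, 0,
                    0, 0, 0, 1]
          solution = solve(puzzle, rank=2)
          assert solution is not None and 0 not in solution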

    • Wasn’t that Ron Jeffries who failed to solve that?

      I think that says more about the person at the keyboard and their lack of familiarity with the solution space than anything about TDD per se. You still need insight and design with TDD; blind incrementalism was never a good idea.

      2 replies →

  • To restate something I've said here last month:

    I'm fond of saying that anything that doesn't survive the compilation process is not design but code organization. Design would be: which data structures to use (list, map, array etc.), which data to keep in memory, which data to load/save and when, which algorithms to use, how to handle concurrency etc. Keeping the code organized is useful and is a part of basic hygiene, but it's far from the defining characteristic of the craft.

    • I disagree entirely. Design is fundamentally a human-oriented discipline, and humans work almost exclusively with code before it is compiled. A strong shared mental model for whatever we're doing is as much a part of software development as any code that runs on a computer.

      Programming languages can (should!) be amazing tools for thought rather than just tools for making computers do things; using these tools to figure out what we're doing is a critical part of effective development. The highest-leverage software engineering work I've seen has involved figuring out better ways of thinking about things: developing better tools and abstractions. Tools and abstractions compound since they fundamentally impact everything built on top of them. A good high-level design is the difference between a team that can add some specific capability in a day, a team that would take six months and a team that would say it cannot be done.

      1 reply →

    • My take is that the book also works as a source of authority for aspiring SSR and SR devs.

      Comments about code style are usually subjective, and they can be easily dismissed as personal preference or, in the case of a Jr dev, as a lack of skill.

      Until they bring up "The Uncle Bob book". Now, suddenly, a subjective opinion from a Jr dev looks like educated advice sourced from solid knowledge. And other people now have a reason to listen up.

      All of this is totally fabricated, of course. But it's like the concept of money. It's valid only because other people accept it as valid.

      7 replies →

    • > Keeping the code organized is useful and is a part of basic hygiene, but it's far from the defining characteristic of the craft.

      I'm with you, but I don't think it makes sense to elevate one absolutely over the other as the "defining characteristic." Either one can tank the development of a piece of software and prevent it from coming into being in a useful way.

      Arguments about which aspects of software are more important than others usually arise between people who have personally suffered through different ways that projects can fail. Any aspect of software development will feel like the "defining characteristic" if it threatens to kill your project.

      2 replies →

    • Granted, while I program a lot, I'm not employed as a programmer per se. My impression is that programming is easy and fun, but software development is hard and laborious. Things like hygiene are among the differences between the two.

      1 reply →

    • Systems need to be able to handle all kinds of stresses placed on them during their useful life. The runtime bytecode/machine code/config is what deals with the actual running of the system. The code is what deals with the engineers making future modifications to it. The monitoring system deals with being able to allow operators to ensure the system stays up. All of these affect the reliability and performance of the deployed system during its lifetime. All of them are a part of the design of the system.

    • > code organization

      It's also the code documentation.

      Having documentation that is legible is good, right? And so it's reasonable for a reviewer to say "this is hard to read", since it's failing at its primary purpose.

  • Professionals in other industries don't "just" write books. What I mean is that the field usually has several acclaimed authors, and they put some solid work into ensuring their books make sense. While there are disagreements in other fields, or some nonsense conventions, the conventional wisdom is usually at least good enough to make you a good professional.

    In programming it's the Wild West. Many claims are made based on nothing at all. It's very rare to see any kind of sensible research when it comes to the science part of CS. But following rules makes life easier. Even if rules are bad. That's kind of why conservatism exists as a political idea.

  • > Always keep in mind that sometimes the only difference between yourself and the person writing the book/blog/article is that they actually wrote it. And their opinions being written down doesn't make them fact. Apply your own mind and experience.

    But that difference is actually huge. I think you are downplaying the value of the writing process, assuming the writer is acting in good faith and truly trying to provide the best possible information.

    But when you start writing, you start noticing that this idea might not be that good after all. Maybe I need to read more about this? Did I note everything? This idea conflicts with this other topic that I just wrote? What is correct? And the list goes on. When you structure all your thoughts as written text, it is easier to detect all the conflicting ideas and mistakes. Not all writers are that good, but you should understand what I mean.

    • Writing is an excellent way to determine your opinions. There's a large gap between the ideas in your head and the ones that take shape when you write them down.

  • This is just how junior and intermediate devs behave. It’s like a goth phase or something.

    It goes along with being into BJJ, chess, vim, keto, linters, and “the dominance hierarchy”.

    It’s annoying, but most everyone went through it. If you didn’t know better, how could they?

  • > Who else has had to deal with idiots who froth at the mouth when you exceed an 80-character line margin?

    Not once in my 11 year career. But almost every codebase I've worked on has had debilitating maintainability issues because the only principle other engineers seemed to follow was DRY, at the sacrifice of every principle in SOLID.

    • Most code that I clean up is a lot easier to maintain after making it a lot DRYer.

      The point is not about being DRY in itself, though. The point is that the code then has better abstractions which are easy to reason about.

      UB seems to take abstractions a lot too far, replacing e.g. 2 lines of very clear code with some cleartotals abstraction.

      3 replies →

    • Lucky.

      I've had one which violated DRY and every SOLID principle…

      Well, Liskov might not have been violated, but it was hard to tell what with all the other nonsense in the 120 kloc of copy-pasted pantheon of god-classes that showed flagrant disregard for things so fundamental that you wouldn't think anyone could even get them weird, e.g. the question of "how are properties defined" being "solved" by having an array which was indexed by named constants… and because it was a god class, which items in that array ever got instantiated depended on which value was passed to the constructor.

      Eventually, I found they'd blindly duplicated an entire file, including my "TODO: deduplicate this method" comments, rather than subtype — and their excuse when called out on this was the access modifier, as if changing "private" to "public" was hard.

  • Because it's easy to be dogmatic, you don't need to think, consider the consequences or drawbacks, you just follow whatever the Supreme Leader told you to do.

    It's incredibly simple to just follow whatever someone is telling you to do; sometimes I wish I could live like this so I didn't have to fight with the people that do all the time.

  • On my Macbook Pro M2, having a browser window on one half of the screen, and my IDE on the other, with a file tree viewer pane and another pane for my LLM tools, a terminal pane at the bottom... I've never been more pressed for real estate for my actual code editing pane. Even 80 chars has me scrolling horizontally. Secondary monitors help but not when you frequently work away from your desk.

    • Coding on a laptop, even a name-drop-tier status shibboleth, is most of your problem. You write code on a 15" screen when you must for physical/location reasons. You shouldn't ever choose to do it or design your workflow around that constraint.

      A 42" 4k TV (got it for $2-300 at the start of the pandemic) gives me four 80-90 column text windows on a mid-tier chromebook. You could not pay me enough to do that same work on a laptop, even a $4k MBP.

      (But yes, even with lots of real estate 80 columns is still a net win)

      4 replies →

  • >> Always keep in mind that sometimes the only difference between yourself and the person writing the book/blog/article is that they actually wrote it.

    Well said!

  • The worst engineer I worked with was one who believed if he read it in a book, that opinion trumped anything else. Once he got so flustered he started yelling “come back to the discussion when you’ve read 13 books on this topic like me!” And it was something super mundane like how to organize config files or something.

    Made every engineering planning session a pain in the ass.

  • For many C#, Java and C++ engineers Uncle Bob is their savior and GoF are the apostles.

    Everything should follow SOLID and clean principles and be implemented using design patterns.

    • One of the best things I could do for myself is to go back in time and tell my younger self not to care so much about the "right" design pattern, or the perfectly DRY way to represent a piece of code. I was definitely my worst enemy for a long time, because I thought SOLID and the GoF design patterns were more important than writing code that is easy to understand without hopping across multiple files in case one day in the future your system needed to do something totally different with a new database or filesystem, etc. I started to look for places to add design patterns, rather than letting them develop naturally. Most of the software I built had no need for such heavy abstraction and complexity, and I've only ever had to switch database systems twice ever in 20 years, and the abstraction did not help reduce time or complexity all that much in the end. It definitely wasn't worth the up front planning compared to just rewriting the sections that directly handled the database.

      Maybe it's a rite of passage to burn yourself badly enough on over-architected solutions, where you finally start to understand you don't need all the complexity. Write the code for humans, as simple as you can. Keep large performance issues in mind, but only code around them when they become a problem or are extremely obvious. If anything, it's helped me to steer junior developers away from complex code, while encouraging them to try it out in their own time. Go ahead and figure things out on your own, but let's not do it on a shared codebase, please?

    • Which is unfortunate as there are no (legitimate) reasons to write C#, a multi-paradigm language, like this.

  • FWIW (not a lot), I do believe in a lot of these principles.

    For example, even with widescreen monitors, it is still useful to limit line length. Why? Because many people will have multiple source files side-by-side on one of those widescreen monitors, at which point it makes sense for lines to not run on indefinitely.

    And of course, that is just a guideline, one that I break regularly. However, if it's a method with many args, I'll break the args onto their own lines.

    However, the overriding concern is that an organisation codes to a common style, whatever that may be, so that unfamiliar code is predictable and understandable.

  • > idiots who froth at the mouth

    That seems like an unnecessarily harsh way to refer to people.

    • Clean Code zealots are consistently some of the least likable, least productive, least pragmatic people I have ever worked with. I've had multiple clients where the whole team is threatening to quit unless the CC zealot is fired. And when they are fired guess what - bugs go down, shipped features go up, and meetings become productive. "Idiots who froth at the mouth" is an understatement IMO

      7 replies →

  • > It still blows my mind how dogmatic some people can be about things like this. I don't understand why anyone takes these things as gospel.

    I love reading books for different perspectives.

    However, I’ve come to despise people who read books and then try to lord their book knowledge over others. These are the people who think that they have the upper hand in every situation because they read some books. They almost always assume you haven’t read them. If you point out that you have also read them, they switch the subject to another set of books they read because they don’t like when someone tries to undermine their book knowledge superiority.

    It’s even worse when the person reads books outside of their domain and tries to import that book knowledge into the workplace. The absolute worst manager I had was a guy who read a lot of pop-psychology books and then tried to psychoanalyze each of us according to those books.

  • > Who else has had to deal with idiots who froth at the mouth when you exceed an 80-character line margin?

    Honestly, no better indication of a very mediocre developer

  • The place I used to work at had an "architect" who, for any question about a decision he made, would refer to whatever it was as a "best practice."

    Was often quite wrong and always infuriating.

  • Well I mean they wrote books about it and one guy had the audacity to call his opinion a “philosophy” even though it’s just an arbitrary opinion.

    Most of software is about assigning big words and overcomplicated nomenclature to concepts, and these things masquerade as having deeper meaning when in reality it's just some made-up opinion.

    Software design is an art. It is not engineering and it is not science. That’s why there’s so much made up bullshit. The irony is we use “art” to solve engineering problems in programming. It’s like ok we don’t actually know the most optimal way to program a solution here so we make up bs patterns and philosophies. But then let’s give this bs pattern some crazy over complicated name like Scientology or inversion of control and now everyone thinks it’s a formal and legitimate scientific concept.

    Well, the cat's out of the bag for Scientology. Not yet for a lot of the bs in software. A "philosophy" is the biggest utter bullshit word for this stuff I've ever seen.

    • But there are typical practices we agree are good: using a VCS, writing tests, writing comments when needed, separating different levels of abstraction, etc. Right? This comes from years of common experience in software.

      Over time we get to find patterns, common issues and ways to fix them, etc. It doesn't have to be strict patterns but overall strategies.

      If we don't do that then it's just vibes right? Where's the engineering part?

      14 replies →

    • The issue is that programming is communication. Communication is indeed a form of art. Programming is not just giving instructions to machines; if that were the case we would be happily using binary code. So we have two dimensions: the first one is giving the binary instructions, but the other one is how to make these instructions understandable by humans, including ourselves.

      1 reply →

  • Square people are never going to agree with cool people. You can be cool and code some monstrosity or you can be square and say "we have to rebuild this entire project from scratch" every time you see a long method.

You just need to work on one project built by someone who implemented Uncle Bob's recommendations blindly when the books came out to know how much they are worth. There was some low-hanging fruit to pick at the time when it came to getting better at software engineering, and he generated some text about it.

Full of terrible advices, he never wrote anything significant (in scope and notoriety) during his time as a software engineer like many other prominent authors at the beginning of the agile era. The success is only the result of a wave of junior devs searching for some sort of guidance, something that there is a never-ending need for.

Horrible recommendations that produced a lot of code that is a pain to work on with the abundant amount of indirection it has. Really painful guys.

  • > The success is only the result of a wave of junior devs searching for some sort of guidance, something that there is a never-ending need for.

    The issue is that some never grow out of it. I interviewed with companies where they give the book to any new intern/junior. Then, during the hiring process, they don't even ask if you read it, they straight up ask questions about your knowledge of it. Like "What does Uncle Bob say about X in his book Clean Code?". And they constantly refer to it. Some people go as far as quoting it in PRs.

    The worst part is that once they leave their company, since they don't know anything else, they'll apply the same stuff elsewhere and convert their new company to it.

  • (English tip: advice isn't a countable noun, so you don't pluralise it)

    I agree entirely. My encounters with Uncle Bob were as a junior developer receiving advice [no "s"] from other junior developers.

    And yes, I too find it suspicious how many mavens of the "Agile era" never really managed to ship anything.

    • It's important to note that Kent Beck is not one of those people, as he shipped the first unit testing library, as well as a bunch of ones in other languages later.

      Like, I personally prefer the bare assert style of testing (like pytest), but the junit style is basically everywhere now.

      8 replies →

    • Thanks for the correction. My opinion on the lack of credentials is that in the early days you just didn't need them to become popular; there was so little content that no one checked. A bit like what's happening nowadays with the anime-profile-pic Twitter accounts acting like they invented AI, with zero code or achievements being shown.

  • It is very instructive to read the source code of the FitNesse framework.

    https://github.com/unclebob/fitnesse

    You can see how all his ideas come together into a ball of hundreds of almost empty classes, and gems such as "catch Throwable".

  • A Philosophy of Software Design on the other hand is concise, excellent, and based on decades of teaching experience.

    • If I win the lottery, I don't want a building named after me; I want to donate a copy of that book to every university computer science[1] student and make it required reading.

      1: I'm aware it's a software engineering book, but since there are very few B.S. Software Engineering programs out there, You Know What I Mean ™

  • Clean code, design patterns etc. were also picked up by teachers, professors and course instructors.

    I think these paradigms and patterns often operate on the wrong layer of abstraction, while mostly ignoring the things that matter the most, like efficiency, error handling and debugging.

    But getting good at these things requires a lot more blood, sweat and tears, so there's no easily teachable recipe for that.

    • Clean Code is trying to operate at a layer far more important than efficiency: code maintenance. In the vast majority of cases computers are fast enough that you don't need to worry about efficiency. (Part of this is that any modern language provides all the common algorithms, already highly optimized and easier to use than writing them by hand, and so the common places where you would want to worry are already efficient.)

      Of course error handling and debugging are part of maintenance. However there is a lot more than those two that need to be considered as well.

      There is reason to hate Clean Code, but the worst adherents to the rules are still producing far better code than some of the impossible stuff that happened before. "Goto considered harmful" is one of the early steps in fixing all the bad things programmers used to do (and some still do), but you can follow the "rules" of goto considered harmful and still produce really bad code so we need more.

      5 replies →

There is an important case for comments that neither of them touched on. Sometimes you are dealing with bugs or counterintuitive processes beyond your control.

For example, I am writing some driver software for a USB device right now. It is so easy to get the device into a bad state, even when staying within the documented protocol. Every time I implement a workaround, or figure out exactly how the device expects a message to appear, I put in a comment to document it. Otherwise, when (inevitably) the code needs to have features added, or refactoring, I will completely forget why I wrote it that way.

The prime number example is a self-contained, deterministic algorithm. While I did find it far easier to parse with comments, I could still spend the time to understand it without them. In my USB device driver, no amount of review without comments would tell another person why I wrote the sequence of commands a certain way, or what timings are important.

The only way around that would be with stupid method names like `requestSerialNumberButDontCallThisAfterSettingDisplayData` or `sendDisplayDataButDontCallTwiceWithin100Ms`.
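
To make it concrete, here is a hypothetical sketch of the comment-plus-workaround alternative; the device behaviour, the 100 ms figure, and all names are invented for illustration, loosely echoing the method names above:

    import time

    MIN_DISPLAY_INTERVAL_S = 0.1  # device quirk, see the workaround note below

    class DisplayDriver:
        def __init__(self, endpoint):
            self.endpoint = endpoint          # stand-in for a USB endpoint wrapper
            self._last_display_write = 0.0

        def send_display_data(self, frame: bytes) -> None:
            # WORKAROUND: this (hypothetical) device silently drops the second
            # display packet if two arrive within ~100 ms, and then needs a power
            # cycle to recover. Not in the protocol documentation; found
            # empirically. Do not remove this throttle when refactoring.
            elapsed = time.monotonic() - self._last_display_write
            if elapsed < MIN_DISPLAY_INTERVAL_S:
                time.sleep(MIN_DISPLAY_INTERVAL_S - elapsed)
            self.endpoint.write(frame)
            self._last_display_write = time.monotonic()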

  • > The only way around that would be with stupid method names

    Yep. Method names make terrible comments. No spaces, hard to visually parse, and that's before acronyms and ambiguity enter the conversation.

    As the person who often writes borderline-essay-length comment blocks explaining particularly spooky behaviors or things to keep in mind when dealing with a piece of counterintuitive/sensitive/scary code, my reason for mega-commenting is even simpler: all the stuff I put in comments should absolutely instead live in adjacent documentation (or ADRs, troubleshooting logs, runbooks, etc). When I put it in those places, people do not read it, and then they do the wrong things with the code. When I put it in comments, they read it, as evidenced by the rate of "that bug caused by updating the scary code in the wrong way happened again"-type events dropping to zero. It's easier to fix comment blocks than it is to fix engineers.

    • > Yep. Method names make terrible comments. No spaces, hard to visually parse, and that's before acronyms and ambiguity enter the conversation.

      Which is why snake_case or kebab-case (if the language allows it) is much better than PascalCase or camelCase.

      Even worse when camelCase enters into JSON because people want to automate the serde but are too lazy to make the actual interface (the JSON Schema) easy to read and debug.

  • > I will completely forget why I wrote it that way.

    This is the main reason for comments. The code can never tell you "why".

    Code is inherently about "what" and "how". The "why" must be expressed in prose.

  • > For example, I am writing some driver software for a USB device right now. It is so easy to get the device into a bad state, even when staying within the documented protocol. Every time I implement a workaround, or figure out exactly how the device expects a message to appear, I put in a comment to document it. Otherwise, when (inevitably) the code needs to have features added, or refactoring, I will completely forget why I wrote it that way.

    I believe in general there is a case for this (your case sounds like a perfect candidate). The implementation of Dtrace is another example[0] full of good description, including ASCII diagrams (aside: a case for knowing a bit of Emacs (though I'm sure vim has diagramming too, which I would know if I pulled myself out of nvi long enough to find out)).

    [0] https://github.com/opendtrace/opendtrace/blob/master/lib/lib...

  • While I am not an Uncle Bob-style "no comments"er I do love a ridiculous method name. I pay very close attention to that method and the context in which it is called because, well, it must be doing something very weird to deserve a name length like that.

    • That’s exactly why you should save that length only for a method that’s indeed doing something weird. If every method name is that long, the codebase turns into noise. (IOW I agree)

      6 replies →

    • There are a few Haskell functions with names like reallyUnsafePtrEquality# or accursedUnutterablePerformIO, and you know something interesting is going on :P

  • Sounds like you should instead be making these invalid states unrepresentable by encoding them in types and/or adding assertions. Especially if you're exposing them as interfaces, as your example function names would imply.

    • They're invalid states inside the USB device, not inside the driver code. So nothing you do to the driver code can make them unrepresentable. The best you can do is avoid frobbing the device in the problematic ways.

  • I don’t see anything wrong with those names. A bit hard to parse but the name moves with the function call while a comment does not.

    It’s annoying to look at but when you actually read the function you know what it does. A more elegantly named function is less annoying to read but less informative and doesn’t provide critical information.

    The name just looks ugly. But it's like people have this OCD need to make things elegant when elegance is actually detrimental to the user. Can you actually give a legitimate reason why a method name like that is stupid other than that it's "hard to parse"? Like another user said… use snake case if you want to make it easier.

  • Encoding temporal dependencies (or exclusions) between methods is hard. You can get partially there by using something like a typestate pattern (common in rust).
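
    A rough sketch of that idea, translated into Python for illustration (Rust can reject the bad ordering at compile time; here the shape of the API and a type checker do the work, and all names are invented):

        class DisplayBusy:
            """State right after sending display data; serial queries are not available here."""
            def __init__(self, endpoint):
                self._endpoint = endpoint

            def wait_ready(self) -> "DeviceReady":
                # ...wait for the device to settle, then hand back the full interface
                return DeviceReady(self._endpoint)

        class DeviceReady:
            """State in which every request is legal."""
            def __init__(self, endpoint):
                self._endpoint = endpoint

            def request_serial_number(self) -> bytes:
                return self._endpoint.query(b"SER?")   # hypothetical protocol command

            def send_display_data(self, frame: bytes) -> DisplayBusy:
                self._endpoint.write(frame)
                return DisplayBusy(self._endpoint)

        # device = DeviceReady(endpoint)
        # busy = device.send_display_data(frame)
        # busy.request_serial_number()   # not representable: DisplayBusy has no such method
        # device = busy.wait_ready()     # must transition back before querying again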

  • Have you thought about distilling your hard-earned information about the device's behavior into a simulator for the device you could test your code against?

I strongly recommend "A Philosophy of Software Design". It basically boils down to measuring the quality of an abstraction by the ratio of the complexity it contains vs the complexity of the interface. Or at least, that's the rule of thumb I came away with, and it's incredible how far that heuristic takes you. I'm constantly thinking about my software design in these terms now, and it's hugely helpful.
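
As a toy illustration of that ratio (my own example, not one from the book): a "deep" function with a small interface hides real complexity, while a shallow wrapper adds interface without taking any complexity off the caller's hands.

    import json
    import time
    import urllib.request

    def fetch_json(url: str, retries: int = 3, timeout: float = 5.0):
        """Deep-ish interface: one simple call hides retries, backoff, and decoding."""
        for attempt in range(retries):
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return json.loads(resp.read().decode("utf-8"))
            except OSError:
                if attempt == retries - 1:
                    raise
                time.sleep(2 ** attempt)  # simple exponential backoff before retrying

    def get_data(url: str):
        # Shallow interface: a new name, but nothing hidden on the caller's
        # behalf, so the extra layer costs more than it contributes.
        return fetch_json(url)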

I didn't feel like my code became better or easier to maintain after reading other programming advice books, including "Clean Code".

A distant second recommendation is Programming Pearls, which had some gems in it.

  • Implicitly, IIRC, the optimal ratio is 5-20:1. Your interface must cover 5-20 cases for it to have value. Any fewer, and the additional abstraction is unneeded complexity. Any more, and your abstraction is likely too broad to be useful/understandable. The example he gives specifically was considering the number of subclasses in a hierarchy.

    It’s like a secret unlock code for domain modeling. Or deciding how long functions should be (5-20 lines, with exceptions).

    I agree, hugely useful principle.

    • Maybe some examples would clarify your intent, because all the candidate interpretations I can think of are absurd.

      The sin() function in the C standard library covers 2⁶⁴ cases, because it takes one argument which is, on most platforms, 64 bits. Are you suggesting that it should be separated into 2⁶⁰ separate functions?

      If you're saying you should pass in boolean and enum parameters to tell a subroutine or class which of your 5–20 use cases the caller needs? I couldn't disagree more. Make them separate subroutines or classes.

      If you have 5–20 lines of code in a subroutine, but no conditionals or possibly-zero-iteration loops, those lines of code are all the same case. The subroutine doesn't run some of them in some cases and others in other cases.

      9 replies →

I was around before the clean code movement, and like all software movements, it was a reaction to real problems in the software industry. Massive procedural functions with deeply nested conditionals, no structure, global variables, no testing at all. That was all the norm.

Clean Code pushed things in a better direction, but it over-corrected. In many ways APOSD (published in 2018) is a correction against the excesses of Clean Code (published in 2008).

Will people swing too far back, to giant methods, deeply nested conditionals, etc? I don't know. But probably.

  • I believe that there is a genuine physiological effect that makes it a good idea to have the area of code that you need to think about fit entirely on one screen, without scrolling. There is probably an upper limit to the screen height where that limit is useful: I would believe a 100-line function to be above it and a 24-line function to be safely below it, but I wouldn't want to hazard a guess in the middle.

    It's all to do with how your brain processes what it's seeing, and the planning processes involved in getting to the next bit of information it needs. If that information is off-screen, then the mechanisms for stashing the current state and planning to move your hands in whatever way necessary to bring it onscreen will kick in, and that's a sort of disfluency.

    Similarly with tokens too far from whatever you're currently focused on. There's likely to be a region (or possibly a number of tokens) around your current focal point within which your brain can accurately task your eyes to scan, and outside that, there's a seeking disfluency.

    I think this is why you get weird edge cases like k and j, where they pride themselves on having All The Code in one 80x24 buffer, and it actually works for them despite breaking all the rules about code legibility.

    • The term you're looking for is cognitive load. It's a qualitative term used to represent the amount of information a person has to keep in working memory while working on a task.

      1 reply →

    • I agree. I once attempted this on a javascript project (a personal project, not at work), after reading about APL/J/K people and their philosophy. My constraint was: I should never have to scroll. I also aimed to have as few files as possible.

      The result was surprisingly pleasant, and it changed how I feel about this sort of thing. I think the Clean Code approach makes a lot of sense when you are working on a big project that contains lots of code that other people wrote, and you rely on IDE features like jumping to definition, etc. But if you can write code that fits on one screen without scrolling, something special happens. It's like all the negative aspects of terse code suddenly vanish and you get something way simpler and overall easier to work with and understand. But you really have to work to get it to that point. A middle ground (terse code but still spread out over lots of files, lots of scrolling) would be the worst of both worlds.

  • I think learning extremes can be useful, just don't take any one paradigm as gospel. Practicing Clean Code forces you to think in a special way, and when you've tried it, you start to get a feeling for where you should draw the line. Doing CC makes you a better programmer, but you have to figure out yourself where the tradeoffs are.

    Another example is TDD. Forcing myself to write tests for everything for a period has made all my code since better, even though I don't practice TDD now.

    • I feel the same way, the benefit of testing is that it forces you to write code that can be tested, which tends to make code better just in general.

  • If you don't nest that much, big functions aren't so bad.

    You can just scroll down and see what happens in a linear fashion.

    • This is the Linux kernel approach, and is a big part of why the kernel uses 8-space tabs. It's generally very effective for understanding what is happening. I'm happy with a 200-line straight-line-with-error-handling function, while a monstrosity of 10 20-line functions that all do if-else is quite a bit harder to read. The latter is "clean code."

  • As Ben Franklin wrote, "He's the best physician that knows the worthlessness of the most medicines."

  • I don't know, I think the kind of person that comments on hacker news is not the average sort of programmer. I mentioned Clean Code to a more experienced colleague today and he had no idea what I was talking about, and when I mentioned the ideas to him he laughed at them. So I don't think there's some sort of pendulum swinging the other way and people are going to start writing massive functions. You'll probably just see small subcultures come up with some new idea that's obnoxious.

    There has been some pendulum swinging around things like monoliths/microservices, but even then the number of people those things affected is actually much smaller than the larger community of programmers as a whole.

I have worked with a couple of people over the years who, instead of breaking functions out when something would, say, make sense to be reused or made some sort of logical sense as a unit, seemingly just bundled lines whose only real relationship was that they happened to be near each other when they decided to "refactor".

Having read Clean Code back in college as it was assigned reading, it was absolutely the vibe I got from Uncle Bob generally. See any number of lines at the same indentation level, select them, extract method, name it vaguely for some part of what it does, repeat.

I honestly think that it comes from this type of school of thought that a function should be X lines rather than a function achieving a function. Thinking about this now, it's sort of the difference between "subroutines" and "functions".

Working on their code, I thank god for modern IDEs ability to inline. I often go through and restructure the code just to understand the full scope of what it's doing, before restoring what I can of the original to make my changes as minimal as possible.

  • The warning sign I see when methods are split too much is that the method boundaries start to get messy: methods take too many arguments, or state is saved into confusingly named class members, or you end up returning some struct containing a grab bag of unrelated values.

  • It's been a long time since I read the book, but I took it to be less 'cut the function at X lines' and more 'long functions tend to be doing too many things at a time'. I think if you're able to give a good name to some subsection of a function, it's a good sign that it can be extracted out. At that point, you shouldn't need to look at the function's implementation unless it's the specific function that you want to modify, because its name and arguments should be enough to know what it does and that you don't need to touch it.

    Are we talking about the same thing and you'd still find that hard to understand?

I've enjoyed both books but Uncle Bob is something you grow out of. He was a bit of a cult figure at the time. Trying to actually follow the guidelines in Clean Code taught me a lot about "over-decomposition" and, ultimately, how not to write code. It reminds me it's possible to take aesthetics so far the results become ugly. Fussing over a proliferation of small functions that do only one thing is a kind of madness. Each individual function eventually does zero things. You are left sifting through the ashes of your program wondering "Where did I go wrong?"

On the meta level, these exchanges, while mildly interesting, have the vibe of debating how many angels can dance on the head of a pin. I'm reminded of the old saying: "Writing about music is like dancing about architecture." If you want to write good code, read good code. Develop a taste that makes sense to you. I don't think I'll ever read a book about code composition again.

  • > If you want to write good code, read good code.

    As a junior in the field working at a small company, I often rely on this community for guidance, and this seems the most sound advice on this thread.

    • You need to know what is good code. Opinions may vary a lot between programmers, even senior ones. The Clean Code cult would tell you to find good code there but that is the most poisonous programming book I have read.

      12 replies →

    • Just a point here, "good code" is sometimes subjective, and depends on understanding the context of what the code is doing. What you think is good code might be overly verbose to another person, or overly terse, or have poorly named variables, or not have sufficiently conservative guard clauses, or throw insufficiently-granular exceptions. What you think is confusing code might lack context for where it is in the stack and what problems it needs to solve at that layer of the stack.

      You can also read critical reviews of someone else's work, compare them with the work in question, and see if the critic's punches land or if they look like misses.

      https://qntm.org/clean

      ^ This, I thought, was a good takedown of Clean Code, highlighting some cases where Bob Martin made too many overly thin functions that lacked meat and made it hard for the reader to gain context for what the function was trying to do. [1]

      I would also say, reading "the same code" in different programming languages might get you a feel for if you prefer code to be more verbose or more terse, more explicit or more implicit. e.g. https://rosettacode.org/wiki/Globally_replace_text_in_severa...

      [1] sometimes derisively referred to as "lasagna code" or "baklava code" -- https://www.johndcook.com/blog/2009/07/27/baklav-code/

    • Agreed.

      Some other heuristics:

      * Every if statement is a chance of a bug because the code has two or more paths to follow. Keep the choice making at the business/requirements level of the code, not hidden inside lower level decomposition.

      * A switch statement that is not exhaustive (i.e. one that does not cover all possible values) is a chance of a bug, especially if there is no default case.

      Modern languages with better type systems make the second point less relevant because they require exhaustive pattern matching.
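
      A quick illustration of that second point (assuming Python 3.11+, where typing.assert_never lets a static checker enforce exhaustiveness):

          from enum import Enum, auto
          from typing import assert_never

          class PaymentState(Enum):
              PENDING = auto()
              SETTLED = auto()
              REFUNDED = auto()

          def describe(state: PaymentState) -> str:
              match state:
                  case PaymentState.PENDING:
                      return "awaiting settlement"
                  case PaymentState.SETTLED:
                      return "settled"
                  case PaymentState.REFUNDED:
                      return "refunded"
                  case _:
                      # If a new member is added and not handled above, mypy or
                      # pyright flags this line instead of the bug surfacing at runtime.
                      assert_never(state)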

      2 replies →

  • > You are left sifting through the ashes of your program wondering "Where did I go wrong?"

    Brilliantly phrased metaphor, thank you.

  • >Each individual function eventually does zero things.

    Lambda calculus, basically :)

  • It has been years since I read the book, but I'm surprised that there's so much hatred for it here. From memory it seemed like fairly harmless advice: give things good names, try to make the code readable, don't comment what the code does but why, use consistent formatting, avoid duplication.

    Other than people going overboard with empty classes and inheritance, I've not really seen a problem of people breaking down functions too far.

    Which parts are important to grow out of?

Bob's comments on... commenting... are so bizarre that I can't help but think that he just refuses to concede the point rather than admit he might have been wrong about it. Like, the paranoia around incorrect/stale comments is fairly absurd; I've been coding for 20 years across many code bases, and I can't even recall a time when I've been significantly misled by a comment that caused a significant waste of time. However, the amount of time I've wasted on unclear code that has zero comments is absolutely staggering. What really sealed the weirdness for me was his argument that this was somehow a good comment:

                                                                    X
                                                        1111111111111111111111111
           1111122222333334444455555666667777788888999990000011111222223333344444
       35791357913579135791357913579135791357913579135791357913579135791357913579
       !!! !! !! !  !!  ! !! !  !  !!  ! !!  ! !  !   ! !! !! !
     3 |||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-
     5 |||||||||||-||||-||||-||||-||||-||||-||||-||||-||||-||||-||||-
     7 |||||||||||||||||||||||-||||||-||||||-||||||-||||||-||||||-||||||-
    11 |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||-||||||||||-
    13 ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
    ...
    113||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||

That's verbatim, I'm not unfairly clipping context away from it. Like, what the hell is that supposed to tell someone?! Wouldn't it just be easier to drop a link to the algorithm, or briefly explain the algorithm, or just give a name of the algorithm so someone can look it up? Instead he just talks about taking a bike ride to understand it and making a weird picture. He also has bizarre arguments that if something can't be expressed in a programming language, it's the fault of the programming language (what?!) and that code is more understandable than English. I really find it hard to believe that he thinks these are actually good arguments, I just get the impression he does not want to concede that he was wrong about this.

  • > so bizarre ... what the hell is that supposed to tell someone?!

    I liked this bizarre comment. It was like seeing a physical geometry proof for trig. Or like thinking about primes while riding a bike for an hour.

    In 10-15 seconds this comment offered a flash of applicable intuition into primes I'd not appreciated before.

    Granted, the bulk of that was first gathering that the top 3 rows of digits were a series turned sideways (printing them rotated would have made that instant). Joys of plaintext.

    But then the pattern popped, and the code, including the optimization, made sense, but now from the "grok", with MTOWTDI.

    Neither their commentary nor their function names and comments, caused the grok. I could "accept" the assertions, but to me neither naming nor comments were intuitively self-evident the way the diagram was.

    Both of them commented on having to dwell on what the code was doing to consider refactoring. Once this flash happens, one no longer needs reference code at all, it's just another property of primes.

  • > I'm not unfairly clipping context away from it

    Yes you are: you didn't attach the surrounding code where this comment was found. That comment would make a lot more sense even just with the function name.

  • Lol, this reminded me of those engravings we put on spacecraft. Like, if we had to communicate the algorithm to an alien civilization then sure, this might be the best way to do it!

    https://en.wikipedia.org/wiki/Pioneer_plaque

    • The juxtaposition of how the original comment starts, then the appearance of the verbatim “good comment”, then the spacecraft engravings made me laugh tears.

      Especially the buildup from how bizarre the understanding of UB of comments is to actually seeing one “in the wild”.

  • I'm pretty much in agreement with you on this; however, I'm always aware of the possibility of comments being bugs. If code gets moved around, there is the very real possibility that the comment is now attached to the wrong method or line of code.

    I now comment on a method or function basis, describing what the method does. The how should be evident in the body itself.

    He doesn't seem to often concede to being wrong.

    • The how should be, but the why might not be. I think comments should explain choices, especially ones that I am likely to question when returning to the code later.

  • Actually, I presented that figure as an example of just how difficult that problem was to understand, and how (on a bike ride) I was finally able to visualize what was going on. The ascii image was presented a bit tongue in cheek.

    However, if you study that image, you might come to the same insight that I (on my bike ride) came to; and that the English comments never helped me with.

I am biased (a former coworker was an Uncle Bob fan, and was bent on doing everything by the book, with layers of abstraction, patterns, hexagonal architecture, lots of unit tests, no cutting corners, even as we did not know what exactly we wanted to build and needed an MVP ASAP) but I'll just say this: Ousterhout wrote Tcl - widely considered one of the best C codebases - besides being a professor at Stanford and having other software achievements under their belt, while Robert Martin is more like a software technology evangelist. The former is good at actual deliverables, the latter at selling.

Also Ousterhout's book on design is very easy to read and I guess I liked it because I mostly just nodded in approval while reading and there were very few things that made me stop.

  • Biased against the approach of your former coworker and thus the "Clean Code" way? I assume it did not work out well, because you needed to move fast to build an MVP before trying to do it right?

    • Yes, biased against knowing 'the best way to write software' and applying it regardless of what the current requirements and constraints are. And arguing for their position by sending people links to Uncle Bob videos for 'enlightenment'.

    • Following Clean Code is not the right way to develop software in any stage of the project. It is a few opinions of someone who has not actually written any code of substance. In addition to Clean Code being a bad approach, it can also be a very slow process.

  • Let's not forget that Uncle Bob, by the time of writing "Clean Code", had 4 decades of coding experience.

    • Do not make the mistake of the craftsman who claims to have 20 years of experience, but in truth only has 1 year of experience repeated 20 times.

    • My middle school English teacher had 4 decades of experience writing. What she wrote was lesson plans. That doesn't make her Stephen King.

    • Kenneth Copeland has been a pastor for 50 years and his theology and pastoral practice is still terrible. Years of experience is not a useful metric when you could instead look at results.

    • Software is results driven, there's no value in simply warming a seat for X YOE, talking about code instead of actually executing.

    • Are there large code bases that he has written that we know anything about?

I find the lack of discussion of type systems really surprising in these sorts of discussions and books. Effective use of type systems is a killer factor for me for creating clean, safe, readable and maintainable software designs.

When used correctly, strong static type checking makes certain kinds of bugs impossible, spares you from writing many kinds of tedious tests that often get in the way of refactoring, serves as documentation, and makes refactoring/maintenance an order of magnitude faster and safer. Even when a type checker isn't available, avoiding dynamic behaviour is very often the safer way to go, so learning how to think in that way is still beneficial.
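
As one small, hedged example of a bug class that simply stops being possible (the names are invented): give distinct types to values that share a runtime representation.

    from typing import NewType

    UserId = NewType("UserId", int)
    OrderId = NewType("OrderId", int)

    def cancel_order(order_id: OrderId) -> None:
        print(f"cancelling order {order_id}")

    cancel_order(OrderId(7))      # fine
    # cancel_order(UserId(42))    # mypy/pyright error: UserId is not OrderId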

Most of these minor topics like how big a function should be, what to name your variables, or even if you write tests before/after coding... it's like trying to come up with general rules on how to write essays, create graphic designs, or cook. "It depends" on the context and juggling different priorities each time. It's the kind of thing you only learn properly through practice (https://en.wikipedia.org/wiki/Tacit_knowledge), so there's only so much to gain in reading about it or discussing it after you've defined the no-brainer things to always do and always avoid.

  • Because the pendulum of typing hadn't swung back to static being in vogue when the Philosophy of Software Design came out. At the time you had mostly the Scala & Haskell people standing in a corner screaming until they (well, we as I was one of them) were blue in the face about reducing "certain types of bugs", and making impossible states impossible.

    Since then, everyone and their brother is on the static typing train. And from that lens you're right. It seems like an omission. Give it another 10 years and people will probably think the opposite.

  • That was exactly the approach taken by Prof. Ousterhout in setting up the class which led to this book --- rather than just having students turn in working code for a grade, the code is reviewed with the student and the student then works to make it better --- in turn, the 2nd edition of the book was informed by the experience of teaching the class and the author actually changed his position based on the experience gained.

    • Why is that convincing though? Students aren't experienced coders, aren't working in large teams, and student assignments aren't like long-term large commercial projects.

      If you mean the additions here https://web.stanford.edu/~ouster/cgi-bin/book.php, I read these and it still sounds like general rules of thumb you'll only really learn and understand by practicing a lot e.g. "In my experience, the sweet spot is to implement new modules in a somewhat general-purpose fashion" "Having good taste is an important part of being a good software designer".

      5 replies →

  • Type systems and type-based coding patterns are very hip right now, but they weren't 6 years ago. That is partly because the type systems in the main languages in use 6 years ago were hack jobs (to put it politely).

    I do expect the pendulum to swing against type systems at some point soon for the same reasons it swung against OOP: Too much heavy lifting done by something that's hidden from the programmer, encouraging people to be "too clever," etc. Like OOP, algebraic types are a tool that has to be used well, and the current users are people who really like type systems and do use them well. It's only a matter of time before the tool gets into the hands of the average programmer, and then we will see how terribly a great type system can hurt you.

Uncle Bob's insistence that functions should be 2-4 lines long is baffling to me. I don't understand how he can be taken seriously. Is there a single application in the entire world with substantial functionality that conforms to this rule?

  • I've seen this in what I call 'lasagna code' - multiple thin layers that seem to do nothing (or almost nothing) but each one is an implementation of some abstruse interface that exists in the mind of the original developer.

    Eventually, your code has to do something. Get that thing in one place where you can look at it in its whole.

  • John Carmack would disagree with Uncle Bob and John Carmack actually programs.

    My own experience is that with an IDE that can collapse a new scope in the middle of a function, you can make large functions that accomplish a lot and are very clear by writing a comment and starting a new scope.

    If something is going to be called multiple times, a new function makes sense, but this idea that anything that can eventually return a single value needs to be its own function is a giant pain that creates more problems than it solves.

    Just making a new scope in the middle of the function lets you use all the variables in the outer scope, do transformations without introducing new variables, and ultimately "return" a new variable to the outer scope.

    I've never understood why polluting namespaces with dozens or hundreds of names (most of which may not be very descriptive since naming a hundred small things is already going to be confusing) is seen as a good idea. You look at a list and you have no idea what is important and what was shoved in there to satisfy some do-nothing public speaker's arbitrary rules.
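
    Roughly what I mean, as a hypothetical Java sketch (the file format and the store() helper are made up for illustration):

       void importReport(Path file) throws IOException {
           List<String> lines = Files.readAllLines(file);   // java.nio.file.*, java.util.List

           // Parse the header. The braces keep the parser-only temporaries out of the
           // rest of the method, and an IDE can collapse the whole block behind this comment.
           String reportTitle;
           {
               String[] headerCols = lines.get(0).split(",");
               reportTitle = headerCols[0];
           }

           // Tally the data rows. Only `total` escapes this scope.
           long total = 0;
           {
               for (String row : lines.subList(1, lines.size())) {
                   total += Long.parseLong(row.split(",")[1]);
               }
           }

           store(reportTitle, total);   // store() is a made-up helper
       }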

    • The problem with collapsing is that you need to know a priori which sub-scopes are independent and hence collapsible and which aren’t. Meaning, you have to analyze the unfamiliar code first in order to know which sub-scopes you might want to collapse. And, given an already-collapsed sub-scope, due to it being collapsed you can’t see if it reads or mutates some local variable. The benefit of extracted functions is that you know that they are independent of the implementation details of the caller (i.e. can’t possibly depend on or modify local variables of the caller other than the ones passed as arguments).

      Too many arguments can become a problem. Nested functions can help here, because they allow you to move their implementation “out of the way” while still having access to shared variables. And sometimes a collection of parameters can sensibly become its own class.

      IDE affordances are fine, but I’m opposed to requiring reliance on them for reading and understanding code, as opposed to writing.

      1 reply →

  • Too often I see functions that are shells that reshuffle the arguments and pass them to another function, which also reshuffles the arguments and forwards them to another, and on and on. One was 11 layers deep.

    • I have seen such code too - with just S,K and I combinators. It wasn't readable.

    • And a lot of people don't understand how dangerous shuffling parameters is, especially in languages that do not have named parameters...

      1 reply →

  • Yes. This works, but only if the functions are pure and you are using pure function composition.

    Uncle bob doesn’t mention this.

       createSpecialString(x) =
           capitalizeFirstLetter .
           makeAllLowercase .
           addNumberSuffix .
           removeLetterA .
           removeLetterB .
           concatWithWord(x)

       capitalizeFirstLetter(a) = a[0].upper() + a[1:]
       makeAllLowercase(a) = map(a, (t) => t.lower())
       addNumberSuffix(a) = a + 3.toString()
       removeLetterA(t) = filter(t, (s) => s.lower() != "a")
       removeLetterB(t) = filter(t, (s) => s.lower() != "b")
       concatWithWord(x) = (y) => y + x

    There see? It’s mostly doable in pure functional composition where the dot represents function composition. I program like this all the time. No way anyone can pull this off while mutating state and instantiating objects.

       F . P = (x) => F(P(x))
    

    Forgive some inconsistent formatting and naming; I'm typing this on my phone.

    People who complain about this style tend to be unfamiliar with it. If you knew both procedural coding styles and a function-composition approach like this, you'd usually find this style easier, as the high-level function literally reads like English. You don't even need to look at the definitions; you already know what this complicated string-formatting function does.

    No comments needed. And neither author tells you about this super modular approach. They don't mention the critical thing: this style requires functions to be pure.

    Thus, to get most of your code following this extremely modular and readable approach… much of your code must minimize IO and state changes and segregate them away as much as possible.

    The Haskell type system, with its IO monad, pushes programmers in this direction.

    Again neither author talks about this.

    • Based on his blog, Martin has been getting into Clojure in recent years. I was kind of hoping that the experience with a functional lisp would shift some of the opinions that he previously stood by in Clean Code, but based on this discussion, it doesn't seem like it.

  • There are. A lot of Java code bases look like this.

    It is all as bad as you imagine. Functionality is spread out all over the place, so it is very difficult to reason about how it all hangs together.

    • I once fully spelunked such a Java call stack to convert some code to golang. It was amazing: there were like 5 layers of indirection over some code that actually did what I wanted, but I had to fully trace it, keeping arguments from the full call stack in mind, to figure this out, because several of the layers of indirection had the potential of doing substantially more work with much more complex dependency graphs. I ended up with a single Go file with two functions (one that reproduced the actually interesting Java code and one that called it the way it would have been called across all the layers of indirection). It was less than 100 lines and _much_ easier to understand.

    • It's always fun stepping through 412 stack frames that are all 2-line long methods to figure out where the thing you're interested in actually happened.

  • Yes, I've worked on a couple of codebases like that. It's glorious, you break everything down little by little and every step makes sense and can be tested individually. Best jobs I've had.

    • But are those steps actually doing anything that can be tested? My experience with these sorts of codebases was always that most of the functions aren't doing much other than calling other functions, and therefore testing those functions ends up either with testing exactly the same behaviour in several places, or mocking so heavily as to make the test pointless.

      Or worse, I've seen people break functions apart in such a way that you now need to maintain some sort of class-level state between the function calls in order to get the correct behaviour. This is almost impossible to meaningfully test because of the complex possible states and orders between those states - you might correctly test individual cases, but you'll never cover all possible behaviours with that sort of system.

      1 reply →

    • These large compound statements look nice if they are perfect, but when you make giant expressions without intermediate variables, it is much more difficult to test.

      When you have small expressions that have incremental results stored in variables, you can see the result in a debugger so you can see each stage.
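
      For example (hypothetical Java; Order, total() and customer() are invented names):

         // One big expression: a breakpoint only shows you the final boolean.
         boolean eligible = orders.stream()
                 .filter(o -> o.total() > 100 && !o.customer().isBlocked())
                 .count() >= 3;

         // Intermediate variables: each stage is visible in the debugger and easy to log.
         List<Order> bigOrders = orders.stream()
                 .filter(o -> o.total() > 100)
                 .toList();
         List<Order> trustedBigOrders = bigOrders.stream()
                 .filter(o -> !o.customer().isBlocked())
                 .toList();
         boolean eligibleStepwise = trustedBigOrders.size() >= 3;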

  • Like with a lot of his approach, its great for teaching people better coding skills in an educational setting, but doesn't make as much sense in the real world.

  • It is a bit weird.

    However, a friend of mine was a professional Smalltalk programmer. He claims that his median line count of methods, over his 17 year career, was 4.

    It is harder to do in other languages--it seems that C would be on the order of 10.

    Clearly it is a rule that can lead to complexity of too many methods, compromising whatever gain smaller methods give you.

    • > median line count of methods

      Auto-generate getters and setters for every instance variable and that will drag the average down. (Maybe a lot of those getters and setters should not have existed.)

      2 replies →

  • This is something that's easier to read than it is to understand. A lot of languages force you to do quite a lot in a function, and becoming blind to bloat is way too easy to do. (C++/Go/Java/etc., yep.)

    He did an example in the article of:

       void concurrentOperation() {
           lock();
           criticalSection();
           unlock();
       }

    So if you substitute criticalSection with a lot of operations, such as open file, read lines, find something, close file, I think you have a better representation of an over-bloated function.

    Scala has the language support to show what this could look like.

    What you're doing in that method is starting a critical section, doing something, and then ending a critical section. It's a good suggestion to break that with:

       def criticalSection(f: () => Unit): Unit = {
           lock()
           f()
           unlock()
       }

    Now you have a single method that does one thing and is easy to understand. Also, it's reusable.

    The original code would be used as:

       criticalSection { () => doSomething() }

    That replacement is no longer dependent on locking. Locking is layered in.

This was a fun read. I read APoSD for the first time a couple of months ago and found myself nodding enthusiastically as I read. I have a few quibbles, of course, but overall it matches my experience in how to write software that is correct, maintainable, extensible, and understandable.

I've never read CC, but I've read some of the take downs[1]. I was worried that the take downs were attacking a strawman, but no, Uncle Bob believes this stuff, including that comments are evil and you just need to read all the code and keep it in your head.

Even if that were true, the code I write is better for having written the comments, especially interface comments, because the writing helps my thinking. Moreover, it helps my code reviewers. Without a written interface, if all you have is the code and not a description of what the code is supposed to do, how can you know whether it is correct? I think most code reviewers are verifying the code against what they infer the interface to be. It helps us both to just be explicit.

[1]: https://qntm.org/clean

One of my beefs with Clean Code is its name.

There is no objective measure of code cleanliness. So if "clean code" is your goal, then you have no meaningful criteria to evaluate alternatives. (Including those pitched by Bob Martin.)

It gets worse, though. There's a subconscious element that causes even more trouble. It's obviously a good thing to write "clean code", right? (Who's going to argue otherwise?) And to do otherwise would be a moral failing.

The foundation on which "Uncle Bob" tries to build is rotten from the get-go. But it's a perfect recipe for dogmatism.

  • Honestly that kind of makes the word "clean" seem like a good fit to me. I can't say that measuring the cleanliness of my house is objective.

It's striking to me how out of touch Martin seems to be with the realities of software engineering in this transcript. Stylistic refactors that induce performance regressions, extremely long and tortured method names for three-line methods, near-total animus towards comments ... regardless of who is right/wrong about what, those takes seem like sophomoric extremism at its worst, not reasoned pragmatism that can be applied to software development in the large.

When I encounter an Uncle Bob devotee, I'm nearly always disappointed with the sheer rigidity of their approach: everything must be reduced thus, into these kind of pieces, because it is Objectively Better software design, period. Sure, standard default approaches and best practices are important things to keep in mind, but the amount of dogma displayed by folks who like Martin is really shocking and concerning.

I worry that his approach allows a certain kind of programmer to focus on like ... aesthetic, dogmatic uniformity (and the associated unproductivity of making primarily aesthetically-motivated, dogmatic changes rather than enhancements, bugfixes, or things that other coders on a project agree improves maintainability) instead of increasing their skills and familiarity with their craft.

Maintainability/appropriate factoring are subjective qualities that depend a lot on the project, the other programmers on it, and the expectations around how software engineering is done in that environment.

Pretending that's not true--that a uniform "one clean code style to rule them all" is a viable approach--does everyone involved a disservice. Seasoned engineers trying to corral complexity, new engineers in search of direction and rigor, customers waiting for engineering to ship a feature, business stakeholders confused as to why three sprints have gone by with "refactor into smaller methods" being the only deliverable--everyone.

  • The Uncle Bob thing is something I'm experiencing right now.

    I hired a friend who was a huge Uncle Bob mark, and he kept trying to flex his knowledge during interviews with other people in the company. I didn't really think much of it and told the other interviewer that it was just his personal quirk and not to worry much.

    I had him work with some junior devs on a project while I took care of something more urgent. After finishing it, I went over to take a look at how it was going on his end. I was horrified at the unnecessary use of indirection; 4 or 5 levels in order to do something simple (like a database call). Worse, he had juniors build entire classes as an interface with a database class that was "wrong".

    No practical work was done, and I've spent the past 4 weeks building the real project, while tossing out the unnecessary junk.

    I liked Clean Code when I read it, but I always assumed a lot of it was meant for a specific language at a specific time. If you are using it verbatim for a Python project in 2025, why?

  • > I worry that his approach allows a certain kind of programmer to focus on like ... aesthetic, dogmatic uniformity (and the associated unproductivity of making primarily aesthetically-motivated, dogmatic changes rather than enhancements, bugfixes, or things that other coders on a project agree improves maintainability) instead of increasing their skills and familiarity with their craft.

    Funny, I find the opposite. In my experience people that are willing to take a "dogmatic" position on code style are those who are able to actually get on with implementing features and bugfixes. It's the ones who think there's a time and place for everything and you need to re-litigate the same debates on every PR who tie themselves in knots getting nothing done.

    Do I agree with absolutely everything Martin writes? In principle, no. But I'd far rather work on a codebase and team that agrees to follow his standards (or any similar set of equally rigid standards, as long as they weren't insane) than one that doesn't.

  • I'm not familiar with the Clean Code book etc; my introduction is the article. UB seems to be advocating consistently for patterns that are not my cup of tea! For example: Functions sometimes make sense as 2-3 lines. Often 5-20. Less often, but not rarely, much more than that!

    I'm also a fan of detailed doc comments on every module and function, and many fields/variants as well. And, anything that needs special note, is unintuitive, denotes units or a source etc.

    • Function length also depends on language. Every line of one language requires three lines in another if the former has implicit error handling and the latter explicit. But I find the cognitive load of the two to be similar.

      I am also okay with 1000 line functions where appropriate. Making me jump around the code instead of reading one line at a time, in a straight line? No thanks!

  • > It's striking to me how out of touch Martin seems to be with the realities of software engineering in this transcript

    It was always like that. And Fowler is the same with his criticism of the anemic domain model. But software engineering is no exception to having a mass of people believing someone without thinking for themselves.

    • > It was always like that. And Fowler is the same with his criticism of the anemic domain model.

      What leads you to disagree with the fact that anemic domain models are an anti-pattern?

      https://martinfowler.com/bliki/AnemicDomainModel.html

      I think it's obvious that his critique makes sense if you actually take a moment to try to learn and understand what he says and where he comes from. Take a moment to understand what case he makes: it's not object-oriented programming. That's it.

      See, in an anemic domain model, instead of objects you have DTOs that are fed into functions. That violates basic tenets of OO programming. It's either straight-up procedural programming or, if you squint hard enough, functional programming. If you focus on OO as a goal, it's clearly an anti-pattern.
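
      To make the contrast concrete, here is a tiny hypothetical Java sketch (the names are invented): the anemic version keeps the data dumb and puts the rule in a service, while the rich version keeps the invariant next to the data it protects.

         // Anemic: a bag of public state plus a service that owns the rules.
         class AccountDto { public long balanceCents; }
         class AccountService {
             void withdraw(AccountDto account, long amountCents) {
                 if (account.balanceCents < amountCents) throw new IllegalStateException("insufficient funds");
                 account.balanceCents -= amountCents;
             }
         }

         // Rich domain object: the rule cannot be bypassed by poking at the fields.
         class Account {
             private long balanceCents;
             void withdraw(long amountCents) {
                 if (balanceCents < amountCents) throw new IllegalStateException("insufficient funds");
                 balanceCents -= amountCents;
             }
         }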

      His main argument is summarized in the following sentence:

      > In essence the problem with anemic domain models is that they incur all of the costs of a domain model, without yielding any of the benefits.

      Do you actually argue against it?

      Listen, people like Fowler and Uncle Bob advocate for specific styles. This means they have to adopt a rhetorical style which focuses on stressing the virtues of a style and underlining the problems solved by the style and created by not following the style. That's perfectly fine. It's also fine if you don't follow something with a religious fervor. If you have a different taste, does it mean anyone who disagrees with you is wrong?

      What's not cool is criticizing someone out of ignorance and laziness, and talking down on someone or something just because you feel that's how your personal taste is valued.

      34 replies →

  • I haven't seen a lot of evidence that Martin really has the coding chops to speak as authoritatively as he does. I think when you become famous for giving advice or being an "expert", it can be difficult to humble yourself enough to learn new things. I know personally I've said a lot of dumb things about coding in the past; luckily none of those things were codified into a "classic" book.

    What strikes me about the advice in Clean Code is that the ideas are, at best, generally unproven (i.e., just Martin's opinion), and at worst, justify bad habits. Saying "I don't need to comment my code, my code speaks for itself" is alluring, but rarely true (and the best function names can't tell you WHY a function/module is the way it is). Chopping up functions and moving things around looks and feels like work, except nothing gets done, and frankly it often strikes me as being the coding equivalent of fidget spinners (although at least fidget spinners don't screw up your history). Whenever Martin is challenged on these things he just says to use "good judgement", but the code and advice are supposed to demonstrate good judgement and mostly they do not.

    Personally I wish people would just forget about Clean Code. You're better off avoiding it or treating it as an example of things not to do.

    • I watched some talks he gave 15 years ago and what struck me was that he would use analogies to things like physics that were just objectively incorrect. He was confidently talking about a subject he clearly didn't understand at even an undergraduate level.

      Then for the rest of the talk he would speak just as confidently about coding. Why would I believe anything he has to say when his confidence is clearly not correlated with how well he understands the material?

    • > I haven't seen a lot of evidence that Martin really has the coding chops to speak as authoritatively as he does

      From what I can deduce, his major coding work was long in the past, and maybe in C++.

  • I read Clean Code when I started out my career and I think it was helpful for a time when I worked on a small team and we didn't really have any standards or care about maintainability but were getting to the point where it started mattering.

    Sure, dogmatism is never perfect, but when you have nothing, a dogmatic teacher can put you in a good place to start from. I admired that he stuck to his guns and proved that the rules he laid out in clean code worked to make code more readable in lots of situations.

    I don't know anything about him as a person. I never read his other books, but I got a lot out of that book. You can get a lot out of something without becoming a devotee to it.

    EDIT: I think even UB will agree with me that his dogmatism was meant as an attitude, something strong to hit back against a widespread lack of rigor or care about readable code, vs a literal prescription that must be followed. See his comment here:

    > Back in 2008 my concern was breaking the habit of the very large functions that were common in those early days of the web. I have been more balanced in the 2d ed.

    And maybe I was lucky, but my coding life lined up pretty neatly with the time I read Clean Code. It was an aha moment for me and many others. For people who had already read about writing readable code, I'm sure this book didn't do much for them.

  • I'm going to have to admit to never having read Clean Code. It's just never appealed to me. I did read some of UBs articles a fair number of years ago. They did make me think - which I'd say is a positive and along the lines you are putting forwards.

    Rigidity and "religious" zeal in software development is just not helpful I'd agree.

    I do, however, love consistency in a codebase, a point discussed in "Philosophy of Software Design". I always boil this down to: even if I'm doing something wrong or suboptimal, if I do it consistently, then once I realise (or once it matters) I only have one thing to change to get the benefit.

    It's the not being able to change regardless, in the face of evidence, that separates consistency and rigidity (I hope)!

  • I don't know why people take UB seriously. He never provided proof of any work experience - he claims to have worked for just a single company that... never shipped any code into production. Even his code examples on GitHub are just snippets, not even a to-do app (well, I think that his style of "just one thing per function" works as a self-fulfilling prophecy).

    Maybe people like him are the reason why we have to do leet code tests (I don't believe he would be capable of solving even an easy problem).

    • Uncle Bob is one of the core contributors to Fitnesse, which had moderate success in the Java popularity era back in the day.

      Also, you do understand that people worked as software engineers even before GitHub became popular, or before open sourcing was a thing to begin with, don't you? So if someone is 60+ years old, chances are that most of his work has never been open sourced, and that his work targeted use cases, platforms, and services which have no utility in this age any more.

      None of which has anything to do with how good a software engineer someone is.

      And finally, do you have any proof that he never shipped any code into production?

      8 replies →

  • Another example of not quite pragmatic advice is Screaming Architecture. If you take some time to think about it, it’s actually not a good idea. One of the blog posts I’m working on is a counter argument to it.

  • > It's striking to me how out of touch Martin seems to be with the realities of software engineering in this transcript. Stylistic refactors that induce performance regressions, extremely long and tortured method names for three-line methods, near-total animus towards comments ... regardless of who is right/wrong about what, those takes seem like sophomoric extremism at its worst, not reasoned pragmatism that can be applied to software development in the large.

    I think you're talking out of ignorance. Let's take a moment to actually think about the arguments that Uncle Bob makes in his Clean Code book.

    He argues in favor of optimizing your code for clarity and readability. The main goal of code is to help a programmer understand it and modify it easily and efficiently. What machines do with it is of lower priority. Why? Because a programmer's time is far more expensive than any infrastructure cost.

    How do you make code clear and easy to read? Uncle Bob offers his advice. Have method names that tell you what they do, so that programmers can easily reason about the code without even having to check what the function does. Extract low-level code into higher-level methods so that each function call describes what it does at the same level of detail. Comments are a self-admission that you failed to write readable code, and you can fix your failure by refactoring code into self-descriptive member functions.
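
    As a rough sketch of that extraction advice (hypothetical names; repository and mailer are assumed collaborators, nothing from the book itself):

       // Before: one method mixing levels of detail.
       void register(User user) {
           if (user.email() == null || !user.email().contains("@")) {
               throw new IllegalArgumentException("invalid email");
           }
           repository.save(user);
           mailer.send(user.email(), "Welcome!");
       }

       // After (the same method, rewritten): the top level reads as a sequence of named steps.
       void register(User user) {
           validateEmail(user);
           persist(user);
           sendWelcomeEmail(user);
       }

       private void validateEmail(User user) {
           if (user.email() == null || !user.email().contains("@")) {
               throw new IllegalArgumentException("invalid email");
           }
       }

       private void persist(User user) { repository.save(user); }

       private void sendWelcomeEmail(User user) { mailer.send(user.email(), "Welcome!"); }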

    Overall, it's an optimization problem where the single objective is defined as readability. Consequently, it's obvious that performance regressions are acceptable.

    Do you actually have any complaint about it? If you read what you wrote, you'll notice you say nothing specific or concrete: you only throw blanket ad hominems that sound very spiteful, but are devoid of any substance.

    What's the point of that?

    > Maintainability/appropriate factoring are subjective qualities that depend a lot on the project, the other programmers on it, and the expectations around how software engineering is done in that environment.

    The problem with your blend of arguments is that guys like you are very keen on whining and criticizing others for the opinions they express, but when lightly pressed on the subject you show that you actually have nothing to offer in the way of an alternative or a guideline or anything at all. Your argument boils down to "you guys have a style which you follow consistently, but I think I have a style as well and somehow I believe my taste, which I can't even specify, should prevail". It's fine that you have opinions, but why are you criticizing others for having them?

    • > Comments are a self-admission that you failed to write readable code, and you can fix your failure by refactoring code into self-descriptive member functions

      This may be true in some cases, but I don't see a non-contrived way for code to describe why it was written the way it was, or why the feature is implemented the way it is. If all comments are bad, then this kind of documentation needs to be written somewhere else, where it will be disconnected from the implementation and most probably forgotten.

      5 replies →

    • > The main goal of code is to help a programmer understand it and modify it easily and efficiently. What machines do with it is of lower priority.

      This mentality sounds like a recipe for building leaky abstractions over the inherent traits of the von Neumann architecture and, more recently, massive CPU parallelism - bringing with it data races, deadlocks, and poor performance. A symptom of this mentality is also that modern software isn't nearly as fast as it should be, considering the incredible performance gains in hardware.

      > Comments are a self-admission that you failed to write readable code

      I'm not buying this. It's mostly just not possible to compress the behavior and contract of a function into its name. If it were, then the compiler would auto-generate code out of method names. You can use conventions and trigger words to codify behavior (e.g. bubbleSort, makeSHA256), but that only works for well-known concepts. At module boundaries, I'm not interested in the module's inner workings, but in its contract. And any sufficiently complex module has a contract that is so complex that comments are absolutely required.

      2 replies →

    • Code written the way Uncle Bob suggests is not easier to read and understand; it is harder, in my opinion and that of many others. Since the disagreement starts from this point, any further discussion is impossible.

      4 replies →

    • > Comments are a self-admission that you failed to write readable code, and you can fix your failure by refactoring code into self-descriptive member functions

      # This is not the way I wanted to do this, but due to bug #12345 in dependency [URL to github ticket] we're forced to work around that.

      # TODO FIXME when above is done.

      Oh no, I so failed at making self-descriptive code. I'm sorry, I totally should've named the method DoThisAndTHatButAlsoIncludeAnUglyHackBecauseSomeDubfuckUpstreamShippedWithABug.

    • > What machines do with it is of lower priority. Why? Because a programmer's time is far more expensive than any infrastructure cost.

      This assumes that code runs on corporate infrastructure. What if it runs on an end user device? As a user I certainly care about my phone's battery life. And we aren't even talking about environmental concerns. Finally, there are quite a few applications where speed actually matters.

      > Comments are a self-admission that you failed to write readable code, and you can fix your failure by refactoring code into self-descriptive member functions.

      Self-explaining code is a noble goal, but in practice you will always have at least some code that needs additional comments, except for the most trivial applications. The world is not binary.

    • I'm not going to spend a long time responding to your comment, since it seems accusatory and rude; if you modify it to be more substantive I'll happily engage more.

      The one specific response I have is: it's not that I

      > say nothing specific or concrete: [I] only throw blanket ad hominems that sound very spiteful

      ...rather, it's that I'm criticizing Martin's approach to teaching rather than his approach to programming. I expand on that criticism more in an adjacent comment, here: https://news.ycombinator.com/item?id=43171470

  • > sheer rigidity

    That looks more like a communication style difference than anything else. Uncle Bob's talks and writing are prescriptive -- which is a style literally beaten into me back when I was in grade school, since it's implied just from the fact that it's you doing the speaking that you're only describing your opinions and that any additional hedging language weakens your position further than you actually intend.

    If you listen to him in interviews and other contexts where he's explicitly asked about dogmatism as a whole or on this or that concept, he's very open to pragmatism and rarely needs much convincing in the face of even halfway decent examples.

    > animus toward comments

    Speaking as someone happy to drop mini-novels into the tricky parts of my code, I'll pick on this animus as directionally correct advice (so long as the engineer employing that advice is open to pragmatism).

    For a recent $WORK example, I was writing some parsing code and had a `populate` method to generate an object/struct/POCO/POJO/dataclass/whatever-it-is-in-your-language, and as it grew in length I started writing some comments describing the sections, which for simplicity's sake we'll just say were "populate at just this level" and "recurse."

    If you take that animus toward comments literally, you'll simply look at those comments and say they have to be removed. I try to be pragmatic, and I took it as an opportunity to check if there was some way to make the code more self-evident. As luck would have it, simply breaking that initial section into a `populate_no_recurse` method created exactly the documentation I was looking for and also wound up being helpful as a meaningful name for an action I actually wanted to perform in a few places.

    That particular pattern (breaking a long method into a sequence of named intermediate parts) has failure modes, especially in the hot path in poorly optimized runtimes (C#, Java, ..., Python, ...), and definitely in future readability if employed indiscriminately, but I have more than enough experience to be confident it was a good choice here. The presence in my mind of some of Uncle Bob's directionally correct advice coloured how I thought about my partial solution and made it better.

    > other animus

    - Stylistic refactors that induce performance regressions can be worth it. As humans, we're pre-disposed to risk avoidance, so let's look at an opposite action with an opposite effect: How often are you willing to slow down feature velocity AND make the code harder to maintain just to squeeze out some performance (for a concrete example, suppose there's some operation with space/time/bandwidth tradeoffs which imply you should have a nasty recursive cte in your database to compute something like popcount on billion-bit-masks, or even better just rewrite that portion of the storage layer)? My job is 80% making shit faster and 10% teaching other people how to make shit faster, but there are only so many hours in the day. I absolutely still trade performance for code velocity and stability from time to time, and for all of those fledgeling startups with <1M QPS they should probably be making that trade more than I do (assuming it's an actual trade and not just an excuse for deploying garbage to prod).

    - The "tortured method names" problem is the one I'm most on the fence about. Certainly you shouldn't torture a long name out of the ether if it doesn't fit well enough to actually give you the benefits of long names (knowing what it does from its name, searchability), but what about long names which do fit? For large enough codebases I think long names are still worth the other costs. It's invaluable to be able to go from some buggy HTML on some specific Android device straight to the one line in a billion creating the bug, especially after a couple hiring/firing sessions and not having anybody left who knows exactly how that subsystem works. I think that cutover point is pretty high though. In the 100k-1M lines range there just aren't enough similar concepts for searchability to benefit much from truly unique names, so the only real benefit is knowing what a thing does just from its name. The cost for long names is in information density, and when it's clear from context (and probably a comment or three) I'm fine writing a numeric routine with single-letter variable names, since to do otherwise would risk masking the real logic and preventing the pattern-recognition part of your brain from being able to help with matters. HOWEVER, names which properly tell you what a thing does are still helpful (the difference between calling `.resetRetainingCapacity()` and `.reset()` -- the latter you still have to check the source to see if it's the method you want, slowing down development if you're not intimately familiar with that data structure). I still handle this piece of advice on a case-by-case basis, and I won't necessarily agree with my past self from yesterday.

    > "Uncle Bob devotees" vs "Uncle Bob"

    This is maybe the core of your complaint? I _have_ met a lot of people who like his advice and aren't very pragmatic with it. Most IME are early-career and just trying to figure out how to go from "I can code" to "I can code well," and can therefore be coached if you have well-reasoned counter-examples. Most of the rest IME like Uncle Bob's advice but don't code much, and so their opinions are about as valuable as any other uninformed opinion, and I'm not sure I'd waste too much time lamenting that misinformation. For the rest of the rest? I don't have a large enough sample I've interacted with to be very helpful, but unrelenting dogmatism is pretty bad, and people like that certainly exist.

    • Thanks for the thoughtful response. I generally don't want to get into the specifics of what Martin advocates for. Whether to prefer or eschew comments, give methods a particular kind of names, accept a performance penalty for a refactor--those are all things that are good or bad in context.

      I think a lot of engineers hear "there's a time and a place" or "in context" and assume that I'm saying that the approach to coding can or should differ between every contribution to a codebase. Not so! It's very important to have default approaches to things like comments, method length, coupling, naming, etc. The default approach that makes the most sense is, however, bounded by context, not Famous Author's One True Gospel Truth (or, in many cases, Change-Averse Senior Project Architect's One True Gospel Truth). The "context boundary" for a set of conventions/best practices is usually a codebase/team. Sometimes it's a sub-area within a codebase. More rarely, it's a type of code being worked on (e.g. payment processing code merits a different approach from kleenex/one-off scripts). Within those context boundaries, it's absolutely appropriate to question when contributors deviate from an agreed-upon set of best practices--they just might not be Martin's best practices.

      Rather, the core of my critique is that Martin's approach lacks perspective. Perspective/pragmatism--not some abstract notion of "skill level in creating well-factored code according to a set of rules"--is the scarce commodity among the intermediate-seeking-senior engineers that Martin's work is primarily marketed toward and valued by.

      From there, I see two things wrong with Martin's stance in the Ousterhout transcript:

      "Out of touch" was not an arbitrarily chosen ad hominem. When Ousterhout pressed Martin to improve and work on some code, Martin's output and his defense of it were really low-quality. I can tell they're really low quality because, in spite of differing specific opinions on things like method length/naming/SRP, almost everyone here, and everyone to whom I've shown that transcript, finds something seriously wrong with Martin's version, while the most stringent critique of Ousterhout's code I've seen mustered is "eh, it's fine, could be better". That, and Martin's statements around the "why" of his refactors, indicate that the applicability of his advice for material code quality improvements in 2025 (as opposed to, say, un-spaghettification of 2005 PHP 5000-line god-object monstrosities) is in doubt. On its own, that inapplicability wouldn't be a massive problem, which brings me to...

      Second, Martin is a teacher. When you mention '"Uncle Bob devotees" vs "Uncle Bob"' and I talk about the rigidity I see in evidence among people that like Martin, I'm talking about him as a teacher. This isn't a Torvalds or Antirez or Fabrice Bellard-type legendary contributor discussing methodological approaches that worked for them to make important software. Martin is first and foremost (and perhaps solely) a teacher: that's how he markets himself and what people value him for. And that's OK! Teachers do not have to be contributors/builders to be great teachers. However, it does mean that we get to evaluate Martin based on the quality of his pedagogical approach rather than holding the ideas he teaches on their own merit alone. Put another way, teachers say half-right things all the time as a means of saving students from things they're not ready for, and we don't excoriate them for that--not so long as the goal of preparing the students to understand the material in general (even if some introductory shortcuts need to later be uninstalled) is upheld.

      I think Martin has a really poor showing as a teacher. The people his work resonates the most strongly with are the people who take it to the most rigid, unhealthy extremes. His instructional tone is absolute, interspersed with a few "...but only do this pragmatically of course" interjections that he himself doesn't really seem to believe. His material is often considered, in high-performing engineering departments, to be something that leaders have to check back against being taken too far rather than something they're happy to have juniors studying. Those things speak to failures as a teacher.

      Sure, software engineers are often binary thinkers prone to taking things to extremes--which means that a widely regarded teacher of that crowd is obligated to take those tendencies into account. Martin does not do this well: he proposes dated and inappropriate-in-many-cases practices, while modeling a stubborn, absolutist tone in his instruction and responses to criticism. Even if I were to give his specific technical proposals the greatest possible benefit of the doubt, this is still bad pedagogy.

I'm currently dealing with one of those codebases representative of the consequences of blindly following "Clean Code", et al.

My experience has taught me that you never want to be the first person to recommend a rewrite. Since I am a mere contractor on this one, I am strongly inclined to let it unwind on its own. There seems to be a lot of ego embedded in those pointless data access layer wrappers. I'd hate to get on someone's bad side right now. The market is quite rarified.

  • A rewrite is only useful if you get something that you can't get otherwise. Mixing Rust and C++ in a project is hard but doable - odds are that if you try it you will find enough "friction" that eventually it will be worth rewriting to get rid of one.

john ousterhout's book is the only book on how to write software that has any actual evidence behind it. i highly recommend it as the only book to read on how to write code. and uncle bob, well, best to avoid his stuff as much as possible. clean code takes away about 5 years from every dev's life, as they think they need to read it to become an intermediate developer, and only once they realize that is not the way can they finally grow.

  • That book really poisons the mind. Even if there's some good things to learn in there, it's stashed among a lot of advice that is either plain bad or needs asterisks. But there aren't really any asterisks and instead it presents what look like rules that you shouldn't be breaking if you want to be a good programmer.

    When I first read the book I'd already been programming for 10 years, but I was in my first job out of college. I'd heard a lot about the book and so I trusted what it had to say. I let it override how I would have written code because I figured coding professionally was just far different than what I would consider the best way to write code.

    Interestingly, 5 years sounds about right for how long it took me to finally start trusting my own judgement. I think it was a combination of being more confident in myself but also because I was doing larger projects and it was more frequent that I was putting down a project and then coming back a couple months later. That's how I was able to see how bad the code was to work with once my mental model of it had flittered away.

    Now I take a much less strict approach to my code and I find it a lot better to work with later.

    • > instead it presents what look like rules that you shouldn't be breaking if you want to be a good programmer.

      I see this a lot, especially among more junior programmers. I think it likely stems from insecurity with taking responsibility for making decisions that could be wrong. It makes sense, but I can’t help but feel it is failing to take responsibility for the decisions that are the job of engineering. Engineering is ultimately about choosing the appropriate tradeoffs for a specific situation. If there was a universally “best” solution or easy rule to follow, they wouldn’t need any engineers.

      3 replies →

  • "john ousterhout's book is the only book on how to write software that has any actual evidence behind it."

    This is false and hopefully no one takes you seriously when they read that. There are books about empirical methods for software engineering, for example, which actually seek to find real evidence for software engineering techniques. See Greg Wilson's work, for example.

    There are lots of other architecture/design books that use real world systems as examples. "Evidence" is definitely lacking in our field, but you can find it if you try.

    • Greg Wilson indeed is tremendously helpful in facilitating "the industry" to think about our craft:

      https://github.com/gvwilson

      edit: wow, in his project "It will never work in theory" he's fairly sober about the ability of "the industry" to reflect on "the craft"

      https://neverworkintheory.org/

      > about the project:

      > People have been building complex software for over sixty years, but until recently, only a handful of researchers had studied how it was actually done. Many people had opinions—often very strong ones—but most of these were based on personal anecdotes or the kind of "it's obvious" reasoning that led Aristotle to conclude that heavy objects fall faster than light ones.

      in the 2024 retrospective:

      > Conclusion

      > The comedian W.C. Fields once said, “If at first you don’t succeed, try, try again. Then quit. There’s no point in being a damn fool about it.” Thirteen years after our first post, it is clear that our attempts to bridge the gulf between research and practice haven’t worked. We look forward to hearing what actionable plans others have that will find real support from both communities.

      1 reply →

  • 5 years is about right.

    when i found a copy of clean code in a bookstore, it only took me a few minutes to put it back. I had read John Ousterhout's book prior.

  • In typical HN commenter smugness: it took me less than that to realise it was bullshit. It didn't make things clear; it made them more abstract and more resistant to change. Similarly with DDD. Just build what you need and deal with the consequences of inevitable change later. No one cares if you miraculously, perfectly modelled the "definitely the final form" of your domain from day 0.

    Oh, and TDD?! Ah yes, those perfectly defined unit test cases you write for implementation details. The best comment I read recently (sorry, I can't find it) was something akin to "The first unit test I write is to validate the intended side effects through properly exercising the associated mocks".

    As with everything there is no “best way” to do something, but in software engineering… there are far more bad “best ways” than best “best ways”

    • DDD is a good way to extract the business logic from the implementation.

      By modelling the business you raise the business logic up to a 1st class element.

      By implementing the business objects you encapsulate their functionality in the business.

      The words "Account" or "Outstanding Balance" have business meanings. Modelling them allows you to express the business logic explicitly.

      It also allows you to create tests that are related to that business logic, not the implementation.

      You can still "build what you need and deal with the consequences of inevitable change later".

      Model what you need to build, the business is going to have to make changes to that model to implement their changes, IT systems are a detail.

      Change by extending and changing the DDD models.

      To reverse the question, how do you write code that "does what you need" without understanding the domain?

      1 reply →

As someone who's recently started to read A Philosophy of Software Design, I have to say that a lot of the points the author makes are things that I've come to learn with experience, which feels pretty good. As opposed to Clean Code, which I read when I was starting out; at that time it felt good to have some guidelines, and I still think having some guidelines is better than having nothing at all.

I think you grow out of that advice very soon, because it's not very practical, it feels out of touch. The result is not code that is easier to read, quite the contrary. I think the Java world has been influenced for worse by him.

But I don't have anything against him, as other comments say, the problem is dogmatism and trying to follow these authors blindly instead of thinking about it.

Some software gurus really grind my gears, and Robert Martin is one of them. When confronted with bad advice he gave, he's quick to say it's not meant to be taken literally. Then gurus like Kent Beck say that you cannot criticize their approaches unless you implement them exactly as they say. So, while this is not exactly a paradox (different people with different opinions), I feel like gurus make their livings on unfalsifiable claims while shaping the world of software engineering.

So, some kudos to Robert for accepting criticism and discussing it, but no cigar for downplaying his own advice when confronted - I also recall a different discussion where someone confronted his statement that "if you don't practice TDD you're not a professional", and his answer was "it was not meant to be taken seriously".

These people had great ideas but they should be more critical of themselves, eg "here's when not to apply this", "here's where to bend this", not "you're doing it wrong" or "don't take it literally".

  • The more I tried to implement Clean Code, the more it helped me appreciate Worse is Better approach, which comes from an observation, not a dogma: while all programmers strive for simplicity both in implementation and interface, when they come into conflict, simple implementation usually wins over simple interfaces, as they are easier to modify.

    https://dreamsongs.com/RiseOfWorseIsBetter.html

  • I don't understand how that works on people. Dog whistling, "Hey, hey, leave me to my grift."

The arguments from staunch clean code zealots have wasted so much time on PRs that I have lost count. Hours and hours, and sometimes weeks - PRs full of ideological discussions about something that neither the underlying machine nor the end user cares about.

Multiply that across the industry and that probably easily reaches in hundreds of millions of dollars productivity wasted.

Ps: Not advocating cowboy coding or spaghetti code either.

Plenty of people are ragging (justifiably) on Clean Code, but I really admire by contrast Ousterhout's commitment to balanced principles and in particular learning from non-trivial examples. Philosophy of Software Design is a great and thought-provoking read.

I find it funny how much people obsess over Clean Code. In my opinion Robert Martin's Clean Architecture is a much more valuable and realistic idea than all this madness about 3 line functions, no comments, do one thing, etc. I would take the ugliest code that followed Clean Architecture over any "Clean Code" that didn't bother sensibly separating business logic and I/O.

I don't like the guy very much, but for web development even just mostly following Clean Architecture does so much to keep things from devolving into chaos long term.

One method of commenting that has paid off for me the most was inserting links to:

1. the online documentation of the function being called

2. the instruction documentation for an instruction being generated or inserted

3. the issue that the code fixes

4. the specification of what the function is trying to implement

Then I fixed my text editor to enable click on those links.
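
For example (the links below are placeholders, not real URLs, and the function is just an illustration):

    // Function docs:   https://example.com/libfoo/docs/add-exact
    // Instruction spec: https://example.com/isa/v3/add-with-overflow-check
    // Fixes issue:      https://example.com/tracker/issues/NNNN
    // Implements spec:  https://example.com/spec/section-4.7
    long addChecked(long a, long b) {
        return Math.addExact(a, b);   // placeholder body
    }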

  • I also fixed the disassembler to also add a clickable link to the instruction spec page for each instruction.

    • I did that back in the day (before we could click on anything) about what the compiler was thinking as it generated the code. That was fun.

Something I find odd about Uncle Bob's style is the preference for reading and modifying shared state over pure functions that take args. It makes me do a double take when I read a method registerTheCandidateAsPrime() (taken from UB's rewrite) that doesn't take a candidate arg.

How would you unit test those methods? You'd have to directly set the field values, then call the method, then assert on the fields. If the answer is "you don't unit test private methods" then that's completely fine, because I agree with that (perhaps this is implicit from the private keyword, I don't know Java). But I'm struggling to imagine how you would get to those private methods with such a strict adherence to TDD as Bob recommends. Methods like increaseEachPrimeMultipleToOrBeyondCandidate() are quite complex, and would be tricky to build up using TDD if you couldn't exercise them directly.

If nothing else, surely Bob's approach is not thread safe. Call PrimeGenerator3.generateFirstNPrimes() concurrently and they'll trample all over each other. John Ousterhout's stateless version doesn't have that problem.
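
To illustrate the shape of the difference, here is a hypothetical sketch (not the actual code from either version; the method names only riff on the ones quoted above, and java.util imports are omitted):

    // Shared-state style: methods communicate through fields, so tests have to set up
    // and inspect internal state, and concurrent calls trample each other.
    class StatefulPrimes {
        private int candidate;
        private final List<Integer> primes = new ArrayList<>();

        List<Integer> generateFirstNPrimes(int n) {
            primes.clear();
            for (candidate = 2; primes.size() < n; candidate++) {
                registerTheCandidateAsPrimeIfItIsOne();   // no arguments; reads and writes the fields
            }
            return primes;
        }

        private void registerTheCandidateAsPrimeIfItIsOne() {
            for (int p : primes) {
                if (candidate % p == 0) return;
            }
            primes.add(candidate);
        }
    }

    // Stateless style: everything flows through parameters and return values,
    // so it is trivially unit-testable and safe to call from multiple threads.
    static boolean isPrime(int candidate, List<Integer> smallerPrimes) {
        for (int p : smallerPrimes) {
            if (candidate % p == 0) return false;
        }
        return true;
    }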

For anyone like me who at first skipped over this article because it seems from the title that someone just compared two approaches:

No, it’s an actual debate between the actual John and Bob. Them debating each other. It’s an amazing read.

I can just tell that John Ousterhout works with much better developers on average than UB and that probably informs their biases.

  • He also works with a lot more students, with student-sized projects and problems and code lifetimes. He's used his book for classes, I think it's on a level appropriate for a freshman.

    Both books are bad, but APOSD is my most disliked technical book ever. CC is at least interesting as an exercise to see that critics are way too uncharitable. Kernighan and Pike's The Practice of Programming is far better than either. And https://antirez.com/news/124 is one of the few good discourses on comments out there, something as a profession we care way too much about when the cost of doing it "wrong" is typically so low.

    • What's there to dislike so much in APOSD?

      The book struck me as giving mostly reasonable advice, none of which was overly prescriptive. None of the things I disagreed with struck me as egregious.

      9 replies →

    • While reading APoSD, one of my thoughts was that it walks up to, but never gets to the point of advocating for Literate Programming, and that resolving how the author feels about that presentation would make for a better and clearer text.

      Apparently, there is something of a tension at Stanford: freshmen are taught to keep methods/functions short, while the course on software design has CS140 as a prerequisite, which in turn requires CS 107 or EE 108B, and CS107 requires CS106B, so it probably couldn't be taken until almost halfway through a four-year degree (and there is a note on the course page that preference will be given to those graduating in the near term).

      That said, there is value in laying out basic principles and premises, _and_ the experiences which in turn support them. Reading through your link, it seems to line up well with my understanding of recommendations for comments in APoSD, which makes one wonder how it could be made to work as a text for an introductory course in some language which was approachable by beginners.

There’s obviously a balance. Having worked in both environments, I tend to appreciate the code of someone who at least read the books, but treats it as suggestion rather than gospel. Contrast to someone who never read the books, has no clue what’s “good” and hacks everything.

On the one hand, the books are popular because a lot of people who read them think they make good points, and share that view. On the other hand, just because something is popular doesn't make it right! I think this is where AI gets so much wrong. GIGO! If you base all your code on whatever is most common, are you really, really sure that common pattern is the best? AI, and these book evangelists, often have no clue. Just parroting others.

I’d rather deal with “principles” as opposed to “rules” every time. Glean the principles from the books, and at least try to write clean code!

Instead of "Clean Code" I'd really suggest people read either

  - Code Complete
  - The Pragmatic Programmer

https://en.wikipedia.org/wiki/Code_Complete

https://en.wikipedia.org/wiki/The_Pragmatic_Programmer

  • I wouldn't recommend Code Complete today; I think The Practice of Programming covers most of the same material, is much shorter, is much better written, and isn't tainted by McConnell's later embrace of snake-oil methodologies, some of which made it into the second edition of CC. TPOP didn't exist when CC changed my world.

    • It's been a hot minute since I've read Code Complete. I don't have it on hand, but I'm pretty sure it was the second edition as it has the gray cover. And I'm pretty sure I got the second edition closer to when it was published than today.

      I remember it being pretty decent back in the day. I can't remember any takes that were too hot in it. Honestly, I can only remember a general sense of satisfaction(?) with the book. If you were to ask me what exactly I took from Code Complete and applied in my job today, I couldn't tell you.

      What would you classify as "snake oil" in it? Do they recommend Hungarian notation or something weird?

      4 replies →

  • I read Clean Code and don't remember a single thing from it. To be fair it was a while ago.

    But the SOLID and Clean Architecture principles inform me almost daily.

I’ve come full-circle back to my junior engineer attitude with respect to coding “best practices”: Avoid anything resembling dogma.

"Uncle Bob" is not a software engineer (as he calls himself) and anything he says on the subject is theoretical at best, and snake oil at worst. Can anyone point to any substantial piece of code that he wrote before he can be taken seriously. The code pieces at his GitHub repos, other than style, etc., are just simplistic stuff.

It is probably OK to be thinking on issues related to a field (i.e., software engineering) without being a practitioner in the field, but producing fads-du-jour and selling them as solid (pun intended) theories and expecting to be taken seriously is just ludicrous to me.

I'm surprised that Ousterhout doesn't point out the huge problem introduced with the PrimeGenerator3 refactor: It stores state in static (!) fields, so it's completely unusable in the presence of threads, unless you add a global lock.

Even if Uncle Bob thinks tiny methods are great, why would he introduce the pseudo-constructor "initializeTheGenerator" and make everything static if he needs state? If the helper methods were instance methods instead, the static "generateFirstNPrimes" method could simply construct a new instance to store the state.
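
A sketch of that shape (my own code and naming, not the book's; the algorithm inside is simplified to plain trial division just to keep the example short): the public static entry point constructs a fresh instance per call, the helpers are instance methods, and concurrent callers never share state.

    public final class PrimeGeneratorSketch {
        private final int[] primes;
        private int count;

        private PrimeGeneratorSketch(int n) {
            primes = new int[Math.max(n, 0)];
        }

        // The only static member: each call gets its own instance,
        // so there is no shared mutable state between threads.
        public static int[] generateFirstNPrimes(int n) {
            PrimeGeneratorSketch generator = new PrimeGeneratorSketch(n);
            generator.generate();
            return generator.primes;
        }

        private void generate() {
            for (int candidate = 2; count < primes.length; candidate++) {
                if (isPrime(candidate)) {
                    primes[count++] = candidate;
                }
            }
        }

        // Divides only by the primes found so far; still just a sketch.
        private boolean isPrime(int candidate) {
            for (int i = 0; i < count; i++) {
                if (candidate % primes[i] == 0) {
                    return false;
                }
            }
            return true;
        }
    }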

The example they use is irrelevant. A solved problem can be written however one likes.

Code that will change, or can't ever be considered final, is the real challenge.

Cutting code into too many methods just makes it rigid. This could be the point, I guess, but if you need to change a method's name in order to reflect the method's intent, then you just wrote the classic unhelpful comment of:

    // check a is not null
    if (a != 0) { … }

Overuse of comments has the same issue as overuse of methods.

Without rigor, comments and method names will start to lie.

Because their content / names weren't necessary to understand the code, and should just not exist in the first place.

> For me, the fundamental goal of software design is to make it easy to understand and modify the system. I use the term "complexity" to refer to things that make it hard to understand and modify a system.

This explains everything that's wrong with modern software.

When you design a Formula 1 race car engine, the purpose of engine design is not to "make the engine easier to modify". It's to win races. And that depends on the race - a funny car engine, a Formula 1 engine, a Le Mans engine, a NASCAR engine, etc., are all different because the races are different.

Another example: when you design a building, the goal isn't to make it easier to understand the building. The goal is to meet the requirements of the building, its uses, requirements, environment, etc. Sometimes a better building is just more complicated, and making the architect or builders' jobs easier, while nice, isn't the goal.

Some things aren't supposed to be easy to understand, because ease of understanding is not the goal of the thing. Focus on the real goal of the thing, and achieving that; don't get distracted by ancillary goals.

  • Software is unusual in that it's never finished. This makes ease of modification a critical quality of software in a way that it isn't for Formula 1 race car engines or most buildings.

    Ease of modification was one of the top priorities in the design of the Model T Ford, because cars break down and must be repaired, and a car that is difficult or impossible to repair will cost its owner large sums of money. Software doesn't break down (though online services do) but for other reasons modification is a high priority.

    Perhaps short-lived buildings don't need to be easy to modify, especially if the architects have a very good understanding of the needs of the users over their lifetimes. Often that is not the case, though, and Christopher Alexander was famous in large part because much of his career was devoted to figuring out how to enable inhabitants of buildings to modify them more easily, so that their needs would eventually be met even if the architects guessed wrong decades in the past. Centuries-old stone farmhouses exist, too, and ease of modification is crucial for them; if they cannot be modified they cease to function in only a century or two at most.

    Whatever the goal of your software, it is crucial for the people who are modifying it over time to achieve that goal to be able to understand it.

  • A genius architect/Formula 1 engineer can design a building/engine that fits all the requirements astonishingly perfectly, but if the builders have difficulty understanding it, or later contractors can't figure out how to maintain or fix anything, then it's only a perfect design in theory and an awful design in reality. It's not a nice-to-have that makes people's lives a little easier; it defines the success of the project. The genius architect/engineer can insist that the complex design reflects the underlying domain as much as they want, but at some point they will have to justify it to someone else who isn't a genius.

    Obviously, writing code such that a first year comp sci student can understand what's happening and can start contributing immediately is absurd, but at the same time nobody builds anything in a vacuum. There's a certain legibility required within any context you're designing something for.

  • Most people aren't building Formula 1 cars. Buildings are a better analogy: they are designed to be maintained. You can replace a door handle without replacing the door or the wall, you can turn off the power to different sections to do repairs. Dangerous or complex parts are labelled, moved into their own rooms or cupboards, and locked.

  • This is what the soft in software means. It needs to be able to be adapted and change with its requirements.

    My favourite story from a previous job was working with high-speed scanners, OCR, data quality, typo correction - that kind of thing.

    The pipeline did its job well enough. But later we got a new contract where we accepted email submissions (where the customer had scanned stuff themselves). The guys couldn't hook that into the pipeline, so they set up a new pipeline - to print out the emails so they could then be scanned into the existing pipeline.

    I still get a good laugh about that to this day.

  • Writing software is more akin to building an engine factory than a single engine. At least, much of business software is used over a long period with lots of changes needed. Maybe writing a single game is more comparable to the race engine.

The prime number code hurts to read. I feel like Bob is living in a different reality than most of us.

  • Yes and no - I think he has different mental capabilities than most (most of the commenters here, at least) and in that sense actually is living in a different reality. Human brains function vastly differently. Two examples stood out to me.

    1) UB said he reads the code in full from left to right, with the if(isTooHot) example. I only read code that way as a last resort, if I really can't figure out what it is doing. Usually I look at a block or row and take it in more as a whole.

    2) UB said comments are annoying because he has to read them and keep the whole of the comment text in his mind. This again suggests he reads everything left to right, and that he can store everything he has read, up to a certain amount.

    My mind works nothing like that. I can hold very few words in my working memory but can instead hold concepts/ideas. For that to work well I need to see as much of the involved code as possible, and my mental image evaporates if I have to navigate too far from where I started.

Lots of negative comments about Uncle Bob in this thread. I personally didn't like Clean Code and really enjoyed A Philosophy of Software Design, but I do think that some of his other books are really solid.

I accept that non-fiction books on anything will oversell the value of their way, and try to take what I can at a more moderate level. Through that lens, Clean Code didn't give me much, but Clean Architecture did. The Clean Coder is also an interesting read on professionalism in software, and Clean Agile is an interesting read on Agile roots. I don't know anyone that practices "true" agile (nor do I care to do so myself), but there are some really solid ideas in there.

I get that Clean Code kind of had a cult-like following in that people followed it blindly, but damn some of these comments are just rude about Uncle Bob. I still think he's a pretty good author and has given me some advice through his other books that helped me a lot as a fresh faced dev.

  • Uncle Bob has been a horrible influence on our industry, and we're expressing that. He has not actually worked on much of anything, yet he's been able to make money selling his inexperienced opinions.

While I enjoyed the discussion as an exercise in stripping back positions to underlying principles, I find it a great irony that the overarching reason why they diverge on what is "good practice" is not discussed.

John sounds like he is about to start building a new type of database, and Bob sounds like he's knee deep in a 20 year old code base for a logistics company. Both of their positions are reasonable, and both optimized for specific contexts.

I found Bob's responses more measured (which I value a lot), with John's at times being more compelling. I do agree that over-composition is a real problem, and that Bob is on the wrong side of the line there. But to be fair, Bob and Clean Code come from a time when the opposite was the problem, and his position feels like a philosophy with an over-correction (albeit not necessarily a flaw) at its core.

IMO the "PrimeGenerator" example from Clean Code is horrendous and completely unreadable! This would be so much better as a single method/function with a few interspersed comments that explain the algorithm. I mean, just look at this abomination:

  private static boolean
  isMultipleOfNthPrimeFactor(int candidate, int n) {
    return candidate ==
      smallestOddNthMultipleNotLessThanCandidate(candidate, n);
  }

Not only is the method itself completely pointless, it also happens to have side effects! Who would expect this from the method name? So much for self-documenting code... Ousterhout rightfully calls him out on this bullshit.

In fact, Ousterhout makes such great points that I really want to read his book. Conversely, I'm now even less inclined to read Clean Code.

  • > This would be so much better as a single method/function with a few interspersed comments that explain the algorithm.

    I hadn't read the whole article when I wrote that comment. It turns out that Ousterhout provides a rewritten version of "PrimeGenerator" that does exactly this. At least UB concedes that this version is indeed much better.

This was such a riveting and literary read, I enjoyed it and couldn’t put it away, like a novel where I was invested in the characters!

Are there any other such reads in the software engineering field?

They are both mostly right, but the devil is in the details, and the trick is to not get too dogmatic about things. For example, function length is one of those things that you can obsess about and debate endlessly.

What's the value of extracting a function that is used only once or twice? It's probably very limited. It's debatable whether that even should be a public function and whether you should encourage more use. And then we can look at the function declaration as well. Does it have a lot of parameters? Is there any complexity to its implementation? Does it have tests? Are there going to be a lot of uses of the function? If the answer to all those questions is no, you could probably inline it without losing much. But the flip side is that you wouldn't gain much by doing so. A small function that is used a lot is probably somewhat valuable.

And there's a third thing that needs to be considered: does the function increase the API surface of your module? Having lots of private functions makes your module hard to understand. Having lots of public functions makes the API less cohesive.

So, there's a grey area here. Languages like Kotlin give you additional options: make it a nested function, make it an extension function, put it in a companion object, etc. You can put functions inside functions, and that can help readability. The whole point of doing that is preventing usage outside the context of the outer function. Nested functions should probably be very short, and their only goal should be to make the outer function's logic more readable/understandable. It's not something I use a lot, but I've found a few uses for it. There's no point to using nested functions other than readability.

And speaking of Kotlin, its standard library is full of very small extension functions. Most of them are one or two lines. They are clearly valuable because people use them all the time. You get such gems as fun <T> Collection<T>?.isNullOrEmpty(): Boolean (note the nullable receiver), which helps make your if statements a lot more readable and less flaky. Also works on Java lists. Stuff like that is a big part of why I like Kotlin.

I tend to dumb down a lot of advice like both are debating here to cohesiveness and coupling. In the context of functions, you get coupling via parameters and side effects (e.g. modifying state via parameters) instead of return values. And you lose cohesiveness if a single function starts doing too many not so related things. High coupling and low cohesiveness usually means poor testability. You'll find yourself mocking parameters just to be able to test a function. Improving testability is a valid reason for extracting smaller, easier to test functions.

It's always fascinating to me to see this subject talked about, because I've been programming for years in a niche field (audio plugin DSP development) and have interacted with Clean Code programmers, but I seemingly cannot grasp what they do at all.

This is to the point that, in order to program and do the things I want to do, I have to essentially write nearly everything out longhand, to the point of unrolling things in repetitive fashion, and organizing things in blocks of code separated by comments about what's being done in each block. I can do this so predictably and regularly that my code gets parsed by other people's more Clean Code and ingested as sort of blocks of program behavior to be used in other software, to the point where it's an 886-star repo with 79 forks: not Bob-scale, but then I haven't written books or revolutionized corporate coding.

I've had to learn useful things about where my approach doesn't take advantage of its hypothetical strengths: heedlessly unrolling everything doesn't give you speed boosts, and I've had to learn to declare variables nearer to where they're used. But I've also had to learn that I could do the opposite of Clean Code for performance gains. Back in the day, you could assign variables for calculations to avoid Repeating Yourself, but on modern processors it turns out… in addition to techniques like running calculations in parallel on wide data words that contain different data processed together… you can even take advantage of how eager CPUs are to do math, to avoid creating extra variables. It can be more efficient to just do the math a couple times rather than create a whole new variable just to skip the math.

This world makes sense to me. It acts like assembly language, except it's C (not even C++). I don't know to what extent there are other people who think this way, or struggle to keep track of even simple abstractions.

It's just the context with which I see Bob acolytes, rather than just declaring a variable to not do the math twice, breaking it off into about twelve different methods for seemingly purely semantic reasons, and insisting anything else is stupid. And there I am, producing and re-using reams of shockingly primitive code that seems to work and where I can return to it, even a couple decades later, and have no trouble figuring out what I did.

There's something to be said for being SO stupid that your work just works.

I was just thinking about what AI-assisted coding brings to the design discussion, especially as AI becomes more and more powerful and we rely on it more and more. You still want to make things modular and easy to understand, so that AI understands them easily and the information needed to modify a module can fit in a relatively small context window. The difference is that it is now very easy to make large-scale measurements of which code style an LLM understands better, so maybe some of these debates will be decided relatively objectively!

As for the topic itself: the points discussed are relatively trivial, surface-level stuff - mostly I agree with POSD - but these will be handled by AI anyway. I guess humans will use the spare brain capacity to deal with the real deep design questions (for a while).

buried at the end: there's a planned second edition of Clean Code! Given Bob's intransigence in this conversation, I wonder what he'll change.

My take on the prime example:

    import itertools
    
    def generate_n_primes(n):
        """
        Generate n prime numbers using a modified Sieve of Eratosthenes.
    
        The algorithm keeps track of a list of primes found so far,
        and a corresponding list of 'multiples', where multiples[i] is a multiple of primes[i],
        (multiples[i] is initially set to be primes[i]**2, see the optimisations section below).
    
        The main loop iterates over every integer k until enough primes have been found,
        with the following steps:
        - For each prime found so far
        - While the corresponding multiple is smaller than k, increase it by steps of the prime
        - If the multiple is now the same as k, then k is divisible by the prime -
            hence k is composite, ignore it.
        - If, for EVERY prime, the multiple is greater than k, then k isn't divisible by any
        of the primes found so far. Hence we can add it to the prime list and multiple list!
    
        There are a few optimisations that can be done:
        - We can insert 2 into primes at the start, and only iterate over every odd k from there on
        - When we're increasing the multiple, we can now increase by 2*prime instead of 1*prime,
        so that we skip over even numbers, since we are now only considering odd k
        - When we find a prime p, we add it to the prime and multiple list. However, we can instead add
        its square to the multiple list, since for any number between p and p**2, if it's
        divisible by p then it must be divisible by some smaller prime q < p
        (i.e. it will be caught by an earlier prime in the list)
        """
    
        # Insert 2 into primes/multiples
        primes = [2]
        multiples = [4]
    
        # Iterate over odd numbers starting at 3
        for k in itertools.count(3, 2):
            # If we've found enough primes, return!
            if len(primes) >= n:
                return primes
    
            # For each prime found so far
            for i in range(len(primes)):
                # Increase its corresponding multiple in steps of 2*prime until it's >= k
                while multiples[i] < k:
                    multiples[i] += 2 * primes[i]
    
                # If its corresponding multiple == k then k is divisible by the prime
                if multiples[i] == k:
                    break
            else:
                # If k wasn't divisible by any prime, add it to the primes/multiples list
                primes.append(k)
                multiples.append(k ** 2)
    
        return primes

Some might find the docstring as well as comments too much - I find the comments help relate the code to the docstring. Open to suggestions!

On reflection, my attitude to books like these indicated where I was in my understanding of programming. They used to be useful life-buoys that one clung to for dear life early in one's career in a fast-moving and often-changing industry. Then they become an interesting side-note reminding one of what they clung to as the good precepts that served them well stand out from the rest of the books. And finally they become unnecessary and seemingly dogmatic when one has become adept at swimming. In other words, essential reading depending on where you find yourself :) Disclaimer: out of these two, I only read Clean Code.

Uncle Bob is probably the biggest scammer in software. What a complete pile of garbage. So much energy wasted on all these design patterns, SOLID, and other OOP bullshit.

Turns out you can just pass immutable data in and get immutable data out. Who would have guessed? The whole 90s - 00s Java OOP garbage still gives me nightmares

"That's a valid concern. However, it is tempered by the fact that the functions are presented in the order they are called. Thus we can expect that the reader has already seen the main loop and understands that candidate increases by two each iteration."

I think this misses the point entirely. If I had to read the entire code to understand the behavior of that method, then is it really cleaner? Side effects are evil.

UB says at one point:

> Would that we had such a crystal ball

And then it seems like he actually found his crystal ball, because in the very next question he refers to things that have not yet occurred in the conversation:

> interpreting your rewrite (below)

And later:

> In your solution, which we are soon to see below

This makes it somewhat confusing to read, with answers being based on counterpoints that will only have been made in the future. (Which, I suppose, is similar to the problem Ousterhout has with UB's PrimeGenerator example.)

The primary goal of software design should be to facilitate understanding and modification for future developers, emphasizing the importance of code readability.

This is actually a great read.

I'm in the middle of designing a course for a client on teaching software engineering best practices for data scientists (and folks who live in Jupyter all day).

There seems to be a huge lack of material for these types that aren't "programmers", don't live in an "IDE", and are essentially writing code all day.

"Do One Thing" is to me maybe best understood in the context of the Single Layer Of Abstraction Principal - it has helped me numerous times to be very intentional about following SLAP in complex code, and Do One Thing seems to fall very naturally out of it.

Crucial context here: Ousterhout is one of the great programmers who built the free software world we live in today, and Uncle Bob is a faker. Ousterhout is not without his problems (Stallman famously called him a "parasite" on the free software community, as well as fervently disagreeing with his technical taste) but he's written truly world-changing software. By contrast, Uncle Bob is a windbag book author who has never managed to write any software worth using, to my knowledge.

Ousterhout to Uncle Bob:

> maybe you were surprised that it is hard to understand, but I am not. Said another way, if you are unable to predict whether your code will be easy to understand, there are problems with your design methodology.

This debate is full of treasures like this. What a brilliantly clear and understated way to expose charlatanism!

What is this “world-changing software” I'm saying Ousterhout has shipped? Tcl. (Hold on, now, don't downvote just yet.) Tcl has been a crucial enabling technology for EDA and automated regression testing since literally the 01980s. Probably every VLSI chip in the computer you're reading this on was designed, verified, and tested with workflows involving unholy amounts of Tcl. GCC's test suite is also Tcl. Still.

Automated testing in the 01980s? Yes. It's true that automated testing wasn't very prevalent in the software world until the Agile guys (Uncle Bob and his less incompetent compatriots) popularized it around the turn of the century, but EEs and compiler engineers have been pervasively automating testing a lot longer than that, and Tcl was for a long time the least awful option, believe it or not. And that was John Ousterhout's doing.

Do you know what the SPICE developers did to make SPICE scriptable, before there was Tcl? They linked csh into it. Motherfucking csh. If you've never tried to maintain a large script in csh, you do not know the meaning of suffering.

Good programmers write good software; bad programmers write bad software, or no software. Ousterhout has written one of the few pieces of software that can be called great. (In its historical context. In 01978 csh was great software too.) What software has Uncle Bob written?

Listening to Uncle Bob's programming advice over Ousterhout's would be like listening to your middle-school English teacher's writing advice instead of Stephen King's. It's not that King could never give you worse advice, but if you need your English teacher's advice, generally your judgment will not be good enough to distinguish the rare occasions King gets it wrong.

  • I'm a big Tcl fan, but Ousterhout has created many other important things - see https://en.wikipedia.org/wiki/John_Ousterhout .

    • I don't think Magic and Raft are anywhere close to the importance of Tcl, though I've probably at some point used a chip that was designed in Magic. And, while I like Tk and find it inspiring, the number of Tk apps I can remember ever using (that I didn't write myself) is maybe three, and they weren't very important to me.

      As for Sprite-LFS, I really enjoyed the Sprite LFS paper and found it inspiring, but my conclusion was that Seltzer's followup BSD-LFS paper falsified some of its more surprising claims, and ultimately the underlying predictions about the relative trends in RAM size and disk size turned out to be wrong, undercutting the key advantages of the LFS approach overall. Vaguely LFS-like approaches are important to SSDs and SMR disks, but WAFL was already about that LFS-like in 01995 (which is admittedly after Sprite-LFS), and SSD FTLs also do some not-very-LFS-like things. So ultimately I don't think Sprite-LFS turned out to be that important.

      Sprite as a whole I'm less able to evaluate. I've never been an OS researcher, but I've spent a fraction of my life reading SOSP and HotOS papers and systems dissertations, and I don't remember seeing anything that came out of Sprite except Sprite-LFS. I was thinking maybe doors in Solaris did, but no, that was Sun's Spring, not Sprite. Other side of the Bay, where Ousterhout took Tcl eventually. So it's possible Sprite was a great achievement, but I haven't noticed it. But I think more likely it's one of those things where we tried the "obvious" thing (SSI across a bunch of workstations) and found out why it was bad, which influenced later efforts like PVM, MOSIX, Beowulf, distcc, MapReduce, Ceph, etc., because Sprite stepped on the mines so they didn't have to. There's a nice retrospective (by Ousterhout, natch) at https://web.archive.org/web/20150225073211/http://www.eecs.b....

      So I don't think Tk, Magic, Raft, and Sprite-LFS really have the same level of significance as Tcl. Sprite maybe.

      I don't think it's bad to spend a lot of time and effort on things that turn out to not be very significant, for two reasons. One is that, after a long enough time, very little indeed remains very significant. (Who, today, can recount the disappointments of the Minoan queens?) The other is that things you could do that are significant—even for a little while—are usually things that will probably fail. So if you spend a lot of time doing things that might be significant, you'll fail at most of them.

      But in Ousterhout's case, one of those things did succeed brilliantly, and it was Tcl.

Cleanliness and design are highly correlated with I/O, side effects, and state.

Most programmers know about this in 2025 but they didn't back then. Looks like the authors don't even mention it.

I've been programming since the early 80s and have never seen real-world production code written in the so-called "clean code" style.

It's sad that we demoted the field from engineering to philosophy. But it is what it is.

Next step - fashion and belief.

  • Philosophy is the basis of reason, math, and science. It's sad that "engineers" don't understand it or its import.

    • Engineers believe in definitions. By definition, philosophy is not a scientific discipline, because as soon as a discipline becomes scientific it... stops being philosophy.

      As Alexander Pyatigorsky famously wrote, "the value of philosophy is in that nobody needs it".

      1 reply →

  • It's an improvement over demagoguery and blind rule-following.

    Moreover, the book argues for engineering principles (in pretty much all possible senses of that phrase).

    • Sure!

      I'm not saying that philosophy is bad. Maybe making software was just never meant to become an engineering discipline. I mean, making clothes, laws, and music isn't. And that's fine.

      But engineering does imply some rule-following.

> I bemoan the fact that we must sometimes use a human language instead of a programming language. Human languages are imprecise and full of ambiguities. Using a human language to describe something as precise as a program is very hard, and fraught with many opportunities for error and inadvertent misinformation.

This quote from Uncle Bob is shameful, considering that he has built 100% of his career on writing English, not code.

  • An interesting contrast to it is Ousterhout's observation:

    > If you can visualize a system, you can probably implement it in a computer program.... This means that the greatest limitation in writing software is our ability to understand the systems we are creating.

    Though interestingly it is in marked contrast to a different statement in the "Software Design Book" Google mailing list:

    > John Ousterhout, Aug 21, 2018, 12:30:15 PM

    > I've never felt that graphs are a particularly useful way of describing software structure. The interactions between classes end up so complicated that the graph becomes an unreadable mess. Also, I'm not sure that the complexity of a graph representation of software correlates with its practical complexity (the graph representation might look very complicated, but the software might still be pretty easy to maintain).

    and I'd be interested if someone knows of a text/video/interview which resolves that twain, or what sort of visualization is advocated for/recommended.

    • Structure graphs are rarely useful for me, but visualizing the data flow is how I think about code in general. Sometimes it's graph-like, but more wishy-washy and I'm only holding the relevant parts for the task at hand in my head rather than everything.

All great, but generally useless in most big-corp projects with offshoring, where we are already happy that we actually delivered something that works in the first place.

This prime generating code has fascinated me for a while.

It was first described by E.W. Dijkstra "Notes on Structured Programming (EWD249), 2nd; 1970; TH Eindhoven". https://www.cs.utexas.edu/~EWD/ewd02xx/EWD249.PDF

And then reformulated by D.E. Knuth "Literate Programming; 1984" http://www.literateprogramming.com/knuthweb.pdf

Both use it to demonstrate their way of deriving and documenting an algorithm.

And then R. Martin used it again in his book "Clean Code; 2008". (Though I'm not 100% certain what he wants to demonstrate with his rather difficult formulation.)

For the sake of this comment I am going to call it "Dijkstra's algorithm".

If we look at it from an engineering perspective, then Dijkstra's algorithm is never the best choice (at least not in 2025).

If performance is not the most important aspect, e.g. if we need fewer than 100000 primes, then straightforward trial division is just as fast - and soooo much simpler (basically impossible to get wrong). Note that this is different in 2025 than it was in 1970; back then multiplications and divisions were much more expensive, so it made sense to minimize them.
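
For reference, a minimal sketch of that straightforward trial-division approach (my own Java, not code from either book or from Dijkstra):

    public final class TrialDivision {

        // Collect the first n primes by checking each candidate against
        // divisors up to its square root. Simple and hard to get wrong.
        static int[] firstNPrimes(int n) {
            int[] primes = new int[Math.max(n, 0)];
            int count = 0;
            for (int candidate = 2; count < primes.length; candidate++) {
                if (isPrime(candidate)) {
                    primes[count++] = candidate;
                }
            }
            return primes;
        }

        private static boolean isPrime(int candidate) {
            for (int d = 2; (long) d * d <= candidate; d++) {
                if (candidate % d == 0) {
                    return false;
                }
            }
            return true;
        }

        public static void main(String[] args) {
            // Prints [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
            System.out.println(java.util.Arrays.toString(firstNPrimes(10)));
        }
    }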

If performance is the most important aspect, then the sieve of Eratosthenes is much faster. And you can implement the sieve with a sliding window, which is just as fast (or even a bit faster because of caching) and uses a bounded amount of memory.

Concretely - my implementation of the sliding window sieve of Eratosthenes is about 50 times faster than Dijkstra's algorithm (on the same machine) when generating the first 200 million primes (7.5 sec vs 6 min).

The reformulation - both by Ousterhout and Martin - computes multiples for every new prime found. This will quickly lead to overflow, e.g. the 6543-th prime is 65537, so its square will overflow 32 bit unsigned integers. Dijkstra's formulation on the other hand only computes multiples that are actually used to filter, the multiples are never (much) larger than the candidate.
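
To see the overflow concretely (my own check; Java ints are 32-bit signed, and the unsigned 32-bit range is exceeded as well):

    public final class OverflowCheck {
        public static void main(String[] args) {
            int p = 65537;                     // the 6543rd prime
            System.out.println(p * p);         // 131073: 65537^2 = 4295098369 has wrapped past 2^32
            System.out.println((long) p * p);  // 4295098369: the actual square
        }
    }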

Note that computing the multiples so eagerly will also make the multiples vector unnecessarily large (wasting memory).

Knuth and Dijkstra both remark that there is a subtle problem with the correctness of the algorithm. When incrementing lastMultiple and then later accessing primes[lastMultiple] and multiples[lastMultiple], it is not obvious that those will already be assigned. In fact they will be, but it is very difficult to prove that. It is a consequence of the fact that for every integer n there is always a prime between n and n*n, which follows from Bertrand's postulate, a difficult number-theoretical result.

So if you look at Ousterhout's and Martin's reformulation and think "ah yes - now I get the algorithm", then beware: you've missed at least two aspects that are relevant for the algorithm's correctness. ;-)

APoSD is correct on the length of methods; the methods given in the example in CC are ridiculously short. CC is more correct on comments than APoSD: mandating comments in certain places, especially, leads to very low-quality and, frankly, utterly disgusting comments that point out, helpfully, that the 'get_height' method 'gets the height'. CC is also more correct on TDD than APoSD. The danger of just focusing on implementation details over the structure of the API is always there, but TDD has a refactor step to fix that. The general idea of working in small steps, with a safe state between every small step, is worth its weight in gold.

I think the reason most people have a problem with Uncle Bob is because they know their own practices are a far cry from his recommendations, and they take his prescriptive and uncompromising advice as a personal attack.

I also wonder how many people interpret his advice as that of a mindless, pedantic dictator.

My own introduction to UB was from some random YouTube video he made about programming languages, so my first impression of him included his humor and his ability to see both sides of an issue while being unafraid of having a strong opinion. I really enjoy speaking with and listening to people with strong, long-marinated opinions, regardless of whether I agree or not. At the very least it means they've put a lot of thought into it, which makes for better discussion and learning.

I also lack a long history of code commits, being more of a dabbler here and there, so perhaps I have a smaller surface area for UB's jabs to land upon. Still, I acknowledge that a Clean Code Nazi would probably rip me to shreds for some of the practices I've followed and some that I continue to follow. But improvement is a much more achievable goal than perfection, and gleaning valuable information is better than being dogmatic.

In the end I love listening to UB talk. I don't follow all his practices but I do keep them in the back of my mind. If not worth following strictly they are always worth considering, especially the intent behind them.

So when I see his opinions on comments or his opinions on abstraction and variable naming my first instinct is not to lament about how he is poisoning our youth or insulting my code, but rather to ask myself how I can make use of his perspective. I'd encourage others to do the same; it's much more fun that way, not just for programming but for everything.

As for those of you stuck in "Clean Code" hell with oppressive supervisors demanding strict adherence... that sounds like a personal failure, or a personal incompatibility, or both. I would blame the messenger there, not the message.

  • My problem with UB is that by promoting his brand and his style so hard to junior developers (in both subtle and fairly overt ways), he's warped the minds of a lot of people in a way that affects them, and the people around them, for a long time. I don't really have anything personally against UB, but dealing with the code that one of his acolytes creates is deeply frustrating, and often they claim it's "best practice" even though there's basically no validation behind most of his claims, and if you read the criticisms, they're fairly common sense.

What a great discussion between two prominent figures in the field of software design. Thank you for posting this!

  • You guys really hated that I found the discussion interesting?

    • It seemed like a content-free comment. I was no better informed after reading it than before, and it was of no artistic or cultural value. It did not induce me to question any of my assumptions or investigate anything. It expressed your experience, but your experience was not unusual or surprising in any way, except perhaps that you did not know that "Uncle" Bob Martin was an incompetent charlatan. Possibly those were among the reasons people downvoted it.

      1 reply →