Comment by ilitirit

9 months ago

It still blows my mind how dogmatic some people can be about things like this. I don't understand why anyone takes these things as gospel.

Who else has had to deal with idiots who froth at the mouth when you exceed an 80-character line margin?

And it's not just programming styles, patterns and idioms. It's arguably even worse when it comes to tech stacks and solution architecture.

It's super-frustrating when I'm dealing with people in a professional setting and they're quick to point out something they read in a book, or even worse - a blog - with very little else to add.

This was especially bad during the NoSQL and microservices hype. I'm still somewhat feeling it with PaaS/SaaS and containerization. We have so many really, really basic things running as Function Apps or Lambdas, or simple transformations running in ADF or Talend, that add zero value and only add to the support and maintenance overhead.

Always keep in mind that sometimes the only difference between yourself and the person writing the book/blog/article is that they actually wrote it. The fact that their opinions were written down doesn't make them fact. Apply your own mind and experience.

I cringe thinking about PR comments I left early in my career.

"akshually this should try to follow more SOLID principles"

But, coming from a formal engineering background, I thought this was what it meant to be a professional software engineer. Little did I know these "principles" were just the musings of a consultant, lol. Turns out most folks have good intentions and want a standardized way to write code, but for some reason it always results in code that looks like the Enterprise FizzBuzz meme repo.

  • For some reason in software there seems to be an incredibly large space for non-evidence based thinking and belief systems.

    I wonder if that's because in a lot of cases (depending on the domain) the space of possible valid/working solutions is near infinite, and if you don't have hard requirements that are backed up by measurements, you're free to conceive of any valid system structure and justify it as 'better' without that ever being something that can be observed and measured.

    • > For some reason in software there seems to be an incredibly large space for non-evidence based thinking and belief systems.

      The secondary problem is that book authors have become extremely good at inventing pseudo-evidence to support their claims. It most commonly takes the form of “I talked to X companies with Y total number of employees over Z years and therefore I know what works best”.

      If you cut out all of the grandstanding, it’s nothing more than “just trust me” but in a world of social proof it sounds like it’s undeniable.

  • The mark of a good engineer is knowing when this sort of handwaving is actually meaningful and helpful. Formality for its own sake is an anti-pattern, but who am I telling?

    > It still blows my mind how dogmatic some people can be about things like this. I don't understand why anyone takes these things as gospel.

IMO, this is one of the key differences between the two books. CC has a vibe of hard and fast opinion-based rules that you must obey, whereas APoSD feels more like empirically-derived principles or guidelines.

  • APoSD is written by a highly respected computer scientist with a tremendous list of technical achievements and also a strong teaching history as a professor, while CC was written by someone whose resume is primarily related to writing about software, not writing software.

I think it stems from fundamental misunderstandings about what it is one is actually trying to do when writing code.

Coding is about building a computable model of some facet of existence, usually for some business. When it comes to model building, comprehension and communication are paramount. Performance and other considerations are also important but these are arguably accidental features of machines and, in an ideal world, would not actually affect our model.

Similarly, in an ideal world, we wouldn't even need programming languages. We'd be able to devise and explain computational systems in some kind of perfect abstract language and not need to worry about their realization as programs.

I think a lot of these blanket philosophies confuse people by not emphasizing the higher level aspects of the activity enough. Instead people get hung up on particular patterns in particular paradigms/languages and forget that the real goal is to build a system that is comprehensible to the community of maintainers that need to work with it.

  • It seems that each software design/development system, ideology, and practice has a good reason it was created, and has certain inherent benefits. Each may solve (or at least help with) some common problem.

    For instance, abstraction is good and short methods are good to some extent (who wants to read a 2000-line function?), but as John points out in the article, these can be taken too far, where they create new and perhaps worse problems.

    It seems there's a pendulum that swings back and forth. We go from big up front design, to Extreme Programming, to a pervasive object-oriented design culture, back to other paradigms.

Yes, Uncle Bob is certainly capable of being pedantic. A friend of mine, a Smalltalk Consultant, partnered with him for a while. "With Uncle Bob, it's his way or the highway."

His clean code work is certainly pretty dogmatic. As I recall, he says that Java is not object oriented.

But if my memory serves me correctly, his book about C++ (Designing Object-Oriented C++ Applications Using the Booch Method) has some excellent parts. His description of the difference between a class and an instance is one of the better ones.

Then there is the famous Sudoku puzzle incident, in which a student trying test-driven development can't get to the solution. It is a very instructive incident which illustrates that TDD is unlikely to help you solve problems that are beyond incremental changes. Peter Norvig's solution makes that very clear. Uncle Bob does not seem to realize that.

> Who else has had to deal with idiots who froth at the mouth when you exceed an 80-character line margin?

But I admit in my youth, I was pretty dogmatic about languages and development practices, so I've been that guy.

  • >> ... TDD is unlikely to help you solve problems that are beyond incremental changes.

    Thank you for expressing this niggling problem with TDD. Personally, I just cannot use it for "new stuff"; I need to explore and create directly with "real" code for anything non-obvious.

    • I suppose it depends on how you use it. If you view it like you would a bit of paper and a pencil, then it works a treat.

      I want this to do X when I ask it with Y param. Write out your specs before you sit down to code:

      "When a user enters a bad password... print this to the screen."

      TDD works great for new stuff when viewed as a paper replacement.
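
      Something like this Jest-style sketch, written before any implementation exists (the module and all names here are invented for illustration):

      ```typescript
      // Spec first, as a replacement for paper. `login` doesn't exist yet;
      // this test *is* the note "when a user enters a bad password...".
      import { login } from "./auth"; // hypothetical module

      test("tells the user when the password is bad", () => {
        const result = login("alice", "wrong-password");
        expect(result.ok).toBe(false);
        expect(result.message).toBe("Invalid username or password");
      });
      ```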

  • > Java is not object oriented.

    Java technically isn't OO in the strictest sense (Smalltalk, Ruby). It is OO in the modern sense (where modern >= 1980s, C++). Though I am not sure if this is what Bob is referring to - I don't have any respect for the man or his ideas, so my biased guess is his definition of OO is shared only between him and his fans.

  • >But if my memory serves me correctly, his book about C++ (Designing Object-Oriented C++ Applications Using the Booch Method) has some excellent parts.

    If my memory serves me correctly, Grady Booch himself had a book with roughly the same title (except that his name was not in the title, of course; he was the author). I think I read a good amount of it long ago and liked it.

    Edit: I googled it; the book is mentioned under the "Booch method" section here:

    https://en.m.wikipedia.org/wiki/Grady_Booch

  • Bob's had a long life with too much success. He really believes in himself. But I have to say that the other guy was aggressive and bad, even though I am more inclined to agree with him. He willfully misrepresented Bob's ideas. I thought he presented more misguided certainty than Bob did. No bueno.

    • The "other guy" is John Ousterhout, author of the Tcl scripting language.

      Although I can see why you might consider him more "aggressive", I personally think it matters much more that he was, in general, far more descriptive of his reasoning.

      Merely having an opinion is the easy part; being able to clearly articulate the reason(s) why one holds a particular opinion is far more important, especially in this kind of conversation, and in that regard I repeatedly found UB lacking.

    • I didn't read it as aggressive, more like engaged in the discussion: he really wanted to get to the bottom of things and have his views and opinions challenged, but UB was a bit defensive and closed. Even though UB has strong opinions, he wasn't as engaged in building a case for them; he just tried not to budge, or maybe he conceded smaller points and then retreated to a less dogmatic stance.

  • Ah, yes. The famous Sudoku solver controversy.

    In 2006 Ron Jeffries wrote four blogs about solving Sudoku in Ruby with TDD. His blogging effort ended before he completed the solution. I think he got interested in something else and left the whole thing hanging.

    That same year Peter Norvig wrote a Sudoku solver in Python using a constraint based approach. You can see their respective documents here. https://ronjeffries.com/categories/sudoku/… https://norvig.com/sudoku.html

    The anti-TDD lobby, at the time, hailed the two documents as proof that TDD didn't work. Ha Ha, Nya Nya Boo Boo.

    I was aware of this silliness, but never bothered to study it. I had better things to do. Until March of 2020. Then I thought I'd use Sudoku as a case study for Episode 62 in http://cleancoders.com.

    I had not read either of the previous documents and decided to maintain that ignorance while writing the solver in Clojure using TDD. It turned out to be a rather trivial problem to solve. You can see my solution in http://github.com/unclebob/sudoku

    I don't know why Ron stopped blogging his Sudoku solver in 2006; but he picked it up again in 2024 and has written much more about it.

    The trick I used to solve Sudoku with TDD was to consider the degenerate cases. Sudoku is usually a 3x3x3x3 grid; let's call this a rank-3 problem. I started with a rank-1 problem, which is trivial to solve. Then I moved on to a rank-2 problem, which was relatively simple to solve but also very close to a general solution. After that I could solve rank-N problems.

    The TDD strategy of starting with the most degenerate case (rank 1) and then gradually adding complexity may not have been well known in 2006; TDD was pretty new back then. If you explore Ron's first four blog posts, you can see that he briefly considered rank 2 but opted to go straight to the rank-3 case. The sheer number of variables for each test (81) may have played a role in his loss of interest. In my case (rank 2) I had far fewer variables to deal with.
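
    For the flavor of it, here is a rough TypeScript sketch of that degenerate-case progression (the real solution linked above is in Clojure; this toy, with invented names, is purely illustrative):

    ```typescript
    // A rank-N backtracking solver whose general shape falls out of
    // handling the rank-1 and rank-2 cases first. 0 marks an empty cell.
    type Grid = number[][]; // side x side, where side = rank * rank

    function solve(grid: Grid, rank: number): Grid | null {
      const side = rank * rank;
      const g = grid.map(row => [...row]);

      // Values not yet used in the row, column, or rank x rank box of (r, c).
      function candidates(r: number, c: number): number[] {
        const used = new Set<number>();
        for (let i = 0; i < side; i++) {
          used.add(g[r][i]);
          used.add(g[i][c]);
        }
        const br = r - (r % rank);
        const bc = c - (c % rank);
        for (let i = 0; i < rank; i++)
          for (let j = 0; j < rank; j++) used.add(g[br + i][bc + j]);
        const out: number[] = [];
        for (let v = 1; v <= side; v++) if (!used.has(v)) out.push(v);
        return out;
      }

      function fill(cell: number): boolean {
        if (cell === side * side) return true;
        const r = Math.floor(cell / side);
        const c = cell % side;
        if (g[r][c] !== 0) return fill(cell + 1);
        for (const v of candidates(r, c)) {
          g[r][c] = v;
          if (fill(cell + 1)) return true;
        }
        g[r][c] = 0;
        return false;
      }

      return fill(0) ? g : null;
    }

    // Rank 1: the most degenerate case, a 1x1 grid.
    console.assert(JSON.stringify(solve([[0]], 1)) === "[[1]]");

    // Rank 2: a 4x4 grid, already very close to the general solution.
    const rank2 = solve(Array.from({ length: 4 }, () => [0, 0, 0, 0]), 2);
    console.assert(rank2 !== null && [...rank2[0]].sort().join("") === "1234");
    ```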

  • Wasn’t that Ron Jeffries who failed to solve that?

    I think that says more about the person at the keyboard and their lack of familiarity with the solution space than about TDD per se. You still need insight and design with TDD; blind incrementalism was never a good idea.

    • I agree.

      I have used TDD professionally in several development teams. It's useful in the right team, and it works well when you are not too dogmatic about it. As with everything, you need people in the team who are experienced enough to know when and where. I think the same is true for any tool, coding standard, best practice, or what have you. You have to know when to deviate.

      I've also taught entry-level university courses in introductory programming. I believe that TDD can be a good tool for teaching programming. Students tend to sit down, write a complete program, and then start debugging. TDD teaches them to write small bits at a time and test as they go.

    • UB's description of how to do TDD does not acknowledge that some problems require a different level of thinking, and TDD as he describes it does not account for that.

To restate something I've said here last month:

I'm fond of saying that anything that doesn't survive the compilation process is not design but code organization. Design would be: which data structures to use (list, map, array etc.), which data to keep in memory, which data to load/save and when, which algorithms to use, how to handle concurrency etc. Keeping the code organized is useful and is a part of basic hygiene, but it's far from the defining characteristic of the craft.

  • I disagree entirely. Design is fundamentally a human-oriented discipline, and humans work almost exclusively with code before it is compiled. A strong shared mental model for whatever we're doing is as much a part of software development as any code that runs on a computer.

    Programming languages can (should!) be amazing tools for thought rather than just tools for making computers do things; using these tools to figure out what we're doing is a critical part of effective development. The highest-leverage software engineering work I've seen has involved figuring out better ways of thinking about things: developing better tools and abstractions. Tools and abstractions compound since they fundamentally impact everything built on top of them. A good high-level design is the difference between a team that can add some specific capability in a day, a team that would take six months and a team that would say it cannot be done.

    • I agree; lately I keep having the thought that by far the hardest part of programming is code organisation, whether that's where the files go, or how you've wrapped up commonly used idioms like validation or data access in easy-to-use abstractions.

      It's so easy to get it spectacularly wrong and end up in a mess.

      And it seems so deceptively pointless at the start. It's easy to throw together a greenfield project and get something working but make a complete mess of the organisation. But then it becomes so expensive, so quickly, to make changes to that code.

      I've joined quite a few projects after a year or two of someone else running them, and so often there's an imposing mass of code that basically does sod all. Bad architects who don't understand why they're even using the patterns they do, or mid/junior-level coders running projects, is basically a recipe for the project grinding to a halt just when it looks like you're getting near the end.

      It's when thousands of lines are easily refactored down to hundreds that you start thinking: how can these people honestly believe they have the ability to lead a project? They are so clearly out of their depth it's depressing.

      As an industry, we seem completely unable to get management to distinguish genuine senior developers from people who will never be one.

  • My take is that the book also works as a source of authority for aspiring SSR and SR devs.

    Comments about code style are usually subjective, and they can be easily dismissed as personal preference or, in the case of a Jr dev, as a lack of skill.

    Until they bring up "The Uncle Bob book". Now, suddenly, a subjective opinion from a Jr dev looks like educated advice sourced from solid knowledge. And other people now have a reason to listen up.

    All of this is totally fabricated, of course. But it's like the concept of money. It's valid only because other people accept it as valid.

  • > Keeping the code organized is useful and is a part of basic hygiene, but it's far from the defining characteristic of the craft.

    I'm with you, but I don't think it makes sense to elevate one absolutely over the other as the "defining characteristic." Either one can tank the development of a piece of software and prevent it from coming into being in a useful way.

    Arguments about which aspects of software are more important than others usually arise between people who have personally suffered through different ways that projects can fail. Any aspect of software development will feel like the "defining characteristic" if it threatens to kill your project.

    • > Any aspect of software development will feel like the "defining characteristic" if it threatens to kill your project.

      That does not make sense to me. There can be a thousand things that can kill a project. One has to consider the odds for each of them.

  • Granted, while I program a lot, I'm not employed as a programmer per se. My impression is that programming is easy and fun, but software development is hard and laborious. Things like hygiene are among the differences between the two.

    • 100%. Dealing with legacy issues is much more laborious and also complicates hygiene.

  • Systems need to be able to handle all kinds of stresses placed on them during their useful life. The runtime bytecode/machine code/config is what deals with the actual running of the system. The code is what deals with the engineers making future modifications to it. The monitoring system deals with being able to allow operators to ensure the system stays up. All of these affect the reliability and performance of the deployed system during its lifetime. All of them are a part of the design of the system.

  • > code organization

    It's also the code documentation.

    Having documentation that is legible is good, right? And so it's reasonable for a reviewer to say "this is hard to read", since illegible documentation is failing at its primary purpose.

Professionals in other industries don't "just" write books, in the sense that a field usually has several acclaimed authors who put solid work into ensuring their books make sense. While there are disagreements in other fields, and some nonsense conventions, the conventional wisdom is usually at least good enough to make you a good professional.

In programming it's the Wild West. Many claims are made based on nothing at all. It's very rare to see any kind of sensible research when it comes to the science part of CS. But following rules makes life easier. Even if rules are bad. That's kind of why conservatism exists as a political idea.

> Always keep in mind that sometimes the only difference between yourself and the person writing the book/blog/article is that they actually wrote it. The fact that their opinions were written down doesn't make them fact. Apply your own mind and experience.

But that difference is actually huge. I think you are downplaying the value of the writing process, assuming the writer is acting in good faith and truly trying to provide the best possible information.

But when you start writing, you start noticing that an idea might not be that good after all. Maybe I need to read more about this? Did I note everything? Does this idea conflict with that other thing I just wrote? Which is correct? And the list goes on. When you structure all your thoughts as written text, it is easier to detect conflicting ideas and mistakes. Not all writers are that good, but you should understand what I mean.

  • Writing is an excellent way to determine your opinions. There's a large gap between the ideas in your head and the ones that take shape when you write them down.

This is just how junior and intermediate devs behave. It’s like a goth phase or something.

It goes along with being into BJJ, chess, vim, keto, linters, and “the dominance hierarchy”.

It’s annoying, but most everyone went through it. If you didn’t know better, how could they?

> Who else has had to deal with idiots who froth at the mouth when you exceed an 80-character line margin?

Not once in my 11-year career. But almost every codebase I've worked on has had debilitating maintainability issues because the only principle other engineers seemed to follow was DRY, at the sacrifice of every principle in SOLID.

  • Most code that I clean up is a lot easier to maintain after making it a lot DRYer.

    The point is not about being DRY in itself, though. The point is that the code then has better abstractions which are easy to reason about.

    UB seems to take abstractions much too far, replacing e.g. two lines of very clear code with some clearTotals() abstraction.

    • DRY is about ensuring that the same code doesn't have to change in two places, because the engineer changing it in one place might not know about the other. But so many applications of DRY mindlessly violate the single-responsibility principle and create coupling where there shouldn't be any.
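
      A hypothetical sketch of that failure mode (all names invented): two concerns that merely look alike today get merged, and the first divergent requirement couples them with a flag.

      ```typescript
      // Before: duplication, but each concern can evolve independently.
      function formatInvoiceDate(d: Date): string {
        return d.toISOString().slice(0, 10);
      }
      function formatReportDate(d: Date): string {
        return d.toISOString().slice(0, 10); // same today, different owner
      }

      // After a mindless DRY pass: one function, two responsibilities.
      // Reports later want a local format, so a flag appears, and now
      // invoice code and report code must change in lockstep.
      function formatDate(d: Date, forReport = false): string {
        return forReport
          ? d.toLocaleDateString("en-GB")
          : d.toISOString().slice(0, 10);
      }
      ```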

    • clearTotals() arguably made more sense than other "abstractions", on the grounds that if you have more than one piece of state to reset/initialize, you want to centralize the knowledge of which variables must be set together - otherwise it's too easy to add another piece of state and forget to set it everywhere it should be.
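
      A minimal sketch of the method version (hypothetical names):

      ```typescript
      // One method owns the knowledge of which state must be reset together.
      class ReportGenerator {
        private lineCount = 0;
        private pageTotal = 0;

        clearTotals(): void {
          // Add a new accumulator here and every reset site picks it up;
          // scattered assignments would each have to be found and updated.
          this.lineCount = 0;
          this.pageTotal = 0;
        }
      }
      ```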

      Of course, a method is but one of several ways you could capture that information, and not always the best one.

  • Lucky.

    I've had one which violated DRY and every SOLID principle…

    Well, Liskov might not have been violated, but it was hard to tell amid all the other nonsense in the 120 kloc of copy-pasted god classes, which showed flagrant disregard for things so fundamental that you wouldn't think anyone could even get them wrong. For example, the question of "how are properties defined" was "solved" by an array indexed by named constants; and because it was a god class, which items in that array ever got instantiated depended on which value was passed to the constructor.
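
    As a toy illustration of that anti-pattern (invented names, far tamer than the real thing):

    ```typescript
    // Properties live in an array indexed by named constants, and which
    // slots even exist depends on what was passed to the constructor.
    const NAME = 0, PRICE = 1, WEIGHT = 2;

    class GodThing {
      props: unknown[] = [];

      constructor(kind: "product" | "shipment") {
        this.props[NAME] = "";
        if (kind === "product") this.props[PRICE] = 0;
        if (kind === "shipment") this.props[WEIGHT] = 0;
        // Anyone reading props[WEIGHT] must know which `kind` they hold.
      }
    }
    ```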

    Eventually, I found they'd blindly duplicated an entire file, including my "TODO: deduplicate this method" comments, rather than subtype — and their excuse when called out on this was the access modifier, as if changing "private" to "public" was hard.

Because it's easy to be dogmatic: you don't need to think or consider the consequences or drawbacks, you just follow whatever the Supreme Leader told you to do.

It's incredibly simple to just follow whatever someone tells you to do. Sometimes I wish I could live like that, so I didn't have to fight all the time with the people who do.

On my MacBook Pro M2, with a browser window on one half of the screen and my IDE on the other (a file-tree pane, another pane for my LLM tools, a terminal pane at the bottom), I've never been more pressed for real estate for my actual code-editing pane. Even 80 chars has me scrolling horizontally. Secondary monitors help, but not when you frequently work away from your desk.

  • Coding on a laptop, even a name-drop-tier status shibboleth, is most of your problem. You write code on a 15" screen when you must for physical/location reasons. You shouldn't ever choose to do it or design your workflow around that constraint.

    A 42" 4k TV (got it for $2-300 at the start of the pandemic) gives me four 80-90 column text windows on a mid-tier chromebook. You could not pay me enough to do that same work on a laptop, even a $4k MBP.

    (But yes, even with lots of real estate 80 columns is still a net win)

    • I have a 120" 4K monitor at home, and a 40" 2K. However, that entirely misses the point of my comment, which is that I am frequently away from my desk while working. I'm not sure what point you were trying to make.

  • Why only use half the screen for your IDE? Your brain has to switch anyways so just make your IDE and browser fullscreen windows and alt-tab when needed.

    • In iterative web development the instant feedback is crucial and constantly alt-tabbing is tedious and breaks the flow. When necessary, it's simple to temporarily maximize the window.

>> Always keep in mind that sometimes the only difference between yourself and the person writing the book/blog/article is that they actually wrote it.

Well said!

The worst engineer I worked with was one who believed that if he read it in a book, that opinion trumped anything else. Once he got so flustered he started yelling, “Come back to the discussion when you’ve read 13 books on this topic like me!” And it was something super mundane, like how to organize config files.

Made every engineering planning session a pain in the ass.

For many C#, Java, and C++ engineers, Uncle Bob is their savior and the GoF are the apostles.

Everything should follow SOLID and clean principles and be implemented using design patterns.

  • One of the best things I could do for myself would be to go back in time and tell my younger self not to care so much about the "right" design pattern or the perfectly DRY way to represent a piece of code. I was definitely my own worst enemy for a long time, because I thought SOLID and the GoF design patterns were more important than writing code that is easy to understand without hopping across multiple files, all in case one day the system needed to do something totally different with a new database or filesystem, etc. I started to look for places to add design patterns, rather than letting them develop naturally. Most of the software I built had no need for such heavy abstraction and complexity; I've only had to switch database systems twice in 20 years, and the abstraction did not reduce time or complexity all that much in the end. It definitely wasn't worth the up-front planning compared to just rewriting the sections that directly handled the database.

    Maybe it's a rite of passage to burn yourself badly enough on over-architected solutions that you finally start to understand you don't need all the complexity. Write the code for humans, as simply as you can. Keep large performance issues in mind, but only code around them when they become a problem or are extremely obvious. If anything, it's helped me steer junior developers away from complex code while encouraging them to try it out in their own time. Go ahead and figure things out on your own, but let's not do it on a shared codebase, please?

  • Which is unfortunate, as there are no (legitimate) reasons to write C#, a multi-paradigm language, like this.

FWIW (not a lot), I do believe in a lot of these principles.

For example, even with widescreen monitors, it is still useful to limit line length. Why? Because many people will have multiple source files side-by-side on one of those widescreen monitors, at which point it makes sense for them to not run on indefinitely.

And of course, that is just a guideline, one that I break regularly. However, if it's a method with many args, I'll break the args onto their own lines.
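
For instance (an invented example, not a rule), a call that would overflow the margin reads fine with one argument per line:

```typescript
// Invented helper, just to make the example self-contained.
function createUser(email: string, opts: { admin: boolean }, quotaMb: number) {
  return { email, ...opts, quotaMb };
}

const user = createUser(
  "ada.lovelace@example.com",
  { admin: false },
  1024,
);
```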

However, the overriding concern is that an organisation works to code towards a common style, whatever that may be, so that unfamiliar code is predictable and understandable.

> idiots who froth at the mouth

That seems like an unnecessarily harsh way to refer to people.

  • Clean Code zealots are consistently some of the least likable, least productive, least pragmatic people I have ever worked with. I've had multiple clients where the whole team was threatening to quit unless the CC zealot was fired. And when they were fired, guess what: bugs went down, shipped features went up, and meetings became productive. "Idiots who froth at the mouth" is an understatement, IMO.

    • This too seems like a fairly hardline stance to take. I think it's not surprising that you'd have a hard time collaborating with people you'd refer to as unlikeable, unproductive, unpragmatic, overzealous, frothy idiots.

      It reminds me of the standard joke about veganism: "how do you know someone is vegan? Don't worry they'll tell you"

      It's a very ironic joke, because the people I hear talking about veganism the most are non-vegans complaining about veganism. In this thread too. All I see is people complaining about how dogmatic clean code people are, and I see no examples of that in this thread. The only strong and absolute language I see is from those who are complaining about CC people.

      Bear in mind, I don't have a dog in this fight. I'm not vegan, my methods are sometimes longer than four lines, and I do occasionally write a comment. But if this thread is anything to go by, the clean code folks seem a lot nicer to work with than the reactionaries.

    • I’ve seen someone take a glance at a codebase, declare it too complex, and suggest reading Clean Code.

      The entirety of the complexity was essential, dictated by an external data model used nationwide for interoperability.

      Based on my experience and the OP dialogue, Uncle Bob is a vanity-driven narcissist and an infectious fraud.

> It still blows my mind how dogmatic some people can be about things like this. I don't understand why anyone takes these things as gospel.

I love reading books for different perspectives.

However, I’ve come to despise people who read books and then try to lord their book knowledge over others. These are the people who think that they have the upper hand in every situation because they read some books. They almost always assume you haven’t read them. If you point out that you have also read them, they switch the subject to another set of books they read because they don’t like when someone tries to undermine their book knowledge superiority.

It’s even worse when the person reads books outside of their domain and tries to import that book knowledge into the workplace. The absolute worst manager I had was a guy who read a lot of pop-psychology books and then tried to psychoanalyze each of us according to those books.

> Who else has had to deal with idiots who froth at the mouth when you exceed an 80-character line margin?

Honestly, there's no better indication of a very mediocre developer.

  • My experience is that being fastidious about code formatting is independent of one's ability as a developer. i.e. not a good indicator either way.

    • I’ve noticed the worse someone is in a language, the worse they format it.

      People then develop fastidious code formatting rules because they realize well formatted code is easier to read and extend.

      Then people realize it’s the organization of the code, not the rules themselves. They have preferences, but don’t treat those preferences as “the one true way”.

      So people with fastidious rules are in that middle ground of becoming less bad, and that’s a wide swath of abilities.

    • The best developer that I ever worked with is "on the spectrum."

      His code was really anal.

      I have come to learn that there are no "rules," only heuristics.

    • I've inherited many sloppily formatted code bases that all contained mistakes which blended into the mess. Consistently styled code has the nice property that certain types of mistakes stand out immediately.

      For example, there's a large chunk of code following an if statement, but it's indented the same as the body of the if; a dev overlooks the closing brace and puts new logic in the wrong place. Or there's a nested if statement whose body is indented less than the surrounding code. It's hard to read, and error-prone.
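
      Roughly the shape I mean, as a hypothetical sketch:

      ```typescript
      function applyDiscount(total: number, isMember: boolean): number {
        if (isMember) {
          total *= 0.9;
        }
          // This line runs unconditionally, but its indentation makes it
          // blend into the if body, so new logic gets added in the wrong place.
          console.log("discount check done");
        return total;
      }
      ```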

      I can't imagine a "good" developer putting up with that, although I admit you don't have to be "fastidious" to prevent this type of thing.

    • I can't remember the last time I worked on a team without automatic linting enforced on check-in. Why would people choose to waste time arguing over what can be automated?

The place I used to work at had an "architect" who, for any question about a decision he made, would refer to whatever it was as a "best practice."

He was often quite wrong, and always infuriating.

Well, I mean, they wrote books about it, and one guy had the audacity to call his opinion a "philosophy" even though it's just an arbitrary opinion.

Most of software is about assigning big words and overcomplicated nomenclature to concepts, and these things masquerade as having deeper meaning when in reality it's just some made-up opinion.

Software design is an art. It is not engineering and it is not science. That's why there's so much made-up bullshit. The irony is that we use "art" to solve engineering problems in programming. It's like: OK, we don't actually know the most optimal way to program a solution here, so we make up BS patterns and philosophies. But then we give the BS pattern some crazy, overcomplicated name, like Scientology or "inversion of control", and now everyone thinks it's a formal and legitimate scientific concept.

Well, the cat's out of the bag for Scientology. Not yet for a lot of the BS in software. A "philosophy" is the biggest utter bullshit word for this stuff I've ever seen.

  • But there are typical practices we agree are good: using a VCS, writing tests, writing comments when needed, separating different levels of abstraction, etc. Right? This comes from years of common experience in software.

    Over time we find patterns, common issues, and ways to fix them. It doesn't have to be strict patterns, but overall strategies.

    If we don't do that then it's just vibes right? Where's the engineering part?

    • Nothing is set in stone. If you have strong typing and program with pure functions and immutability, while utilizing union types and matching to the full extent, you typically need very few unit tests for your code to work; you just need integration and e2e tests.
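
      A sketch of what I mean (hypothetical domain): a union type plus exhaustive matching lets the compiler rule out whole classes of bugs you would otherwise unit-test for.

      ```typescript
      type PaymentResult =
        | { kind: "approved"; receiptId: string }
        | { kind: "declined"; reason: string }
        | { kind: "error"; retryable: boolean };

      function describeResult(r: PaymentResult): string {
        switch (r.kind) {
          case "approved": return `receipt ${r.receiptId}`;
          case "declined": return `declined: ${r.reason}`;
          case "error":    return r.retryable ? "try again" : "give up";
          default: {
            // Compile-time exhaustiveness: add a new variant and forget a
            // case, and this assignment becomes a type error. No test needed.
            const unreachable: never = r;
            return unreachable;
          }
        }
      }
      ```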

      I write very few unit tests, because a coding style that employs static checks as viciously as possible doesn't necessitate them.

      I would say only 30 percent of patterns are good and shared. The other stuff is just artistry and opinion, like method-name length, comments, or OOP.

  • The issue is that programming is communication, and communication is indeed a form of art. Programming is not just giving instructions to machines; if that were the case, we would happily be using binary code. So we have two dimensions: the first is giving the machine its instructions, and the second is making those instructions understandable by humans, including ourselves.

    • No, there are other dimensions to coding; communication is only ONE of them.

      There are also optimization and modularity. All three of these dimensions are intimately tied and correlated.

Square people are never going to agree with cool people. You can be cool and code some monstrosity, or you can be square and say "we have to rebuild this entire project from scratch" every time you see a long method.