Comment by jugg1es
5 years ago
Finally! I'm glad to hear I'm not the only one. I've gone against 'Clean Code' zealots that end up writing painfully warped abstractions in the effort to adhere to what is in this book. It's OK to duplicate code in places where the abstractions are far enough apart that the alternative is worse. I've had developers use the 'partial' feature in C# to meet Martin's length restrictions, to the point where I have to look through 10-15 files to see the full class. The examples in this post are excellent illustrations of the flaws in Martin's absolutism.
You were never alone, Juggles. We've been here with you the whole time.
I have witnessed more people bend over backwards and do the most insane things in the name of avoiding "Uncle Bob's" baleful stare.
It turns out that following "Uncle Sassy's" rules will get you a lot further.
1. Understand your problem fully
2. Understand your constraints fully
3. Understand not just where you are but where you are headed
4. Write code that takes the above 3 into account and make sensible decisions. When something feels wrong ... don't do it.
Quality issues are far more often planning, product management, or strategic issues than something as easily remedied as the code itself.
"How do you develop good software? First, be a good software developer. Then develop some software."
The problem with all these lists is that they require a sense of judgement that can only be learnt from experience, never from checklists. That's why Uncle Bob's advice is simultaneously so correct, and yet so dangerous with the wrong fingers on the keyboard.
Agreed.
That's why my advice to junior programmers is: pay attention to how you feel while working on your project, especially when you're getting irritated. In particular:
- When you feel you're just mindlessly repeating the same thing over and over, with minor changes - there's probably a structure to your code that you're manually unrolling.
- As you spend time figuring out what a piece of code does, try to make note of what made gaining understanding hard, and what could be done to make it easier. Similarly, when modifying/extending code, or writing tests, make note of what took most effort.
- When you fix a bug, spend some time thinking what caused it, and how could the code be rewritten to make similar bugs impossible to happen (or at least very hard to introduce accidentally).
Not everything that annoys you is a problem with the code (especially when one's unfamiliar with the codebase or the tooling, the annoyance tends to come from lack of understanding). Not everything should be fixed, even if it's obviously a code smell. But I found that, when I paid attention to those negative feelings, I eventually learned ways to avoid them up front - various heuristics that yield code which is easier to understand and has fewer bugs.
As for following advice from books, I think the best way is to test the advice given by just applying it (whether in a real or purpose-built toy project) and, again, observing whether sticking to it makes you more or less angry over time (and why). Code is an incredibly flexible medium of expression - but it's not infinitely flexible. It will push back on you when you're doing things the wrong way.
3 replies →
Yeah to some degree. I am in that 25 years of experience range. The software I write today looks much more like year 1 than year 12. The questions I ask in meetings I would have considered "silly questions" 10 years ago. Turns out there was a lot of common sense I was talked out of along the way.
Most people already know what makes sense. It's the absurdity of office culture, JR/SR titles, and perverse incentives that convince them to walk in the exact opposite direction. Uncle Bob is the culmination of that absurdity. Codified instructions that are easily digested by the lemmings on their way to the cliff's edge.
The profession needs a stronger culture of apprenticeship.
In between learning the principles incorrectly from books, and learning them inefficiently at the school of hard knocks, there's a middle path of learning them from a decent mentor.
2 replies →
I've seen juniors with better judgment than "seniors".
But they were not in the same comp bracket. And I don't think they gravitated in the same ecosystems so to speak.
The right advice to give new hires, especially junior ones, is to explain to them that in order to have a good first PR they should read this Wikipedia page first:
https://en.wikipedia.org/wiki/Code_smell
Also helpful are code guidelines like the one that Google made for Python:
https://google.github.io/styleguide/pyguide.html
Then when their first PR opens up, you can just point to the places where they didn't quite get it right and everyone learns faster. Mentorship helps too, but much of software is self-learning and an hour a week with a mentor doesn't change that.
I've also never agreed completely with Uncle Bob. I was an OOP zealot for maybe a decade, and now I'm a Rust convert. The biggest "feature" of Rust is that it probably brought semi-functional concepts to the "OOP masses." I found that, with Rust, I spent far more time solving the problem at hand...
Instead of solving how I am going to solve the problem at hand ("Clean Coding"). What a fucking waste of time, my brain power, and my lifetime keystrokes[1].
I'm starting to see that OOP is more suited to programming literal business logic. The best use for the tool is when you actually have "Person", "Customer", and "Employee" entities that have to follow some form of business rules.
In contradiction to your "Uncle Sassy's" rules, I'm starting to understand where "Uncle Beck" was coming from:
1. Make it work.
2. Make it right.
3. Make it fast.
The amount of understanding that you can garner from making something work leads very strongly into figuring out the best way to make it right. And you shouldn't be making anything fast unless you have a profiler and other measurements telling you to do so.
"Clean Coding" just perpetuates all the broken promises of OOP.
[1]: https://www.hanselman.com/blog/do-they-deserve-the-gift-of-y...
Simula, arguably the first (or at least one of the earliest) OOP languages, was written to simulate industrial processes, where each object was one machine (or station, or similar) in a manufacturing chain.
So, yes, it was very much designed for when you have entities interacting, each entity modeled by a class, and then having one (or more) object instantiations of that class interacting with each other.
> 1. Understand your problem fully
> 2. Understand your constraints fully
These two fall under requirements gathering. It's so often forgotten that software has a specific purpose, a specific set of things it needs to do, and that it should be crafted with those in mind.
> 3. Understand not just where you are but where you are headed
And this is the part that breaks down so often. Because software is simultaneously so easy and so hard to change, people fall into traps both left and right, assuming some dimension of extensibility that never turns out to be important, or assuming something is totally constant when it is not.
I think the best advice here is that YAGNI, don't add functionality for extension unless your requirements gathering suggests you are going to need it. If you have experience building a thing, your spider senses will perk up. If you don't have experience building the thing, can you get some people on your team that do? Or at least ask them? If that is not possible, you want to prototype and fail fast. Be prepared to junk some code along the way.
If you start out not knowing any of these things, and also never junking any code along the way, what are the actual odds you got it right?
>These two fall under requirements gathering. It's so often forgotten that software has a specific purpose, a specific set of things it needs to do, and that it should be crafted with those in mind.
I wish more developers would actually gather requirements and check if the proposed solution actually solves whatever they are trying to do.
I think part of the problem is that often we don't use what we work on, so we focus too much on the technical details and forget what the user actually needs and what workflow would be better.
In my previous job, clients were always asking for changes or new features (they paid dev hours for it) and would come with a solution. But I always asked what the actual problem was, and many times there was a solution that solved the problem in a better way.
>Write code that takes the above 3 into account and make sensible decisions. When something feels wrong ... don't do it.
The problem is that people often need specifics to guide them when they're less experienced. Something that "feels wrong" is usually due to vast experience being incorporated into your subconscious aesthetic judgement. But you can't rely on your subconscious until you've had enough trials to hone your senses. Hard rules can be and often are overapplied, but it's usually better than the opposite case of someone without good judgement attempting to make unguided judgement calls.
You are right, but also I think the problem discussed in the article is that some of these hard rules are questionable. DRY for example: as a hard rule it leads to overly complex and hard to maintain code because of bad and/or useless abstractions everywhere (as illustrated in TFA). It needs either good experience to sense if they "feel good" like you say, or otherwise proven repetitions to reveal a relevant abstraction.
> and make sensible decisions
well there goes the entire tech industry
My last company was very into Clean Code, to the point where all new hires were expected to do a book club on it.
My personal take away was that there were a few good ideas, all horribly mangled. The most painful one I remember was his treatment of the Law of Demeter, which, as I recall, was so shallow that he didn't even thoroughly explain what the law was trying to accomplish. (Long story short, bounded contexts don't mean much if you're allowed to ignore the boundaries.) So most everyone who read the book came to earnestly believe that the Law of Demeter is about period-to-semicolon ratios, proceeded to break chained calls into sequences of intermediate variables, and somehow convinced themselves that doing this was producing tangible business value, and congratulated themselves for substantially improving the long-term maintainability of the code.
Meanwhile, violations of the actual Law of Demeter ran rampant. They just had more semicolons.
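To make that misreading concrete, here's a hypothetical before-and-after in C# (all types invented for illustration):

    // Invented types, just for illustration.
    public record Address(string City);
    public record Customer(Address Address);
    public record Order(Customer Customer);

    public static class Demo
    {
        // "Before": three periods in one expression.
        public static string CityChained(Order order) =>
            order.Customer.Address.City;

        // "After": the same coupling, the same ignored boundary,
        // just spread across more semicolons.
        public static string CityUnchained(Order order)
        {
            Customer customer = order.Customer;
            Address address = customer.Address;
            return address.City;
        }
    }

Both versions let the caller reach three objects deep; neither respects the boundary the law is actually about.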
On that note, I've never seen an explanation of Law of Demeter that made any kind of sense to me. Both the descriptions I read and the actual uses I've seen boiled down to the type of transformation you just described, which is very much pointless.
> Long story short, bounded contexts don't mean much if you're allowed to ignore the boundaries.
I'd like to read more. Do you know of a source that covers this properly?
> Law of Demeter
"Don't go digging into objects" pretty much.
Talk to directly linked objects and tell them what you need done, and let them deal with their linked objects. Don't assume that you know what is and always will be involved in doing something on dependent objects of the one you're interacting with.
E.g. let's say you have a Shipment object that contains information about something that is to be shipped somewhere. If you want to change the delivery address, you should consider telling the shipment to do that rather than exposing an Address and letting clients muck around with it directly, because the latter means that if you later need extra logic when the delivery address changes (e.g. you decide to automate your customs declarations, and they need to change if the destination country changes; or delivery costs need to be updated), there's a chance the changes leak all over the place.
You'll of course, as always, find people who take this way too far. But the general principle is pretty much just to consider where it makes sense to hide linked objects behind a special-purpose interface vs. exposing them to clients.
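A minimal C# sketch of that Shipment example (all names invented): callers tell the shipment what they want done, and any follow-on logic for an address change stays in one place.

    public record Address(string Street, string City, string Country);

    public class Shipment
    {
        private Address _deliveryAddress;

        public Shipment(Address deliveryAddress) =>
            _deliveryAddress = deliveryAddress;

        // Tell, don't ask: clients never touch the Address directly.
        public void ChangeDeliveryAddress(Address newAddress)
        {
            bool countryChanged = newAddress.Country != _deliveryAddress.Country;
            _deliveryAddress = newAddress;

            if (countryChanged)
            {
                // New requirements land here instead of leaking to every
                // caller: redo customs declarations, update delivery costs...
            }
        }
    }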
12 replies →
If you really want to dig into it, perhaps a book on domain-driven design? That's where I pulled the term "bounded context" from.
My own personal oversimplification, probably grossly misleading, is that DDD is what you get when you take the Law of Demeter and run as far with it as you can.
1 reply →
Three things that go well together:
Agree that the transformation described is pointless.
A more interesting statement, but I am not sure it is exactly equivalent to the law of Demeter:
Distinguish first between immutable data structures (and I'd group lambdas with them), and objects. An Object is something more than just a mutable data structure; one wants to also fold in the idea that some of these objects exist in a global namespace, providing named mutable state to the entire rest of the program. And to the extent that one thinks about threads, one thinks about objects as providing a shared-state multithreading story that requires synchronization, and all of that.
Given that distinction, one has a model of an application as kind of a factory floor, there are widgets (data structures and side-effect-free functions) flowing between Machines (big-o Objects) which process them, translate them, and perform side-effecting I/O and such.
Quasi-law-of-Demeter: in computing you have the power to also send a Machine down a conveyor belt, and build other Machines which can respond to that.[1] This is a tremendous power and it comes with tremendous responsibility. Think a system which has something like "Hey, we're gonna have an Application store a StorageMechanism and then in the live application we can swap out, without rebooting, a MySQL StorageMechanism for a local SQLite Storage Mechanism, or a MeticulouslyLoggedMySQL Storage Mechanism which is exactly like the MySQL one except it also logs every little thing it does to stdout. So when our application is misbehaving we can turn on logging while it's misbehaving and if those logs aren't enough we can at least sever the connection with the live database and start up a new node while we debug the old one and it thinks it's still doing some useful work."
The signature of this is being identified by this quasi-Law-of-Demeter as this "myApplication.getStorageMechanism().saveRecord(myRecord)" chaining. The problem is not the chaining itself; the idea would be just as wrong with the more verbose "StorageMechanism s = myApp.getStorageMechanism(); s.saveRecord(myRecord)" type of flow. The problem is just that this superpower is really quite powerful and YAGNI principles apply here: you probably don't need the ability for an application to hot-swap storage mechanisms this way.[2]
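A rough C# sketch of the two shapes (all names invented): exposing the Machine makes the hot-swap superpower part of your public contract, while wrapping it keeps that decision private.

    public record Record(int Id, string Payload);

    public interface IStorageMechanism
    {
        void SaveRecord(Record record);
    }

    public class Application
    {
        private IStorageMechanism _storage;
        public Application(IStorageMechanism storage) => _storage = storage;

        // Shape 1: hand out the Machine itself. Callers write
        //   app.GetStorageMechanism().SaveRecord(record);
        // and swappable storage is now part of the contract.
        public IStorageMechanism GetStorageMechanism() => _storage;

        // Shape 2: tell, don't ask. Callers write
        //   app.SaveRecord(record);
        // and whether storage is swappable stays an internal decision.
        public void SaveRecord(Record record) => _storage.SaveRecord(record);
    }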
Bounded contexts[3] are kind of a red herring here, they are extremely handy but I would not apply the concept in this context.
1. FWIW this idea is being shamelessly stolen from Haskell where the conveyor belt model is an "Arrow" approach to computing and the idea that a machine can flow down a conveyor belt requires some extra structure, "ArrowApply", which is precisely equivalent to a Monad. So the quasi-law-of-Demeter actually says "avoid monads when possible", hah.
2. And of course you may run into an exception to it and that's fine, if you are aware of what you are doing.
3. Put simply, a bounded context is the programming idea of "namespace" -- a space in which the same terms/words/names have a different meaning than in some other space -- applied to business-level speech. Domain-driven design is basically saying "the words that users use to describe the application should also be the words that programmers use to describe it." So like in original-Twitter the posts were not called tweets, but now that this is the common name for them, DDD says "you should absolutely create a migration from the StatusUpdate table to the Tweet table; this will save you incalculable time in the long run, because a programmer may start to think of StatusUpdates as having some attributes which users don't associate with Tweets, while users might think of Tweets as having other properties, like 'the tweet I am replying to', which programmers don't think StatusUpdates should have... and if you're not talking the same language then every single interaction consists of friction between you both."
The reason we need bounded contexts is that your larger userbase might consist both of Accountants for whom a "Project" means ABC, and Managers for whom a "Project" means DEF, and if you try to jam those both into the Project table because they happen to share a name, you're gonna get hurt real bad.
In turn, DDD suggests that once you can identify where those namespace boundaries seem to exist in your domain, those make good module boundaries, since modules are the namespaces of our software world. And if, say, you're doing microservices, instead of pursuing the "strong entity" level of ultra-fine granularity -- "every term used in my domain deserves its own microservice" -- try coarser-graining by module boundaries and bounded contexts, creating a "mini-lith" rather than nanoservices that each manage one term of the domain... so the wisdom goes.
I love how this is clearly a contextual recommendation. I'm not a software developer, but a data scientist. In pandas, writing your manipulations in this chained-methods fashion is highly encouraged IMO. It's even called "pandorable" code.
The latter example (without the redundant assignments) is preferred by people who do a lot of line-by-line debugging. While most IDEs allow you to set a breakpoint in the middle of an expression, that's still more complicated and error prone than setting one for a line.
I've been on a team that outlawed method chaining specifically because it was more difficult to debug. Even though I'm more of a method-chainer myself, I have taken to writing unchained code when I am working on a larger team.
An unchained version, like the sketch below, is undeniably easier to step-through debug than the chained version.
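For instance, with plain LINQ (the data is invented for illustration):

    using System.Collections.Generic;
    using System.Linq;

    var people = new List<(string Name, int Age)>
    {
        ("Ada", 36), ("Bob", 25), ("Cy", 41)
    };

    // Chained: one expression; a line breakpoint covers every step.
    var chained = people.Where(p => p.Age > 30).OrderBy(p => p.Name).ToList();

    // Unchained: each step is its own line and variable, so you can
    // break on any of them and inspect the intermediate result.
    var filtered = people.Where(p => p.Age > 30);
    var ordered = filtered.OrderBy(p => p.Name);
    var unchained = ordered.ToList();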
1 reply →
TBH, the only context where I've seen people express a strong preference for the non-chained option is under the charismatic influence of Uncle Bob's baleful stare.
Otherwise, it seems opinions typically vary between a strong preference for chaining, and rather aggressive feelings of ¯\_(ツ)_/¯
Every time that I see a builder pattern, I see a failure in adopting modern programming languages. Use named parameters, for f*cks sake!
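For example, in C#, optional and named parameters can often replace a hand-rolled builder outright (a sketch, with invented names):

    public record Server(string Host, int Port, bool Tls);

    public static class Servers
    {
        // Optional parameters with defaults; call sites can name them.
        public static Server Create(string host = "127.0.0.1", int port = 80, bool tls = false)
            => new(host, port, tls);
    }

    // Usage reads like the builder chain, without writing a builder class:
    // var server = Servers.Create(host: "localhost", port: 8080, tls: true);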
I think following some ideas in the book but ignoring others, like the ones applicable to the Law of Demeter, can be a recipe for a mess. The book is very opinionated, but if followed well I think it can produce pretty dead-simple code. But at the same time, just like with any coding, experience plays massively into how well code is written. Code can be written well or badly whether you use his methods, follow only some of them, or ignore them entirely.
>his treatment of the Law of Demeter, which, as I recall, was so shallow that he didn't even really even thoroughly explain what the law was trying to accomplish.
oof. I mean, yeah, at least explain what the main thing you’re talking about is about, right? This is a pet peeve.
wow this is a nightmare
> It's OK to duplicate code in places where the abstractions are far enough apart that the alternative is worse.
I don't recall where I picked it up, but the best advice I've heard on this is the "Rule of 3". You don't have a "pattern" to abstract until you reach (at least) three duplicates. ("Two is a coincidence, three is a pattern. Coincidences happen all the time.") I've found it can be a useful rule of thumb to prevent "premature abstraction" (an understandable relative of "premature optimization"). It is surprising sometimes what you find out about the abstraction only when you reach that third duplicate: variables or control-flow decisions, for instance, that seemed constant in the first two places; or a higher-level idea of why the code is duplicated that isn't clear from two very far apart points but is clearer when you can "triangulate" what their center is.
I don't hate the rule of 3. But I think it's missing the point.
You want to extract common code if it's the same now, and will always be the same in the future. If it's not going to be the same and you extract it, you now have the pain of making it do two things, or splitting. But if it is going to be the same and you don't extract it, you have the risk of only updating one copy, and then having the other copy do the wrong thing.
For example, I have a program where one component gets data and writes it to files of a certain format in a certain directory, and another component reads those files and processes the data. The code for deciding where the directory is, and what the columns in the files are, must be the same, otherwise the programs cannot do their job. Even though there are only two uses of that code, it makes sense to extract them.
Once you think about it this way, you see that extraction also serves a documentation function. It says that the two call sites of the shared code are related to each other in some fundamental way.
Taking this approach, I might even extract code that is only used once! In my example, if the files contain dates, or other structured data, then it makes sense to have the matching formatting and parsing functions extracted and placed right next to each other, to highlight the fact that they are intimately related.
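A small C# sketch of that (names invented): the writer and the reader share one module, and the matching format/parse pair sits side by side so it can't silently diverge.

    using System;
    using System.Globalization;
    using System.IO;

    // Shared by the component that writes the files and the one that reads them.
    public static class ExportConvention
    {
        // Both sides resolve the directory through the same function.
        public static string DataDirectory(string root) =>
            Path.Combine(root, "exports", "v1");

        // Matching pair, extracted even though each is used once,
        // to document that they are two halves of one contract.
        public static string FormatDate(DateTime d) =>
            d.ToString("yyyy-MM-dd", CultureInfo.InvariantCulture);

        public static DateTime ParseDate(string s) =>
            DateTime.ParseExact(s, "yyyy-MM-dd", CultureInfo.InvariantCulture);
    }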
> You want to extract common code if it's the same now, and will always be the same in the future.
I suppose I take that as a presumption before the Rule of 3 applies. I generally assume/take for granted that all "exact duplicates" that "will always be the same in future" are going to be a single shared function anyway. The duplication I'm concerned about when I think the Rule of 3 comes into play is the duplicated but diverging. ("I need to do this thing like X does it, but…")
If it's a simple divergence, you can add a flag sometimes, but the Rule of 3 suggests that sometimes duplicating it and diverging it that second time "is just fine" (avoiding potentially "flag soup") until you have a better handle on the pattern for why you are diverging it, what abstraction you might be missing in this code.
The rule of three is a guideline or principle, not a strict rule. There's nothing about it that misses the point. If, from your experience and judgement, the code can be reused, reuse it. Don't duplicate it (copy/paste or write it a second time). If, from your experience and judgement, it oughtn't be reused, but you later see that you were wrong, refactor.
In your example, it's about modularity. The directory logic makes sense as its own module. If you wrote the code that way from the start, and had already decoupled it from the writer, then reuse is obvious. But if the code were tightly coupled (embedded in some fashion) within the writer, then rewriting it would be the obvious step, because reuse wouldn't be practical without refactoring. And unless you can see how to refactor it already, writing it a second time (or third) can help you discover the actual structure you want/need.
As people become more experienced programmers, the good ones at least tend to use modular designs and keep things decoupled, which promotes reuse over copy/paste. In that case, the rule of three gets used less often by them because they have fewer occurrences of real duplication.
1 reply →
You have a point for extracting exact duplicates that you know will remain the same.
But the point of the rule of 3 remains. Humans do a horrible job of abstracting from one or two examples, and the act of encountering an abstraction makes code harder to understand.
> “premature abstraction”
Also known as, “AbstractMagicObjectFactoryBuilderImpl” that builds exactly one (1) factory type that generates exactly (1) object type with no more than 2 options passed into the builder and 0 options passed into the factory. :-)
The Go proverb is "A little copying is better than a little dependency." Also, don't deduplicate text just because it looks the same; deduplicate implementations only when they match in both mechanism (what the code does) and semantic usage (why it is used). Sometimes the same thing is done with different intents, which can naturally diverge, and the premature deduplication is debt.
I'm coming to think that the rule of three is important within a fairly constrained context, but that other principle is worthwhile when you're working across contexts.
For example, when I did work at a microservices shop, I was deeply dissatisfied with the way the shared utility library influenced our code. A lot of what was in there was fairly throw-away and would not have been difficult to copy/paste, even to four or more different locations. And the shared nature of the library meant that any change to it was quite expensive. Technically, maybe, but, more importantly, socially. Any change to some corner of the library needed to be negotiated with every other team that was using that part of the library. The risk of the discussion spiraling away into an interminable series of bikesheddy meetings was always hiding in the shadows. So, if it was possible to leave the library function unchanged and get what you needed with a hack, teams tended to choose the hack. The effects of this phenomenon accumulated, over time, to create quite a mess.
An old senior colleague of mine used to insist that if I added a script to the project, I had to document it on the wiki. So I just didn't add my scripts to the project.
1 reply →
I'd argue if the code was "fairly throw-away" it probably did not meet the "Rule of 3" by the time it was included in the shared library in the first place.
> I don't recall where I picked up from, but the best advice I've heard on this is a "Rule of 3"
I know this as AHA = Avoid Hasty Abstractions:
- https://kentcdodds.com/blog/aha-programming
- https://sandimetz.com/blog/2016/1/20/the-wrong-abstraction
> It's OK to duplicate code in places where the abstractions are far enough apart that the alternative is worse.
Something I’ve mentioned to my direct reports during code reviews: Sometimes, code is duplicated because it just so happens to do something similar.
However, these are independent widgets and changes to one should not affect the other; in other words, not suitable for abstraction.
This type of reasoning requires understanding the problem domain (i.e., the use case and the business functionality).
I've gone through Java code where I needed to open 15 different files, each with a one-line piece of code, just to find out it was a "hello world" class.
I like abstraction as much as the next guy but this is closer to obfuscation than abstraction.
At a previous company, there was a Clean Code OOP zealot. I heard him discussing with another colleague about the need to split up a function because it was too long (it was 10 lines). I said, from the sidelines, "yes, because nothing enhances readability like splitting a 10 line function into 10, 1-line functions". He didn't realize I was being sarcastic and nodded in agreement that it would be much better that way.
Spaghetti code is bad, but lasagna code is just as bad IMO.
There seems to be a lot of overlap between the Clean Coders and the Neo Coders [0]. I wish we could get rid of both.
[0] People who strive for "The One" architecture that will allow any change no matter what. Seriously, abstraction out the wazoo!
Honestly. If you're getting data from a bar code scanner and you think, "we should handle the case where we get data from a hot air balloon!" because ... what if?, you should retire.
I like to say "the machine that does everything, does nothing".
The problem is that `partial` in C# should never even have been considered as a "solution" to write small, maintainable classes. AFAIK partial was introduced for code-behind files, not to structure human written code.
Anyways, you are not alone with that experience - a common mistake I see, no matter what language or framework, is that people fall for the fallacy that "separation into files" is the same as "separation of concerns".
Seriously? That's an abuse of partial and just a way of following the rules without actually following them. That code must have been fun to navigate...
Many years ago I worked on a project that had a hard "no hard coded values" rule, as requested by the customer. The team routinely wrote the equivalent of the snippet below.
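The gist, judging by the char_a discussion in the replies:

    // The letter of the rule, not the spirit: a "constant"
    // that merely names its own value.
    const char CHAR_A = 'a';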
And I couldn’t get my manager to understand why this was a problem.
Clearly it is still a hardcoded value! It fails the requirement. Instead there should be a factory that loads a file that reads in the "a" to the variable, nested down under 6 layers of abstraction spread across a dozen files.
6 replies →
Saw this just the other day. I was at a loss to know what to say. :(
This gets to intent.
What, in the code base, does char_a mean? Is it append_command? Is it used in a..z for all lowercase? Maybe an all_lowercase is needed instead.
I know that it's an "A". I don't know why that matters to the codebase. Now, there are cases where it is obvious to even beginners, and I'm fine with magic characters there, but I've seen cases where people were confused as to why 1000 was in a `for(int i = 0; i<1000; i++)`. Is 1000 arbitrary in that case, or is it based on a defunct business requirement from 2008? Will it break if we change it to 5000 because our computers are faster now?
Can I please forward your contact info to my developer? Maybe you can do a better job convincing him haha ;)
> where I have to look through 10-15 files to see the full class
The Magento 2 codebase is a good example of this. It's both well written and horrible at the same time. Everything is so spread out into constituent technical components, that the code loses the "narrative" of what's going on.
I started OOP in '96 and I was never able to wrap my head around the code these "Clean Code" zealots produced.
Case in point: Bob Martin's "Video Store" example.
My best guess is that clean code, to them, meant as little code on the screen as possible, not necessarily "intention revealing" code; instead, everything is abstracted until it looks like it does nothing.
I have had the experience of trying to understand how a feature in a C++ project worked (both Audacity and Aegisub I think) only to find that I actually could not find where anything was implemented, because everything was just a glue that called another piece of glue.
Also sat in their IRC channel for months and the lead developer was constantly discussing how he'd refactor it to be cleaner but never seemed to add code that did something.
SOLID code is a very misleading name for a technique that seems to shred the code into confetti.
I personally don't feel all that productive spending like half my time just navigating the code rather than actually reading it, but maybe it's just me.
> I personally don't feel all that productive spending like half my time just navigating the code rather than actually reading it
Had this problem at a previous job - main dev before I got there was extremely into the whole "small methods" cult and you'd regularly need to open 5-10 files just to track down what something did. Made life - and the code - a nightmare.
People to people dealt this fate ...
What I find most surprising is that most developers are trying to obey the "rules". Code containing even minuscule duplication must be DRYed; everyone agrees that code must be clean and professional.
Yet it is never enough: bugs keep showing up, and stuff that was written by others is always bad.
I'm starting to think that 'Uncle Bob' and 'Clean Code' zealots are actually harmful, because they prevent people from taking two steps back and thinking about what they are doing - making microservices/components/classes/functions that end up never reused, and making DRY the holy grail.
Personally I am YAGNI > DRY and a lot of times you are not going to need small functions or magic abstractions.
I think the problem is not the book itself, but people thinking that all the rules apply to all the code, all the time. A length restriction is interesting because it makes you think about whether you should split your function into more than one, as you might be doing too much in one place. Now, if splitting will make things worse, then just don't.
In C# and .NET specifically, we end up with a plethora of services when keeping them "human-readable" and short.
A service has 3 "helper" services it calls, which may, in turn, have helper services, or worse, depend on a shared repo project.
The only solution I have found is to move these helpers into their own project, and mark the helpers as internal. This achieves 2 things:
1. The "sub-services" are not confused as stand-alone and only the "main/parent" service can be called. 2. The "module" can now be deployed independently if micro-services ever become a necessity.
I would like feedback on this approach. I do honestly think files over 100 lines long are unreadable trash, and we have achieved a lot by re-using modular services.
We are 1.5 years into a project and our code re-use is sky-rocketing, which allows us to keep errors low.
Of course, a lot of dependencies also make testing difficult, but allow easier mocks if there are no globals.
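A rough sketch of that shape in C# (names invented): the helpers are internal to their project, so only the parent service is visible to callers, and the whole project can later be lifted out as a service of its own.

    // Project: Invoicing (its own assembly)
    namespace Invoicing
    {
        // The only type other projects can see.
        public class InvoiceService
        {
            private readonly TaxHelper _tax = new();
            private readonly PdfHelper _pdf = new();

            public byte[] CreateInvoice(decimal amount) =>
                _pdf.Render(_tax.AddVat(amount));
        }

        // internal: invisible outside this assembly, so the helpers
        // can't be mistaken for stand-alone services.
        internal class TaxHelper
        {
            public decimal AddVat(decimal amount) => amount * 1.25m; // assumed rate
        }

        internal class PdfHelper
        {
            public byte[] Render(decimal total) =>
                System.Text.Encoding.UTF8.GetBytes($"INVOICE {total}"); // placeholder
        }
    }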
>I would like feedback on this approach. I do honestly think files over 100 lines long are unreadable trash
Dunno if this is the feedback you are after, but I would try to not be such an absolutist. There is no reason that a great 100 line long file becomes unreadable trash if you add one line.
I mean feedback on the way to abstract away helper services. As for file length, I realize that this is subjective, and the 100-line number is pulled out of thin air, but extremely long files are generally difficult to read and context gets lost.
1 reply →
Partial classes are an ugly hack to mix human- and machine-generated source code. IMHO they should be avoided.
> I've had developers use the 'partial' feature in C# to meet Martin's length restrictions
That is not the fault of this book or any book. The problem is people treating the guidelines as rituals instead of understanding their purpose.
What do you say to convince someone? It’s tricky to review a large carefully abstracted PR that introduces a bunch of new logic and config with something like: “just copy paste lol”
... and here I was thinking I was alone!
Sometimes you really just do need a 500 line function.
Yes, it's the first working implementation, written before good boundaries are known. After a while the problem becomes familiar, natural conceptual boundaries arise, and that leads to 'factoring'; it shouldn't require 'refactoring' because you prematurely guessed the wrong boundaries.
I'm all for the 100-200 line working version--can't say I've had a 500. I did once have a single SQL query that was about 2 full pages pushing the limits of DB2 (needed multiple PTFs just to execute it)--the size was largely from heuristic scope reductions. In the end, it did something in about 3 minutes that had no previous solution.
Nah mate, you never do. Nor 500 1-liners.