Comment by danShumway
5 years ago
Huh. There's something to this.
I've often wondered why certain people feel so attached to static typing when in my experience it's rarely the primary source of bugs in any of the codebases I work with.
But it's true, I do generally feel like a codebase that's so complex or fractured that no one can understand any sizable chunk of it is already going to be a disaster regardless of what kind of typing it uses. I don't hate microservices; they're often the right decision, but I feel they're almost always more complicated than a monolith would be. And I do regularly end up just reading implementation code, even in 3rd-party libraries that I use. In fact, with some libraries, reading the source is quicker and more reliable than trying to find the relevant documentation.
I wouldn't extrapolate too much based on that, but it's interesting to hear someone make those connections.
I'll add my voice to your parent.
Statically typed languages, and languages that force you to be explicit, are awesome for going into a codebase you have never seen and understanding things. You can literally just let your IDE show you everything: every question you have is one Ctrl-click away, and if proper abstraction (à la Clean Code) has been used, you can ignore large swaths of code entirely and look only at what you need.
Naming is awesome too, and my current and previous codebases were both really good at this (both were/are mixes of monolith and microservices). I never really care where a file is located. I know quite a few coders who want to find things via the folder tree; I just use the keyboard shortcut to open a file by name and start guessing. Usually the first or second guess finds what I need, because things are named well and consistently.
Because we use proper abstractions, I can usually see at first glance what the overall logic is. If I need to know how a specific part works in detail, I can easily drill down via Ctrl-click. With a large inlined blob of code, I would have a really hard time. Do I skip from line 1356 to line 1781, or is that too far? Oh, this is JavaScript, and I don't even know whether this variable is a string, a number, or both depending on where in the code we are; or maybe it's an object that's used as a map?
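To make that concrete, here's a minimal hypothetical sketch (made-up names) of the kind of type drift I mean, and how a declared type pins it down:

```typescript
// With `any`, TypeScript behaves like plain JavaScript: the variable's
// type can silently change as the code runs.
function track(id: any) {
  if (typeof id === "number") {
    id = "user-" + id; // now it's a string
  }
  // ...hundreds of lines later: is `id` a string, a number, or an
  // object used as a map? Only reading every call site tells you.
}

// With a declared type, the answer is one hover or Ctrl-click away:
function trackTyped(id: number | string): string {
  return typeof id === "number" ? `user-${id}` : id;
}
```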
The whole thing is too big to keep in my head all the time, and I will probably not need to touch the same piece of code over and over; instead, I will move from one corner to the next, and then to another, over the course of a few weeks to months.
That's why our frontend code is being converted to TypeScript, and why our naming (and other) conventions make even our JavaScript code bearable.
Is your backend Java or C#? Your IDE description sounds like Java w/ Eclipse or IntelliJ, or C# w/ Visual Studio. My experience is similar to yours. The "discoverability" of a large codebase is greatly increased by combining language with tooling. If you use Java with Maven-like dependency management (you can use Gradle these days if "allergic" to Maven's pom.xml), the IDE will usually automatically download and "hook up" source code. It is ridiculous how fast you can move between layers of (a) project code, (b) in-house libraries, (c) open-source libraries, and (d) commercial closed-source libraries (decompiled on the fly, in 2021!). (I assume all of the same can be done for C# w/ Visual Studio.)
To be fair, when I started my career, I worked on a massive C project that was pretty easy to navigate because it was a mono-repo with everything in one place. Ctags could index 99% of what you needed, and the macros weren't out of control. (Part of the project was also C++, but written in the style of career C programmers who only wanted namespaces and trivial generics like vector and map! Again, a huge codebase that was very simple to navigate.)
I'm still surprised in 2021 when someone asks me to move a Java class to a different package during a code review. My inner monologue says: "Really... do they still use a file browser? Just use the IDE to find it!"
> I've often wondered why certain people feel so attached to static typing when in my experience it's rarely the primary source of bugs in any of the codebases I work with.
That's precisely why people are attached to it: because it's rarely a source of bugs. :-)
Ha! Good catch. :)
[ separate answer for microservices ]
Yeah, monoliths are frequently easier to reason about, simply because you have fewer entities. The big win of microservices (IMHO) isn't "reason about"; it's that they are a good way of getting more performance out of your total system IFF various parts of the system have different scaling characteristics.
Say your monolith is composed of a bunch of things, where most parts require resources (CPU/RAM/time) on the order of O(n) (for n being the number of active requests), but one or a few parts are O(n log n), or O(n) with a higher constant factor. Then those "uses more resources" parts set the scaling limit for each instance of the monolith, and you need to deploy more monoliths to cope with a larger load.
On the other hand, in a microservice architecture, you can deploy more instances of just the microservices that need it. In total, this can lead to more things getting done with fewer resources.
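A back-of-the-envelope sketch (all numbers made up) of why that can come out cheaper:

```typescript
// Hypothetical capacities: per instance, component A can serve 100 req/s,
// but component B only 25 req/s.
const capacityA = 100;
const capacityB = 25;
const targetLoad = 100; // req/s we need to serve

// Monolith: each replica contains both components, so its throughput is
// limited by the slowest one, and scaling replicates A and B together.
const monolithReplicas = Math.ceil(targetLoad / Math.min(capacityA, capacityB));
// = 4 -> four copies of A (three of them mostly idle) plus four copies of B

// Microservices: scale each service to its own bottleneck.
const replicasA = Math.ceil(targetLoad / capacityA); // = 1
const replicasB = Math.ceil(targetLoad / capacityB); // = 4
// -> one copy of A and four of B: same throughput, fewer resources overall

console.log({ monolithReplicas, replicasA, replicasB });
```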
But, that also requires you to have your microservices cut out in suitable sizes, which requires you to at one point have understood the system well enough to cut them apart.
And that, in turn, may lead to better barriers between microservices, meaning that each microservice MAY be easier to understand in isolation.
> But, that also requires you to have your microservices cut out in suitable sizes, which requires you to at one point have understood the system well enough to cut them apart.
Sure, but that’s not particularly hard; it’s been basic system analysis since before “microservices” or even “service-oriented architecture” was a thing. Basic 70s-era Yourdon-style structured analysis produces pretty much exactly what you need to determine service boundaries. (And while this wasn’t the 1970s way of using it, it can be applied incrementally, in a story-by-story agile fashion, to build up a system, as well as used for big upfront design or for working back from the physical design of an existing system to its logical requirements.)
(It’s also a process that very heavily leverages locality of knowledge within processes and flows, so it’s quite straightforward to carry out without ever having to hold the whole system in your head.)
Yep, there's no real magic here. A (successful) transition to microservices does force some understanding, but such a transition is not a requirement for gaining that insight.
And if all parts of your system scale identically, it may be better to scale it by replicating monoliths.
Another POSSIBLE win is when you start having multiple systems sharing the same component (say, authentication and/or authorization), at which point there's something to be said for breaking at least that bit out of every monolith and putting it in a single place.
I don't really care about the static/dynamic typing spectrum, I care about the strong/weak typing spectrum.
At any point, will the code interpret a data item according to the type it was created with?
A prime example of "weakly typed" is when you can add "12" and 34 to get either "1234" or 46.
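A quick illustration (JavaScript semantics, which TypeScript deliberately keeps for `+`):

```typescript
// Weak typing in action: the number is silently coerced to a string
// instead of being interpreted according to the type it was created with.
const result = "12" + 34;
console.log(result); // "1234", not 46

// A strongly typed dynamic language such as Python raises a TypeError
// for the equivalent expression rather than guessing.
```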
This is an interesting distinction. I confess that I frequently use the two pairs interchangeably.
I mean, in some respects, "dynamic typing" is "type the data" and "static typing" is "type the variable".
In both cases, there's the possibility of doing type propagation. But if you somehow manage to pass two floats to an addition that a C compiler thinks is an integer addition, you WILL have a bad day. Whereas in Common Lisp, the actual passed-in values are typed (floats are usually boxed; integers, if they're fixnums, are usually tagged, with a few bits less than you would expect).
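A hypothetical TypeScript sketch of that contrast: the annotation is "type the variable" and is erased before runtime, while the value's runtime tag is "type the data" (loosely analogous to Lisp's tagged and boxed values):

```typescript
// "Type the data": every JavaScript value carries a runtime tag, which is
// what `typeof` reads.
function describe(v: unknown): string {
  return typeof v; // "number", "string", "object", ...
}
console.log(describe(46));     // "number"
console.log(describe("1234")); // "string"

// "Type the variable": the annotations below exist only at compile time.
// If you defeat the checker (here via `any`), the runtime will not misread
// memory the way C would; the value's own tag still wins.
function addDeclared(a: number, b: number): number {
  return a + b;
}
const sneaky: any = "12";
console.log(addDeclared(sneaky, 34)); // "1234": the data's type prevailed
```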