
Comment by dxdm

3 days ago

> All of this arises from your failure to question this basic assumption though, doesn't it?

Haha, no. "All of this" is a scenario I consider quite realistic in terms of what needs to happen. The question is, how should you split this up, if at all?

Bear in mind that these concerns will come into play in other ways too, with other requests, serving customers and internal users. There are enough different concerns at different levels of abstraction that you might need different domain experts to develop and maintain them, maybe using different programming languages, depending on who you can get. There will definitely be multiple teams. It may be beneficial to deploy and scale some functions independently; they have different load and availability requirements.

Of course you can slice things differently. Which assumptions have you questioned recently? I think you've been given some material. No need to be rude.

I don't think I was rude. You're overcomplicating the architecture here for no good reason. It might be common to do so, but that doesn't make it good practice. And ultimately I think it's your job as a professional to question it, which makes not doing so a form of 'failure'. Sorry if that seems harsh; I'm sharing what I believe to be genuine and valuable wisdom.

Happy to discuss why you think this is all necessary. Open to questioning assumptions of my own too, if you have specifics.

As it is, you're just quoting microservices dogma. Your auth service doesn't need a different programming language from your invoicing system. Nor does it need to be scaled independently. Why would it?

  • Diagnosing "failure" in other people is indeed rude, even if you privately consider it true and an appropriate characterization. It's worse if you do that after jumping to the conclusion that somebody else has not considered something, simply because they hold a different opinion than you do. At least, that's my conclusion about why you wrote that. (And this paragraph is my return offering of genuine and valuable wisdom.)

    Of course you can keep everything together, in just a few large parts, or even a monolith. I've not said otherwise.

    My point is that "architecture" is orthogonal to the question of "monolith vs. separate services"; the difference there is not one of architecture, but one of cohesion and flexibility.

    If you do things right, even inside a monolith you will have things clearly separated into different concerns, with clean interfaces. There are natural service boundaries in your code. (If there aren't, in a system like this, you and the business are in for a world of pain.)
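    To make that concrete, here's a minimal sketch of such a boundary living inside a single process (Go, with made-up names; nothing here is specific to any particular stack):

    ```go
    // Hypothetical sketch: a service boundary inside one process.
    package invoicing

    import (
        "context"
        "fmt"
    )

    // Service is the boundary the rest of the monolith talks to.
    // Callers depend on this interface, not on how invoices get made.
    type Service interface {
        CreateInvoice(ctx context.Context, customerID string, amountCents int64) (invoiceID string, err error)
    }

    // inProcess is a trivial in-process implementation; a real one would
    // talk to the invoicing tables or an external provider.
    type inProcess struct{ nextID int64 }

    func (s *inProcess) CreateInvoice(ctx context.Context, customerID string, amountCents int64) (string, error) {
        s.nextID++
        return fmt.Sprintf("inv-%d", s.nextID), nil
    }
    ```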

    The idea is that you can put network IO between these service boundaries, to trade off cohesion and speed at these boundaries for flexibility between them, which can make the system easier to work with.
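    For instance, continuing the made-up sketch above (endpoint and payload invented), the same boundary can be satisfied by an adapter that goes over the network; the callers don't change, only the wiring does:

    ```go
    // Hypothetical sketch: the same invoicing.Service boundary, backed by HTTP.
    package invoicing

    import (
        "bytes"
        "context"
        "encoding/json"
        "net/http"
    )

    type httpClient struct {
        baseURL string // e.g. "http://invoicing.internal" (made up)
        client  *http.Client
    }

    func (h *httpClient) CreateInvoice(ctx context.Context, customerID string, amountCents int64) (string, error) {
        payload, err := json.Marshal(map[string]any{
            "customer_id":  customerID,
            "amount_cents": amountCents,
        })
        if err != nil {
            return "", err
        }
        req, err := http.NewRequestWithContext(ctx, http.MethodPost, h.baseURL+"/invoices", bytes.NewReader(payload))
        if err != nil {
            return "", err
        }
        req.Header.Set("Content-Type", "application/json")
        resp, err := h.client.Do(req)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        var out struct {
            ID string `json:"id"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
            return "", err
        }
        return out.ID, nil
    }
    ```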

    Different parts of your system will have different requirements, in terms of criticality, performance and availability; some need more compute, others do more IO, are busy at different times, talk to different special or less special databases. This means they may have different sweet spots for various trade-offs when developing and running them.

    For example, you can (can!) use different languages to implement critical components or less critical ones, which gives you a bigger pool to hire competent developers from; competent as developers, but also in the respective business domain. This can help get your company off the ground.

    (Your IoT and bike people are comfortable in Rust. Payments is doing Python, because they're used to waiting, and also they are the people you found who actually know not to use floats for money and all the other secrets.)

    You can scale up one part of your system that needs fast compute without also paying for the part that needs a lot of memory, or some parts of your system can run on cheap spot instances, while others benefit from a more stable environment.

    You can deploy your BI service without taking down everything when the new initialization code starts crash-looping.

    (You recover quickly, but in the meantime a lot of your IoT boxes got lonely and are now trying to reconnect, which triggers a stampede on your monolith; you need to scale up quickly to keep the important functions running, but the invoicing code fetches a WSDL file from a slow government SOAP service, which is now down, and your cache entry's TTL has expired, and you don't even need more invoicing right now... The point is, you have a big system, things happen, and fault lines between components are useful.)
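    As an aside, and purely as a sketch of softening that particular WSDL wrinkle (Go again, all names invented): keep the last known-good copy around and fall back to it when the refresh fails, instead of treating an expired TTL as having nothing.

    ```go
    // Hypothetical sketch: serve a stale WSDL when the upstream fetch fails,
    // rather than letting an expired TTL take invoicing down with the SOAP service.
    package invoicing

    import (
        "context"
        "sync"
        "time"
    )

    type wsdlCache struct {
        mu        sync.Mutex
        data      []byte
        fetchedAt time.Time
        ttl       time.Duration
        fetch     func(ctx context.Context) ([]byte, error) // calls the slow SOAP endpoint
    }

    func (c *wsdlCache) Get(ctx context.Context) ([]byte, error) {
        c.mu.Lock()
        defer c.mu.Unlock()
        if c.data != nil && time.Since(c.fetchedAt) < c.ttl {
            return c.data, nil // still fresh
        }
        fresh, err := c.fetch(ctx)
        if err != nil {
            if c.data != nil {
                return c.data, nil // stale, but better than failing every invoice
            }
            return nil, err
        }
        c.data, c.fetchedAt = fresh, time.Now()
        return fresh, nil
    }
    ```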

    It's a trade-off, in the end.

    Do you need 15 services? You already have them. They're not even "micro", just each minding their own part of the business domain. But do they all need their own self-contained server? Probably not, but you might be better off with more than a single monolith.

    But I wouldn't automatically bat an eye if I found that somebody had separated these whatever-teen services. I don't see that as a grievous error per se, but potentially as the result of valid decisions and trade-offs. The real job is to properly separate these concerns, whether they then live in a monolith or not.

    And that's why that request may well touch so many services.