
Comment by spyckie2

1 year ago

Yes, this is the sensible and necessary side of microservices...

Now, move your auth logic to a 3rd party, rewriting all of your auth to do so.

Now, make your database shared across multiple distribution platforms and 12 services (AWS, cloud, Heroku, Tableau).

When one of your 15 services goes offline for temporary maintenance, for some reason your entire website goes down.
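
One common mechanism behind that "one service down, whole site down" failure is a blocking call to a dependency with no timeout and no fallback. Here is a minimal Go sketch, assuming a hypothetical internal "recommendations" service; the URL and names are made up for illustration:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"
	"time"
)

// renderHome builds the homepage. It depends on a hypothetical
// "recommendations" service; the URL below is invented for this sketch.
func renderHome(w http.ResponseWriter, r *http.Request) {
	// A bounded timeout plus a fallback keeps one slow or dead
	// service from dragging the whole page down with it.
	ctx, cancel := context.WithTimeout(r.Context(), 300*time.Millisecond)
	defer cancel()

	req, _ := http.NewRequestWithContext(ctx, http.MethodGet,
		"http://recommendations.internal/top", nil)

	recs := "popular items" // fallback content if the dependency fails
	if resp, err := http.DefaultClient.Do(req); err == nil {
		defer resp.Body.Close()
		recs = "personalised items" // pretend we parsed the response body
	}

	fmt.Fprintf(w, "home page with %s\n", recs)
}

func main() {
	http.HandleFunc("/", renderHome)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Without the timeout and fallback, every homepage request hangs on the dead service until the client gives up, which is exactly how "temporary maintenance" on one service turns into a full outage.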

The 17th service you created changed its IP address and can no longer be reached, so every URL now returns the default Apache gateway page.

The 24th service upgraded from Node 12 and is now broken, while the 26th service, built in Go, won't compile on the particular Linux distribution one of your devs runs.

Before you know it, you're doing nothing but maintenance work, because something is always broken and it isn't your code; it's some random downtime or brittleness that is inherent in microservice architecture.

What you describe is the common problem of "managing complexity", or, really, the lack thereof.

These problems are independent of "microservices" vs "monolith". They are independent of "using a framework" vs "no framework". They are independent of programming language or hosting infra.

Managing complexity, in itself, is a daunting task. It's hard in a monolith; it's hard in microservices. Building a tangled big ball of spaghetti is rather common in e.g. Rails; it takes a lot of experience, discipline and dedication to avoid it.

Languages (type systems, checkers, primitives), frameworks, hosting infra, design patterns, architectures: all of these are tools to help manage the complexity. But it still starts with a dedication to manage it today, and to still be able to do so in a decade.

Microservices don't inherently descend into an unmanageable tangle of tightly coupled, poorly bounded "services", just as a monolith doesn't inherently descend into an unmanageable tangle of tightly coupled, poorly bounded "modules".