Comment by dwoldrich
I needed to build an internal admin console, nothing super-scalable, just a handful of business users to start. The SQL database it would access was on-premises but might move to the cloud in the future. Authorized users needed single sign-on against their Azure-based Active Directory accounts for login, and I wanted to trace user requests with OpenTelemetry or something like it.
At this point in my career, why wouldn't I reach for microservices to supply the endpoints that my frontend calls out to? Microservices are straightforward to implement in NodeJS (or any other language, for that matter), and tracing and Azure SSO support in NodeJS come with very little fuss. For my admin console, I figured I would need one backend-for-frontend microservice for the frontend to connect to, plus a domain service for each domain that needed to be represented (only one domain to start). We picked server technologies and frameworks that could easily port to the cloud.
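For what it's worth, the tracing bootstrap in Node really is just a few lines with the OpenTelemetry Node SDK. This is only a sketch: the packages are the standard OTel ones, but the service name and the exporter endpoint are placeholders for my setup, and option names shift a bit between SDK versions.

    // tracing.ts - minimal OpenTelemetry bootstrap (sketch; options vary by SDK version)
    import { NodeSDK } from '@opentelemetry/sdk-node';
    import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
    import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

    const sdk = new NodeSDK({
      serviceName: 'admin-console-bff',        // placeholder name
      traceExporter: new OTLPTraceExporter(),  // defaults to a local OTLP collector
      instrumentations: [getNodeAutoInstrumentations()],
    });

    // Load this file before the rest of the app so HTTP/DB calls get auto-instrumented.
    sdk.start();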
So two microservices to implement a secure admin console from scratch: is that too many? I guess I lack the imagination to do the project differently. I do enjoy the "API First" approach and the way it lets me engage meaningfully with the business folks to come up with a design before we write any code. I also like how easy microservices are to unit and functional test; very tidy.
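A functional test against the BFF ends up being nothing special, which is part of the appeal. A sketch, assuming a Jest + supertest setup; the app export and the GET /api/users route are hypothetical:

    // users.test.ts - sketch; the app export and /api/users route are hypothetical
    import request from 'supertest';
    import { app } from './app';

    test('GET /api/users returns the user list for an authorized session', async () => {
      const res = await request(app)
        .get('/api/users')
        .set('Authorization', 'Bearer test-token'); // auth stubbed out in tests
      expect(res.status).toBe(200);
      expect(Array.isArray(res.body)).toBe(true);
    });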
Perhaps what makes so much microservice development so gross is misguided architectural and deployment goals. Having a dedicated server or cluster per deployed service is insane. I deploy all of my services monolithically until a service has some unique security or scaling need that requires it to separate from the others.
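Concretely, "deploy monolithically" just means mounting each service into one process until something forces a split. A sketch, with hypothetical router names:

    // server.ts - sketch of a monolithic deployment of the services (names are hypothetical)
    import express from 'express';
    import { bffRouter } from './services/bff';          // backend-for-frontend
    import { widgetsRouter } from './services/widgets';  // the one domain service, for now

    const app = express();
    app.use(express.json());

    app.use('/api', bffRouter);                  // what the frontend talks to
    app.use('/internal/widgets', widgetsRouter); // same process today, separable later

    app.listen(3000, () => console.log('admin console backend on :3000'));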
Similarly, it seems common for microservices teams to keep multiple git repos, one per service. Why?! Some strange separation-of-concerns/purity ideal. Code reuse, testing, pull requests, and atomic releases all suffer needless friction unless everything is kept in a monorepo, as the OP implied.
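The monorepo doesn't need to be fancy; a single repo with a workspace per service is plenty. Something like this (layout is purely illustrative):

    admin-console/
      package.json          # npm/pnpm workspaces root
      packages/
        bff/                # backend-for-frontend service
        widgets-service/    # domain service
        shared/             # shared types, clients, test helpers
      frontend/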
Also, building microservices in such a way that services must call other services completely misses the point of services; that's just creating a distributed monolith (and a slow one).
I made a rule on my team that the only service type allowed to call another service is an aggregation service, like my backend-for-frontend, which can launch its downstream calls in parallel and aggregate the results for the caller. This keeps the architecture very flat, with the minimum number of network hops and as much parallelism as possible, so it stays performant. Domain services own their data sources; no drama with backend data.
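In the BFF, that rule looks roughly like this; the endpoints, URLs, and response shapes are made up, but the point is that the fan-out is parallel and only one hop deep:

    // bff/dashboard.ts - sketch of the fan-out/aggregate rule (URLs and shapes hypothetical)
    import express from 'express';

    const WIDGETS_URL = process.env.WIDGETS_URL ?? 'http://localhost:4001';
    const USERS_URL = process.env.USERS_URL ?? 'http://localhost:4002';

    export const bffRouter = express.Router();

    bffRouter.get('/dashboard', async (_req, res) => {
      try {
        // Parallel downstream calls; domain services never call each other.
        const [widgets, users] = await Promise.all([
          fetch(`${WIDGETS_URL}/widgets/recent`).then(r => r.json()),
          fetch(`${USERS_URL}/users/active`).then(r => r.json()),
        ]);
        res.json({ widgets, users });
      } catch {
        res.status(502).json({ error: 'downstream service failure' });
      }
    });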
I see a lot of distributed monolith drama and abuse of NoSQL data sources giving microservices a bad reputation.