Comment by DrScientist

14 hours ago

Isn't it as simple as the following?

Break your code into modules/components that have a defined interface between them. That interface only passes data - not code with behaviour - and signals that method calls may fail to complete (i.e. throw exceptions).

i.e. the interface could become a network call in the future.

Allow easy swapping of interface implementations by passing them into constructors, using factories, or using a dependency injection framework if you must.

That's it. You can then start with everything in-process, with all the rapid development that allows, and only split into networked microservices if you need to - any complexity that arises from the network is hidden behind the proxy, with the exception as the ultimate escape hatch.
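A minimal sketch of that recipe, assuming a toy auth example (all names here - `AuthService`, `Credentials`, `InProcessAuth` - are hypothetical, not from the thread): the interface passes only data, advertises that calls may fail, and the implementation is passed into the constructor so it can be swapped later.

```python
from dataclasses import dataclass


class AuthError(Exception):
    """The interface advertises that a call may simply fail to complete."""


@dataclass(frozen=True)
class Credentials:
    """Plain data crossing the boundary - no behaviour attached."""
    user: str
    token: str


class AuthService:
    """The interface: data in, data out, may raise AuthError."""

    def verify(self, creds: Credentials) -> bool:
        raise NotImplementedError


class InProcessAuth(AuthService):
    """The simple in-process implementation you start with."""

    def __init__(self, valid_tokens: dict):
        self._valid_tokens = valid_tokens

    def verify(self, creds: Credentials) -> bool:
        return self._valid_tokens.get(creds.user) == creds.token


class App:
    """The implementation is injected, so swapping it needs no other changes."""

    def __init__(self, auth: AuthService):
        self._auth = auth

    def login(self, user: str, token: str) -> bool:
        try:
            return self._auth.verify(Credentials(user, token))
        except AuthError:
            return False  # the escape hatch: a failed call is a normal outcome


app = App(InProcessAuth({"alice": "s3cret"}))
```

Nothing in `App` knows or cares whether `verify` runs in-process or over a wire.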

Have I missed something?

You're not missing something, but you're assuming that it's easy to know ahead of time where the module boundaries should be and what the interfaces should look like. This is very far from easy, if possible at all (eg google "abstraction boundaries are optimization boundaries").

Also, you'll likely never need most of these interfaces. They're a cost during initial development, and the indirection is a cost on the maintainability of your code. It's probably (though not certainly) cheaper to refactor to introduce interfaces as needed than to always anticipate a need that may never come.

You're not missing much, but I don't understand why you're basically just repeating what the article already says. Except the article also says to use a monorepo.

  • No, I'm saying you don't need to use a monorepo! The repo discussion is a bit orthogonal, and up to you to decide whether you want a single repo or multiple repos with modules/libraries that get deployed together.

  • I think I've added a couple of elements to make it possible to scale your auth service if you need to. Easily swappable implementations and making sure the interfaces advertise that calls may simply fail.

    Even so it's still very simple.

To scale your auth service you just write a proxy to a remote implementation and pass that in - any load balancing etc. is hidden behind that same interface, and none of the rest of the code cares.

    • Good point! Sorry if I was being ungenerous.

      I like the idea of the remote implementation being proxied -- not sure I've come across that pattern before.
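The remote-proxy swap described above might look like this - a hedged sketch, assuming the same kind of `AuthService` interface (all names and the transport details are hypothetical; the transport function is injectable purely so the sketch is testable without a server):

```python
import json
import urllib.request


class AuthError(Exception):
    """Network trouble surfaces as the failure the interface already advertises."""


class AuthService:
    def verify(self, user: str, token: str) -> bool:
        raise NotImplementedError


class RemoteAuthProxy(AuthService):
    """Same interface as the in-process version; the network hides behind it.

    `post` is the transport: a callable (url, payload) -> dict. It defaults
    to a plain HTTP POST, but can be injected (e.g. a fake in tests).
    """

    def __init__(self, base_url: str, post=None):
        self._base_url = base_url
        self._post = post or self._http_post

    def verify(self, user: str, token: str) -> bool:
        try:
            reply = self._post(f"{self._base_url}/verify",
                               {"user": user, "token": token})
        except OSError as exc:  # timeouts, refused connections, DNS failures...
            raise AuthError("auth backend unreachable") from exc
        return bool(reply.get("ok"))

    @staticmethod
    def _http_post(url: str, payload: dict) -> dict:
        req = urllib.request.Request(
            url,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=2.0) as resp:
            return json.load(resp)
```

The swap is then `App(RemoteAuthProxy("http://auth.internal"))` in place of `App(InProcessAuth(...))`; load balancing or retries would live inside the proxy, behind the same interface.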

The swap from an interface to a network call is still non-trivial.

You get new problems that are qualitatively different from before, like timeouts, which can break the assumptions in the rest of your code about, say, whether state was updated or not, and in what order. You also then get to deal with thundering herds, circuit breakers, and so on.
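To illustrate one of those qualitatively new mechanisms, here is a minimal circuit breaker - a hypothetical sketch, not production-ready, and something the in-process version never needed. After a run of consecutive failures it rejects calls immediately for a cooldown period instead of piling more load onto a struggling remote service:

```python
import time


class CircuitOpenError(Exception):
    """Raised when the breaker is open and the call is not even attempted."""


class CircuitBreaker:
    """Wraps a callable; fails fast after `threshold` consecutive failures.

    `clock` is injectable so the cooldown can be tested without sleeping.
    """

    def __init__(self, call, threshold=3, cooldown=30.0, clock=time.monotonic):
        self._call = call
        self._threshold = threshold
        self._cooldown = cooldown
        self._clock = clock
        self._failures = 0
        self._opened_at = None

    def __call__(self, *args, **kwargs):
        if self._opened_at is not None:
            if self._clock() - self._opened_at < self._cooldown:
                raise CircuitOpenError("circuit open; failing fast")
            # Cooldown elapsed: close the circuit and allow a trial call.
            self._opened_at = None
            self._failures = 0
        try:
            result = self._call(*args, **kwargs)
        except Exception:
            self._failures += 1
            if self._failures >= self._threshold:
                self._opened_at = self._clock()
            raise
        self._failures = 0
        return result
```

Note that the breaker doesn't remove the timeout ambiguity mentioned above: a call that times out may still have completed on the remote side, so the caller cannot tell whether state was updated.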

Yes, you are missing the cost of complexity and network calls. You are describing a distributed monolith. It does not help.