Comment by DrScientist

4 months ago

Isn't it as simple as the following?

Break your code into modules/components that have a defined interface between them. That interface only passes data - not code with behaviour - and signals that method calls may fail to complete (i.e. throw exceptions).

I.e. the interface could become a network call in the future.

Allow easy swapping of interface implementations by passing them into constructors, using factories, or dependency injection frameworks if you must.

That's it. You can start with everything in-process, with the rapid development that allows, and if you need to, split into networked microservices later - any complexity that arises from the network aspect is hidden behind the proxy, with the exception as the ultimate escape hatch.
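The proposal above could be sketched roughly as follows (a hedged illustration, not anything from the article - names like `AuthService`, `InProcessAuth`, and `App` are hypothetical):

```python
# Sketch: a data-only interface whose calls may fail, with the
# implementation injected via the constructor so it can be swapped
# for a networked proxy later without the caller changing.
from abc import ABC, abstractmethod
from dataclasses import dataclass


class AuthError(Exception):
    """Signals a call failed to complete - the 'escape hatch'."""


@dataclass(frozen=True)
class Credentials:
    # Plain data crossing the boundary - no behaviour attached.
    user: str
    token: str


class AuthService(ABC):
    """Interface: accepts and returns data only, and may raise AuthError."""

    @abstractmethod
    def authenticate(self, creds: Credentials) -> bool: ...


class InProcessAuth(AuthService):
    """The everything-in-process starting point."""

    def __init__(self, valid_tokens: dict):
        self._valid = valid_tokens

    def authenticate(self, creds: Credentials) -> bool:
        return self._valid.get(creds.user) == creds.token


class App:
    """Depends only on the interface, injected via the constructor."""

    def __init__(self, auth: AuthService):
        self._auth = auth

    def login(self, user: str, token: str) -> str:
        try:
            ok = self._auth.authenticate(Credentials(user, token))
        except AuthError:
            return "auth unavailable"   # the one extra failure mode
        return "welcome" if ok else "denied"


app = App(InProcessAuth({"alice": "s3cret"}))
print(app.login("alice", "s3cret"))  # welcome
print(app.login("bob", "nope"))      # denied
```

Because `App` only ever sees `AuthService`, a networked implementation can later be passed into the same constructor without touching the rest of the code.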

Have I missed something?

You're not missing something, but you're assuming that it's easy to know ahead of time where the module boundaries should be and what the interfaces should look like. This is very far from easy, if possible at all (e.g. google "abstraction boundaries are optimization boundaries").

Also, most of these interfaces you'll likely never need. They're a cost during initial development, and the indirection is a cost on the maintainability of your code. It's probably (although not certainly) cheaper to refactor to introduce interfaces as needed, rather than always anticipating a need that might never come.

  • I think it is more intuitive if we think of side effects. By specifying the interface you are explicitly defining inputs and outputs. If you want to add this later, it can be very difficult to make sure you have found all the side effects. The whole point of the interface is to explicitly limit those side effects and extra inputs/outputs, so it makes sense to define it in advance.

You're not missing much, but I don't understand why you're basically just repeating what the article already says. Except the article also says to use a monorepo.

  • No, I'm saying you don't need to use a monorepo! The repo discussion is a bit orthogonal, and up to you to decide whether you want a single repo or multiple repos with modules/libraries that get deployed together.

  • I think I've added a couple of elements that make it possible to scale your auth service if you need to: easily swappable implementations, and making sure the interfaces advertise that calls may simply fail.

    Even so it's still very simple.

    To scale your auth service you just write a proxy to a remote implementation and pass that in - any load balancing etc. is hidden behind that same interface, and none of the rest of the code cares.

    • Good point! Sorry if I was being ungenerous.

      I like the idea of the remote implementation being proxied -- not sure I've come across that pattern before.

The swap from an interface to a network call is still non-trivial.

You get new problems that are qualitatively different from before, like timeouts, which can break the assumptions in the rest of your code about, say, whether state was updated or not, and in what order. You also then get to deal with thundering herds, circuit breakers, and so on.

  • Sure, it's more complex - but as I said, the key thing is to define those interfaces in a way that can be networked: you are just passing data, not behaviour, and the calls may fail to complete.

    In terms of timing, the call is synchronous and either succeeds or fails. Details like timeouts, async under the hood, etc. are hidden by the proxy; by surfacing the result as a synchronous call, you hide the underlying complexity from the caller.

    A bit like opening a file and writing to it: most platform APIs throw exceptions, and your code has to deal with that.
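The proxy described in that comment could look something like this (a hedged sketch with hypothetical names; the `transport` callable stands in for a real HTTP/RPC client, which is an assumption):

```python
# Sketch: a proxy that collapses all network failure modes (timeouts,
# connection errors, bad responses) into one synchronous outcome:
# the call succeeds, or it raises AuthError.
class AuthError(Exception):
    """The single failure mode the caller has to handle."""


class RemoteAuthProxy:
    def __init__(self, transport):
        # transport: callable(user, token) -> bool; may raise anything.
        # Retries, async I/O, load balancing etc. would live inside it.
        self._transport = transport

    def authenticate(self, user: str, token: str) -> bool:
        try:
            return self._transport(user, token)
        except Exception as e:          # timeout, DNS failure, 5xx, ...
            raise AuthError("auth call failed to complete") from e


def flaky_transport(user, token):
    # Simulated network failure in place of a real remote call.
    raise TimeoutError("upstream took too long")


proxy = RemoteAuthProxy(flaky_transport)
try:
    proxy.authenticate("alice", "s3cret")
except AuthError as e:
    print("caller sees one failure mode:", e)
```

The caller never sees `TimeoutError` directly - only the succeed-or-fail contract the interface already advertised.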

Yes, you are missing the cost of complexity and network calls. You are describing a distributed monolith. It does not help.

  • Not sure I understand. What is a distributed monolith?

    I'm not suggesting that the distributed bit is still coupled behind the scenes (i.e. via a data backend that requires distributed transactions) - the interaction is through the interface.

    In the end you are always going to have code calling code - the key point is to assume these key calls simply pass data, not behaviour, and that they can fail.

    What else is needed to make something network friendly? (I'm suggesting that things like retries, load balancing etc. can be hidden as a detail in the network implementation - all you need to surface is succeed or fail.)
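One way to sketch that last point - retries and trivial load balancing hidden inside the network-side implementation, with only succeed-or-fail surfaced (hypothetical names; the backend callables stand in for real remote endpoints):

```python
# Sketch: an implementation that rotates across backends and retries,
# all hidden behind the same authenticate() contract the caller knows.
import itertools


class AuthError(Exception):
    """Raised only when every attempt has failed."""


class BalancedRetryingAuth:
    def __init__(self, backends, attempts=3):
        # backends: callables(user, token) -> bool, cycled round-robin.
        self._backends = itertools.cycle(backends)
        self._attempts = attempts

    def authenticate(self, user, token):
        last_error = None
        for _ in range(self._attempts):
            backend = next(self._backends)
            try:
                return backend(user, token)
            except Exception as e:      # this backend failed; try the next
                last_error = e
        raise AuthError("all attempts failed") from last_error


def down_backend(user, token):
    raise ConnectionError("backend down")   # simulated dead node


def up_backend(user, token):
    return token == "s3cret"


auth = BalancedRetryingAuth([down_backend, up_backend])
print(auth.authenticate("alice", "s3cret"))   # True - the retry hid the failure
```

From the caller's point of view this is indistinguishable from the in-process version: the call either returns a result or raises the advertised exception.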