
Comment by notacoward

3 years ago

At various points in my career, I worked on Very Big Machines and on Swarms Of Tiny Machines (relative to the technology of their respective times). Both kind of sucked, for different reasons, but sucked nonetheless. I've come to believe that the best approach is generally somewhere in the middle: enough servers to ensure a sufficient level of protection against failure, but no more than that, to keep coordination costs and data movement down. Even then there are exceptions. The key is not to run blindly toward either extreme. Your utility function is probably bell-shaped, so you need to build at least a rudimentary model to explore the problem space and find the right balance.
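
To make the "rudimentary model" idea concrete, here is a toy sketch in Python; the per-node uptime and per-pair coordination cost are invented assumptions, chosen only to show how the utility curve can peak somewhere in the middle rather than at either extreme:

```python
# Toy model, not a real capacity planner:
#   utility(n) = availability(n) - coordination_cost(n)
# Every constant below is an invented assumption for illustration.

def availability(n, per_node_uptime=0.9):
    """Chance that at least one of n replicas is up, assuming independent failures."""
    return 1 - (1 - per_node_uptime) ** n

def coordination_cost(n, per_pair_cost=0.002):
    """Crude assumption: cost grows with the number of node pairs that must coordinate."""
    return per_pair_cost * n * (n - 1) / 2

def utility(n):
    return availability(n) - coordination_cost(n)

if __name__ == "__main__":
    # With these made-up numbers the curve rises, peaks around a handful of
    # servers, then falls as coordination overhead starts to dominate.
    for n in range(1, 13):
        print(f"{n:2d} servers: utility = {utility(n):.4f}")
```

With these particular numbers the peak lands around three servers; the point is not the specific answer but that even a crude model like this makes the trade-off visible instead of guessing at an extreme.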

Yes, totally.

Among the possible setups, the one I consider the golden one is a big DB server plus 1-4 front-end (web/API/cache) servers, with backups and the CDN handled off to the side (rough sketch below).

That's it.
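
If it helps to see that laid out, here is a purely illustrative sketch of the topology as data; the counts and labels are my own assumptions, not anything stated above:

```python
# Illustrative only: the "golden" setup as a plain data structure.
# Counts and labels are assumptions made for the sake of the example.
topology = {
    "db":       {"count": 1, "notes": "one big database server, scaled vertically"},
    "frontend": {"count": 3, "notes": "web / API / cache; anywhere from 1 to 4 of these"},
    "external": {"services": ["backups", "CDN"], "notes": "handled outside the core cluster"},
}

for tier, spec in topology.items():
    print(f"{tier:9s} -> {spec}")
```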