
Comment by kgeist

3 years ago

One of our projects uses one big server, and indeed, everyone started putting everything on it (because it's powerful): the project itself, a bunch of corporate sites, a code review tool, and god knows what else. Last week we started having issues with the projects going down because something is overloading the system, and they still can't figure out what exactly without stopping services or moving them to a different machine (fortunately, it's internal corporate stuff, not user-facing systems). The main problem I've found with this setup is that random stuff accumulates over time, and then one tool/process/project/service going out of control can bring down the whole machine. With N small machines, there's greater isolation.

I believe the "one big server" approach is intended for running a single application, not for trying to run 500 of them.

Does your application run on a single server? If yes, don't use a distributed system for its architecture or design. Simply buy bigger hardware when necessary, because the top end of servers is insanely big and fast.

That does not mean, IMHO, throwing everything onto a single system without suitable organization, oversight, isolation, and recovery plans.
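
On the isolation point: even on a single big box you can cap what each side service is allowed to consume, so one runaway process doesn't take the whole machine down. Here's a minimal sketch (assuming Linux; the `/usr/bin/some-internal-tool` path and the limit values are placeholders, not anything from the thread) using Python's standard-library `resource` module to set per-process limits before launching a subprocess. Real deployments would more likely use cgroups, systemd slices, or containers, but the idea is the same.

```python
import resource
import subprocess

# Arbitrary caps for illustration only.
MEM_LIMIT_BYTES = 2 * 1024 * 1024 * 1024  # 2 GiB of address space
CPU_LIMIT_SECONDS = 3600                  # 1 hour of CPU time

def apply_limits():
    # Runs in the child process just before exec (POSIX only).
    resource.setrlimit(resource.RLIMIT_AS, (MEM_LIMIT_BYTES, MEM_LIMIT_BYTES))
    resource.setrlimit(resource.RLIMIT_CPU, (CPU_LIMIT_SECONDS, CPU_LIMIT_SECONDS))

# Launch the side service with the caps applied; if it misbehaves,
# it hits its own limits instead of starving everything else on the box.
proc = subprocess.Popen(
    ["/usr/bin/some-internal-tool"],  # placeholder command
    preexec_fn=apply_limits,
)
proc.wait()
```

With some cap like this per service, "something is overloading the system" at least becomes "this one service hit its limit," which is much easier to find and fix.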