
Comment by 3pt14159

3 years ago

Yeah, I think single machine has its place. I once sped up a program by 10,000x just by converting it to Cython so that everything fit in the CPU cache. But the cloud still has its place too! Even for non-bursty loads. Even for loads that could theoretically fit on a single big server.
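
For flavor, here's roughly what that kind of conversion looks like. This is a minimal Cython sketch, not my actual code; the dot-product workload and the function name are just illustrative. Typing the hot loop lets Cython compile it down to C, and a contiguous array that fits in cache is where the big constant factor comes from:

    # hotloop.pyx -- hypothetical sketch, built with cythonize
    def dot(double[::1] a, double[::1] b):
        # Typed memoryviews and C-typed locals: the loop compiles to
        # plain C, and contiguous doubles can sit in L1/L2 cache.
        cdef Py_ssize_t i, n = a.shape[0]
        cdef double total = 0.0
        for i in range(n):
            total += a[i] * b[i]
        return total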

Uptime.

Or are you going to take downtime while you wait for all your workers to finish? What about long-lived connections? Etc.

It is way easier to gradually hand traffic over between multiple API servers during an upgrade than it is to figure out what to do with a single beefy machine.
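
As a sketch of what I mean (Server, LoadBalancer, and the drain timeout here are made-up stand-ins for your orchestrator or load balancer, not a real API):

    # rolling_deploy.py -- hedged sketch; every name here is illustrative
    import time

    class Server:
        def __init__(self, name):
            self.name = name
        def upgrade(self):
            print(f"{self.name}: restarted on new build")
        def wait_healthy(self):
            print(f"{self.name}: health check passed")

    class LoadBalancer:
        def __init__(self, servers):
            self.pool = list(servers)
        def remove(self, server):
            self.pool.remove(server)
            print(f"{server.name}: draining, no new requests")
        def add(self, server):
            self.pool.append(server)
            print(f"{server.name}: back in rotation")

    def rolling_deploy(servers, lb, drain_seconds=30):
        # One server at a time: the fleet as a whole never goes down.
        for server in servers:
            lb.remove(server)           # stop routing new requests to it
            time.sleep(drain_seconds)   # let in-flight / long connections finish
            server.upgrade()
            server.wait_healthy()       # check health before re-adding
            lb.add(server)

    if __name__ == "__main__":
        fleet = [Server(f"api-{i}") for i in (1, 2, 3)]
        rolling_deploy(fleet, LoadBalancer(fleet), drain_seconds=1)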

I'm not saying it is always worth it, but I don't even think about the API servers when a deploy happens anymore.

Furthermore, if you build your whole stack this way, your code will be non-distributed by default. That's easy to transition for some things, hell for others. Some access patterns and algorithms are fine when everything is in a CPU cache or in memory, but would fall over completely across multiple machines. Part of the nice thing about starting cloud-first is that it's generally easier to scale to billions of people afterwards.
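
To put rough numbers on "fall over completely": take a traversal that does a million dependent lookups. The figures below are order-of-magnitude assumptions from the usual "latency numbers every programmer should know" list, not benchmarks:

    # latency.py -- back-of-envelope assumptions, not measurements
    L1_CACHE_NS = 1        # ~1 ns per cache hit
    RAM_NS = 100           # ~100 ns per main-memory hit
    NET_RTT_NS = 500_000   # ~0.5 ms per same-datacenter round trip

    lookups = 1_000_000    # e.g. a dependent pointer-chasing traversal
    for label, ns in [("in cache", L1_CACHE_NS),
                      ("in RAM", RAM_NS),
                      ("over network", NET_RTT_NS)]:
        print(f"{label:>12}: {lookups * ns / 1e9:,.3f} s")

Same algorithm, roughly 0.001 s in cache, 0.1 s in RAM, and 500 s once every lookup becomes a network round trip. That's the transition hell I mean.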

That said, I think the original article makes a nuanced case with several great points, and your highlighting of the Twitter example is a good showcase for where a single machine makes sense.