Comment by bob1029

3 years ago

> 1 million IOPS on a NoSQL database

I have gone well beyond this figure by doing clever tricks in software and batching multiple transactions into IO blocks where feasible. If your average transaction is substantially smaller than the IO block size, then you are probably leaving a lot of throughput on the table.
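To make the arithmetic concrete, here is a minimal sketch (all names and sizes are illustrative assumptions, not from the post) of coalescing small serialized transactions into block-sized buffers, so each IO carries several transactions instead of one:

```python
BLOCK_SIZE = 4096  # assumed IO block size, typical for SSDs

def coalesce(transactions, block_size=BLOCK_SIZE):
    """Pack serialized transactions into buffers of at most block_size bytes.

    One write() per returned buffer means one IO carries as many
    transactions as fit, instead of one IO per transaction.
    """
    blocks, current = [], bytearray()
    for tx in transactions:
        if len(current) + len(tx) > block_size:
            blocks.append(bytes(current))
            current = bytearray()
        current.extend(tx)
    if current:
        blocks.append(bytes(current))
    return blocks

# Eight 512-byte transactions fit in a single 4 KiB block,
# so the same IOPS budget moves 8x the transactions.
txs = [b"x" * 512 for _ in range(8)]
blocks = coalesce(txs)
assert len(blocks) == 1
```

If your device tops out at, say, 1M IOPS, per-transaction writes cap you at 1M transactions/sec; packed 8-to-a-block, the same IOPS ceiling yields 8M.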

The point I am trying to make is that even if you think "One Big Server" might have issues down the road, there are always some optimizations that can be made. Have some faith in the vertical.

This path has worked out really well for us over the last ~decade. New employees can pick things up much more quickly when you don't have to show them the equivalent of a nuclear reactor CAD drawing to get started.

> batching multiple transactions into IO blocks where feasible. If your average transaction is substantially smaller than the IO block size, then you are probably leaving a lot of throughput on the table.

Could you expand on this? A quick Google search didn't help. Link to an article or a brief explanation would be nice!

  • Sure. If you are using a micro-batched event processing abstraction such as the LMAX Disruptor, you have an opportunity to take small batches of transactions and write them to disk as a single unit.

    For event sourcing applications, multiple transactions can be coalesced into a single IO block & operation without much drama using this technique.

    Surprisingly, this technique also lowers the latency any given user experiences, even though you are "blocking" multiple users to take advantage of small batching effects.
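    The latency win comes from the consumer draining everything that accumulated while the previous batch was being flushed, then committing the whole batch with a single sync. A rough single-threaded sketch of that drain pattern (a plain queue stands in for the ring buffer; this is not the Disruptor API):

    ```python
    import queue

    def drain_batch(q, max_batch=64):
        """Take everything currently queued, up to max_batch, as one batch.

        The caller writes the batch and syncs once, so the fsync cost is
        amortized across every transaction in the batch rather than paid
        per transaction -- which is why mean latency drops under load.
        """
        batch = []
        while len(batch) < max_batch:
            try:
                batch.append(q.get_nowait())
            except queue.Empty:
                break
        return batch

    q = queue.Queue()
    for i in range(10):
        q.put(f"tx-{i}")

    batch = drain_batch(q)
    assert len(batch) == 10  # ten commits share one flush/sync
    ```

    Under light load batches degenerate to size one (no added latency); under heavy load they grow, so per-transaction durability cost falls exactly when you need it to.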