
Comment by andersmurphy

1 month ago

A single machine can handle much, much more if you use SQLite and batch updates/inserts.

Honestly, unless you're bandwidth/uplink limited (e.g. running a CDN), a single machine will take you really far.

Also, simpler systems tend to have better uptime/reliability. It doesn't get much simpler than a single box.

On my pretty modest dev machine with 12 CPUs, I once managed 14k RPS with Go+SQLite in a write+read test on a real project I was developing (it used a framework, so there was also some overhead from all the abstractions), and I didn't even batch anything. The only problem: SQLite's WAL checkpointer couldn't keep up with that write rate, and the WAL file quickly grew to hundreds of GBs (this is actually a known issue and is mentioned in their docs), so I had to add a special goroutine to monitor the size of the WAL file and force a checkpoint manually whenever it got too big (sketch below).
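Not their actual code, but a minimal sketch of that watchdog pattern, assuming the mattn/go-sqlite3 driver and illustrative path/threshold values; `PRAGMA wal_checkpoint(TRUNCATE)` is the real SQLite mechanism for forcing the WAL back to zero bytes:

```go
package main

import (
	"database/sql"
	"log"
	"os"
	"time"

	_ "github.com/mattn/go-sqlite3" // assumed driver; any SQLite driver works
)

// watchWAL forces a checkpoint whenever the -wal sidecar file grows past maxBytes.
func watchWAL(db *sql.DB, dbPath string, maxBytes int64, interval time.Duration) {
	walPath := dbPath + "-wal"
	for range time.Tick(interval) {
		info, err := os.Stat(walPath)
		if err != nil {
			continue // WAL file may not exist yet
		}
		if info.Size() > maxBytes {
			// TRUNCATE waits for readers to finish, checkpoints everything,
			// then resets the WAL file to zero bytes.
			if _, err := db.Exec("PRAGMA wal_checkpoint(TRUNCATE);"); err != nil {
				log.Printf("checkpoint failed: %v", err)
			}
		}
	}
}

func main() {
	db, err := sql.Open("sqlite3", "app.db?_journal_mode=WAL")
	if err != nil {
		log.Fatal(err)
	}
	// Check every 5s; force a checkpoint past ~1 GiB. Thresholds are illustrative.
	go watchWAL(db, "app.db", 1<<30, 5*time.Second)
	select {} // the real app would serve requests here
}
```

The point is that under sustained writes the passive checkpointer never gets a quiet moment, so you have to be willing to block briefly and truncate yourself.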

So when people say 1k RPS is "high load" and requires a whole cluster, I'm not sure what to make of it. You can squeeze so much more out of a single, fairly modest machine.

  • SQLite has some sharp edges, for sure. Honestly, though, even basic batching, wrapping all inserts/updates in a single transaction every 100ms, will get you to 30,000+ updates a second on a 4-core shared-CPU VPS (assuming NVMe drives); see the sketch at the end of this comment.

    That's the other thing: AWS tends to have really dated SSDs.

    Honestly, it's like the industry has jumped the shark. 1k RPS is not a lot of load. It's the same with people saying a single writer means you can't be performant; most of the time it's the opposite: a single writer lets you batch, and batching is where the magic happens.
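    A minimal sketch of that single-writer batching pattern, assuming the mattn/go-sqlite3 driver and a hypothetical events(payload) table; handlers enqueue onto a channel, and one goroutine owns the database and commits everything queued in one transaction every 100ms, so a single fsync amortizes over the whole batch:

    ```go
    package main

    import (
    	"database/sql"
    	"log"
    	"time"

    	_ "github.com/mattn/go-sqlite3" // assumed driver
    )

    // batchWriter drains queued payloads and commits them in a single
    // transaction per tick instead of one transaction per write.
    func batchWriter(db *sql.DB, rows <-chan string, done chan<- struct{}) {
    	ticker := time.NewTicker(100 * time.Millisecond)
    	defer ticker.Stop()

    	var pending []string
    	flush := func() {
    		if len(pending) == 0 {
    			return
    		}
    		tx, err := db.Begin()
    		if err != nil {
    			log.Printf("begin: %v", err)
    			return
    		}
    		stmt, err := tx.Prepare("INSERT INTO events(payload) VALUES (?)")
    		if err != nil {
    			tx.Rollback()
    			log.Printf("prepare: %v", err)
    			return
    		}
    		for _, p := range pending {
    			stmt.Exec(p) // per-row error handling elided to keep the sketch short
    		}
    		stmt.Close()
    		if err := tx.Commit(); err != nil {
    			log.Printf("commit: %v", err)
    		}
    		pending = pending[:0]
    	}

    	for {
    		select {
    		case r, ok := <-rows:
    			if !ok { // channel closed: final flush and exit
    				flush()
    				close(done)
    				return
    			}
    			pending = append(pending, r)
    		case <-ticker.C:
    			flush()
    		}
    	}
    }

    func main() {
    	db, err := sql.Open("sqlite3", "bench.db?_journal_mode=WAL")
    	if err != nil {
    		log.Fatal(err)
    	}
    	db.Exec(`CREATE TABLE IF NOT EXISTS events (payload TEXT)`)

    	rows := make(chan string, 4096)
    	done := make(chan struct{})
    	go batchWriter(db, rows, done)

    	// Request handlers would just enqueue; the writer goroutine owns the DB.
    	for i := 0; i < 100000; i++ {
    		rows <- "some payload"
    	}
    	close(rows)
    	<-done
    }
    ```

    The single-writer design also sidesteps SQLite's write-lock contention entirely: there is only ever one writer, so nothing ever blocks on SQLITE_BUSY.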