Comment by keyle
2 years ago
Before you even need to consider Postgres, you can batch your writes to SQLite!
I'm no SQLite fanboy (although I might be), but I've found the industry runs to Postgres for just about anything. I prefer simplicity first.
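For illustration, here's a rough sketch of the batching I mean, using Python's stdlib sqlite3 module (the events table, payload column, and row count are all made up):

    import sqlite3

    conn = sqlite3.connect("app.db")
    # WAL mode lets readers proceed while a write transaction is open.
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, payload TEXT)"
    )

    rows = [("event %d" % i,) for i in range(10000)]

    # One transaction for all 10,000 rows: one fsync instead of 10,000.
    with conn:
        conn.executemany("INSERT INTO events (payload) VALUES (?)", rows)

Wrapping the inserts in a single transaction means one fsync for the whole batch instead of one per row, which is where most of the throughput comes from.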
How is that simplicity?
Batching updates makes the code far more complex.
Installing any full-strength DB is trivial.
I don't get the 'simplicity'.
That depends entirely on what you're doing. If your workload is heavily transactional, then sure, that might add complexity.
The simplicity is in not having a separate server process that can fail and that requires failover and monitoring.
You're misunderstanding. No one mentioned transactions; that's not what we're talking about.
We're talking about concurrent writes and batching inserts. SQLite can't handle high write throughput, because every standalone insert is its own fsync-bound transaction, so you batch a bunch of inserts together.
If the only way to get performance is to batch inserts, then you've got to write a load of manual queue code that collects X inserts and writes them all at once.
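Something like this sketch, say (the batch size, flush interval, and table are all invented for illustration):

    import queue
    import sqlite3
    import threading

    BATCH_SIZE = 500       # illustrative threshold
    FLUSH_INTERVAL = 0.5   # seconds, also illustrative

    q = queue.Queue()

    def writer_loop():
        conn = sqlite3.connect("app.db")
        conn.execute("CREATE TABLE IF NOT EXISTS events (payload TEXT)")
        while True:
            batch = [q.get()]  # block until at least one row arrives
            try:
                while len(batch) < BATCH_SIZE:
                    batch.append(q.get(timeout=FLUSH_INTERVAL))
            except queue.Empty:
                pass  # timed out: flush whatever we have
            with conn:  # one transaction per batch
                conn.executemany("INSERT INTO events (payload) VALUES (?)", batch)

    threading.Thread(target=writer_loop, daemon=True).start()

    # A request handler enqueues and returns immediately; the row exists
    # only in memory until the writer thread flushes it.
    q.put(("some payload",))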
Worse still, if your server crashes or hits a bug, you've just lost all the inserts sitting in that queue. But you've already responded with 201s! So if you want any sort of durability guarantee, you've got to write even more code to persist them to disk or Redis or something.
You're basically re-implementing features of Postgres, badly, to make up for SQLite's deficiencies.
It really doesn't matter HOW you do it; it's the fact that you have to do it at all. It's not simpler, it's more complicated. Installing and using a fully fledged DB is trivial these days.