Comment by hedora
3 hours ago
Not the person you are responding to, but sqlite is single-writer (even across multiple processes, you get one write transaction at a time).
So, if you have a network server that does BEGIN TRANSACTION (process 1000 requests) COMMIT (send 1000 acks to clients), with sqlite, your rollback rate from conflicts will be zero.
For PG with multiple clients, the rollback rate will tend toward 100% if the transactions can conflict at all.
You could configure PG to only allow one network connection at a time and get a similar effect, but then you're paying for MVCC and a bunch of other stuff you don't need.
In your example, clients can't have their own transactions? You commit/rollback all requests for all 1000 clients together?
Sqlite supports nested transactions with SAVEPOINT, so each client can have their own logical transaction that can be rolled back. The outer transaction effectively just batches the fsync. An individual client failing its transaction doesn't cause the batch to fail, though a crash would cause the whole batch to fail. Because there's a single writer, there are no rollbacks/retries from contention/MVCC.
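A minimal sketch of that pattern with Python's sqlite3 module (the table and the handle_request helper are made up for illustration): one outer transaction batches the fsync, and each client's work runs inside its own SAVEPOINT so a failing client rolls back alone without failing the batch.

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manage txns by hand
conn.execute("CREATE TABLE acks (client INTEGER PRIMARY KEY)")

def handle_request(conn, client):
    # Hypothetical per-client work; a duplicate client violates the primary key.
    conn.execute("INSERT INTO acks VALUES (?)", (client,))

requests = [1, 2, 2, 3]  # the duplicate 2 fails only its own savepoint
conn.execute("BEGIN")
for i, client in enumerate(requests):
    conn.execute(f"SAVEPOINT sp_{i}")
    try:
        handle_request(conn, client)
        conn.execute(f"RELEASE sp_{i}")      # keep this client's writes
    except sqlite3.IntegrityError:
        conn.execute(f"ROLLBACK TO sp_{i}")  # undo only this client
        conn.execute(f"RELEASE sp_{i}")
conn.execute("COMMIT")  # one commit (one fsync) for the whole batch

print([r[0] for r in conn.execute("SELECT client FROM acks ORDER BY client")])
# [1, 2, 3] -- the failed duplicate didn't poison the batch
```

If the process crashed before the final COMMIT, all four requests would be lost together, which is the batch-failure trade-off mentioned above.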
You could try to imitate this in PostgreSQL, but the outer transaction does not eliminate the network hop for each inner/client transaction, so you gain nothing, and you still have the contention problem that causes rollbacks/retries. You could reduce your number of connections to one to eliminate contention, but then you're just playing sqlite's game.
So, with sqlite you need to write some app code to do this batching yourself, which has non-trivial development and maintenance cost.
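For a sense of what that app code looks like, here is a hedged sketch (the queue plumbing and drain_and_commit name are invented): a single writer drains whatever requests have queued up, applies them in one transaction, and only acks after the single commit makes them durable.

```python
import queue
import sqlite3

def drain_and_commit(conn, requests, batch_size=1000):
    # Pull up to batch_size pending requests off the queue.
    batch = []
    while len(batch) < batch_size:
        try:
            batch.append(requests.get_nowait())
        except queue.Empty:
            break
    if not batch:
        return []
    conn.execute("BEGIN")
    for client, value in batch:
        conn.execute("INSERT INTO log VALUES (?, ?)", (client, value))
    conn.execute("COMMIT")  # one fsync covers every request in the batch
    return [client for client, _ in batch]  # safe to ack these now

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE log (client INTEGER, value TEXT)")
requests = queue.Queue()
for c in range(3):
    requests.put((c, f"req-{c}"))
print(drain_and_commit(conn, requests))  # [0, 1, 2]
```

A real server would run this in a loop on a dedicated writer thread and add the per-client SAVEPOINT handling; the point is just that the batching logic lives in your application, not the database.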