Comment by abbbi
2 years ago
Another article praising sqlite in production while working around a known sqlite limitation: concurrent writes. As they link in the article:
> But, my favorite feature of the gem is its improved concurrency support.
> [..] https://fractaledmind.github.io/2023/12/11/sqlite-on-rails-i...
Really. At the point where you are seeing "database locked" errors in a production app that uses sqlite as its backend, I would strongly suggest using another database backend that was designed with concurrent writes in mind.
Nothing against sqlite in production, it's nice, as long as your workload meets its feature set.
What both articles are saying is that concurrent writes under a certain threshold, or heavy read-only activity, is a perfectly acceptable usage pattern for sqlite in production.
...the problem here is that historically rails (and others) used deferred transactions, which makes sqlite fail even under trivial load instead of simply waiting for the db to become free for the next write, because the people who wrote the drivers didn't understand how to use sqlite.
If you use sqlite correctly, then under heavy load it's slow, not unreliable.
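Roughly what that means in practice, as a minimal Go sketch (mattn/go-sqlite3 driver assumed; the database file and table are made up for illustration):

    package main

    import (
        "context"
        "database/sql"
        "log"

        _ "github.com/mattn/go-sqlite3"
    )

    func main() {
        db, err := sql.Open("sqlite3", "app.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        ctx := context.Background()

        // Pin one connection so BEGIN/COMMIT run on the same underlying handle.
        conn, err := db.Conn(ctx)
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Wait up to 5s for a competing writer instead of failing instantly.
        if _, err := conn.ExecContext(ctx, "PRAGMA busy_timeout = 5000"); err != nil {
            log.Fatal(err)
        }

        // BEGIN IMMEDIATE grabs the write lock up front, so the busy handler
        // can simply wait. A plain (deferred) BEGIN only tries to take the lock
        // on the first write, and upgrading a read transaction to a write
        // transaction can fail with SQLITE_BUSY that no timeout saves you from.
        if _, err := conn.ExecContext(ctx, "BEGIN IMMEDIATE"); err != nil {
            log.Fatal(err)
        }
        if _, err := conn.ExecContext(ctx, "UPDATE counters SET n = n + 1 WHERE id = 1"); err != nil {
            conn.ExecContext(ctx, "ROLLBACK")
            log.Fatal(err)
        }
        if _, err := conn.ExecContext(ctx, "COMMIT"); err != nil {
            log.Fatal(err)
        }
    }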
> as long as your workload meets its feature set.
Sure... but to be fair that probably covers a lot of microservices and probably a lot of apps too.
Obviously as you scale it won't, so sure, it's a limited use case... but, heck, I've seen dozens of microservices each with their 'own database' (i.e. the same RDS instance, with different databases on it) that all fail at once when that instance goes down. Whoops.
Better? Worse?
Hm. Sqlite is insanely reliable. I like isolated reliable services.
It's not for everything, but nothing is... I think it's suitable for more use cases than you're giving it credit for.
What's wrong with PRAGMA journal_mode=WAL? It enables concurrent writes, at the expense of disk. Which to my understanding is the same as any other database backend with write ahead logging to enable concurrent writers.
Anecdotally, I've seen the WAL file grow way too large even after all writes have finished and it should have shrunk, but that's manageable.
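For reference, enabling WAL and shrinking an oversized WAL file are both one-liners; a quick Go sketch (mattn/go-sqlite3 driver assumed, file name made up):

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/mattn/go-sqlite3"
    )

    func main() {
        db, err := sql.Open("sqlite3", "app.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // journal_mode is persistent per database file, so this only has to
        // succeed once.
        if _, err := db.Exec("PRAGMA journal_mode = WAL"); err != nil {
            log.Fatal(err)
        }

        // Copy the WAL contents back into the main file and truncate the WAL
        // to zero bytes; needs a moment when no reader pins an old snapshot.
        if _, err := db.Exec("PRAGMA wal_checkpoint(TRUNCATE)"); err != nil {
            log.Fatal(err)
        }
    }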
WAL allows one writer concurrent with any number of readers. It does not allow two concurrent writers.
WAL does not help with "database locked" situations. At some point you will see them even with WAL enabled, and your application frontend code has to deal with the timeout and retry or whatever.
You just need to set busy_timeout > 0 and you'll basically never see database locked situations. It's just unfortunate that it's not enabled by default: https://sqlite.org/forum/info/7e456bf5544ab128
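In Go terms that's one PRAGMA, with the caveat that it applies per connection, so with database/sql's pool you either put it in the driver's DSN or keep a single connection; a small sketch (mattn/go-sqlite3 assumed):

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/mattn/go-sqlite3"
    )

    func main() {
        db, err := sql.Open("sqlite3", "app.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // busy_timeout is a per-connection setting; limiting the pool to one
        // connection keeps this single PRAGMA in effect for every query.
        db.SetMaxOpenConns(1)

        // Wait up to 5 seconds for the lock instead of returning
        // "database is locked" immediately (the default is 0).
        if _, err := db.Exec("PRAGMA busy_timeout = 5000"); err != nil {
            log.Fatal(err)
        }
    }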
I'm relatively new to the DB field and have only really used SQLite, but it seems obvious to me that if you're doing a transaction, particularly a complicated one, then you should expect it to fail occasionally due to concurrent changes, and you should be executing it in a loop. This should be true for any database.
If you loop-retry all transactions which fail due to transitory effects, then you won't have a problem with "database locked" situations.
Abysmally documented, but this is what I use for golang + sqlite:
https://pkg.go.dev/gitlab.com/martyros/sqlutil@v0.0.0-202312...
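Not that package, but the idea boils down to something like this sketch (matching the error by string is crude; a real helper like the one linked checks the driver's error codes):

    package sqliteretry

    import (
        "database/sql"
        "strings"
        "time"
    )

    // WithRetry runs fn inside a transaction and retries the whole thing when
    // SQLite reports a transient "database is locked" error.
    func WithRetry(db *sql.DB, attempts int, fn func(tx *sql.Tx) error) error {
        var err error
        for i := 0; i < attempts; i++ {
            err = runTx(db, fn)
            if err == nil || !strings.Contains(err.Error(), "database is locked") {
                return err
            }
            // Small linear backoff before trying again.
            time.Sleep(time.Duration(i+1) * 10 * time.Millisecond)
        }
        return err
    }

    func runTx(db *sql.DB, fn func(tx *sql.Tx) error) error {
        tx, err := db.Begin()
        if err != nil {
            return err
        }
        if err := fn(tx); err != nil {
            tx.Rollback()
            return err
        }
        return tx.Commit()
    }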
EDIT: Typo
In my experience most SQLite writes take less than 1ms.
Do your writes really need to be concurrent if they run that fast?
Hard to get upset about waiting for the current write to complete before you get your turn when we are talking delays measured in thousandths of a second.
If you have more than 1000 writes per second then maybe this is something to worry about. The solution there is probably to run a slightly more powerful server!
Back in the day when we hit this issue (mostly on Windows systems) I wrote a little stress tool; you would be surprised how fast you reach the database-locked state.
I've just put it here: https://github.com/abbbi/sqlitestress
Maybe it's useful for some people to simulate their workloads.
And once it's locked, do you have to hold all operations for X ms?
Tested it repeatedly on a Hetzner CPX11 (2 vCPU, 2GB RAM) instance, got one lock around row 49000, then it resumed until the end.
Sure, the actual write might only take 1ms, but the transaction might hold the lock for 10ms.
Does SQLite support overlapping write transactions with locks on different rows?
Also do you know if WAL2 mode changes anything? https://www.sqlite.org/cgi/src/timeline?r=wal2
Before you even need to consider postgres, you can batch your writes to sqlite!
I am no sqlite fanboy (although I might be), but I find the industry seems to run to postgres for just about anything. I prefer simplicity first.
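For what it's worth, "batch your writes" can be as small as draining a channel into one transaction instead of opening a transaction per row; a rough sketch (table and column names are made up):

    package batching

    import "database/sql"

    // InsertBatched drains rows from a channel and writes them in transactions
    // of up to batchSize inserts each, instead of one transaction per row.
    func InsertBatched(db *sql.DB, rows <-chan string, batchSize int) error {
        buf := make([]string, 0, batchSize)

        flush := func() error {
            if len(buf) == 0 {
                return nil
            }
            tx, err := db.Begin()
            if err != nil {
                return err
            }
            stmt, err := tx.Prepare("INSERT INTO events (payload) VALUES (?)")
            if err != nil {
                tx.Rollback()
                return err
            }
            for _, r := range buf {
                if _, err := stmt.Exec(r); err != nil {
                    stmt.Close()
                    tx.Rollback()
                    return err
                }
            }
            stmt.Close()
            if err := tx.Commit(); err != nil {
                return err
            }
            buf = buf[:0]
            return nil
        }

        for r := range rows {
            buf = append(buf, r)
            if len(buf) >= batchSize {
                if err := flush(); err != nil {
                    return err
                }
            }
        }
        return flush() // write whatever is left when the channel closes
    }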
How is that simplicity?
Batching updates makes the code far more complex.
Installing any full-strength DB is trivial.
I don't get the 'simplicity'?
That depends entirely on what you're doing. If your workload is heavily transactional, then sure, that might add complexity.
The simplicity is not having a separate process that can fail, requires failover, and needs monitoring.