Comment by hackingonempty
8 hours ago
> The enterprise mindset dictates that you need an out-of-process database server. But the truth is, a local SQLite file communicating over the C-interface or memory is orders of magnitude faster than making a TCP network hop to a remote Postgres server.
I don't want to diss SQLite because it is awesome and more than adequate for many/most web apps but you can connect to Postgres (or any DB really) on localhost over a Unix domain socket and avoid nearly all of the overhead.
It's not much harder to use than SQLite, you get all of the Postgres features, it's easier to run reports or whatever on the live db from a different box, and much easier if it comes time to set up a read replica, HA, or run the DB on a different box from the app.
I don't think running Postgres on the same box as your app is the same class of optimistic over provisioning as setting up a kubernetes cluster.
SQLite smokes Postgres on the same machine even with domain sockets [1]. And this is before you get into using multiple SQLite databases.
What features does Postgres offer over SQLite in the context of running on a single machine with a monolithic app? Application-defined functions [2] mean you can extend it however you need in the same language you use to build your application. It also has a much better backup and replication story thanks to Litestream [3].
- [1] https://andersmurphy.com/2025/12/02/100000-tps-over-a-billio...
- [2] https://sqlite.org/appfunc.html
- [3] https://litestream.io/
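The application-defined functions mentioned in [2] are exposed by most SQLite drivers; a minimal sketch using Python's stdlib `sqlite3` (the `slugify` function and `posts` table here are made-up examples, not from the linked page):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Any function in your application language becomes callable from SQL.
def slugify(title):
    return title.lower().replace(" ", "-")

conn.create_function("slugify", 1, slugify)  # name, number of args, callable

conn.execute("CREATE TABLE posts (title TEXT)")
conn.execute("INSERT INTO posts VALUES ('Hello World')")
slug = conn.execute("SELECT slugify(title) FROM posts").fetchone()[0]
print(slug)  # hello-world
```

Because the function runs in-process, there's no round trip: SQLite calls straight back into your application code for each row.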
The main problem with SQLite is that the defaults are not great, and you should really use it with separate read and write connections, where the application manages the write queue rather than letting SQLite handle it.
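A sketch of that setup, assuming the commonly recommended pragmas (WAL mode, a busy timeout, `synchronous=NORMAL`) rather than any exact list the commenter had in mind; the path and schema are hypothetical:

```python
import sqlite3
import threading

def connect(path):
    conn = sqlite3.connect(path, check_same_thread=False)
    conn.execute("PRAGMA journal_mode=WAL")    # readers don't block the writer
    conn.execute("PRAGMA busy_timeout=5000")   # wait instead of failing with SQLITE_BUSY
    conn.execute("PRAGMA synchronous=NORMAL")  # safe under WAL, much faster than FULL
    return conn

write_conn = connect("app.db")
write_lock = threading.Lock()  # the application owns the write queue

def write(sql, params=()):
    with write_lock:       # one writer at a time, serialized in the app
        with write_conn:   # implicit transaction, committed on exit
            write_conn.execute(sql, params)

read_conn = connect("app.db")  # in practice, one per reader thread or a small pool
```

With writes serialized in the application, SQLite never has two writers contending for the lock, and WAL-mode readers proceed concurrently.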
Thing is though - either of those options is still multiple orders of magnitude faster than running on a remote host. Either will work, either will scale way farther than you reasonably expect it to.
> Sqlite smokes postgres on the same machine even with domain sockets [1].
SQLite on the same machine is akin to calling fwrite. That's fine. But it's also a system constraint, as it forces a one-database-per-instance design with no data shared across nodes. That's fine if you're putting together a site for your neighborhood's mom-and-pop shop, but once you need to handle a request baseline beyond a few hundred TPS and serve traffic beyond your local region, you have no alternative but to run more than one instance of your service in parallel. You can continue to shoehorn the one-database-per-service pattern onto the design, but you're now compelled to find "clever" strategies to sync state across nodes.
Those who know better to not do "clever" simply slap a Postgres node and call it a day.
> SQLite on the same machine is akin to calling fwrite.
Actually 35% faster than fwrite [1].
> This is also a system constraint as it forces a one-database-per-instance design
You can scale incredibly far on a single node and have much better uptime than GitHub or Anthropic. At this rate, maybe even AWS/Cloudflare.
> you need to serve traffic beyond your local region
Postgres still has a single node that can write. So most of the time you end up region sharding anyway. Sharding SQLite is straightforward.
> This is fine if you're putting together a site for your neighborhood's mom and pop shop, but once you need to handle a request baseline beyond a few hundreds TPS
It's actually pretty good for running a real-time multiplayer app with a billion data points on a $5 VPS [2]. There's nothing clever going on here: all the state is on the server and the backend is fast.
> but you're now compelled to find "clever" strategies to sync state across nodes.
That's the neat part: you don't. Because for most things that are not uplink-limited (being a CDN, Netflix, Dropbox), a single node is all you need.
- [1] https://sqlite.org/fasterthanfs.html
- [2] https://checkboxes.andersmurphy.com
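The "sharding SQLite is straightforward" claim above can be sketched as hashing a key to one of N database files; this toy key-value layer is my own illustration (shard count, paths, and schema are all made up), not how the linked project works:

```python
import sqlite3
import zlib

N_SHARDS = 4
# One SQLite file per shard; in a region-sharded setup these would
# live on different nodes, with the router picking the right one.
shards = [sqlite3.connect(f"shard_{i}.db") for i in range(N_SHARDS)]
for s in shards:
    s.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")

def shard_for(key):
    # Stable hash, so the same key always lands on the same database.
    return shards[zlib.crc32(key.encode()) % N_SHARDS]

def put(key, value):
    with shard_for(key) as s:
        s.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (key, value))

def get(key):
    row = shard_for(key).execute("SELECT v FROM kv WHERE k=?", (key,)).fetchone()
    return row[0] if row else None
```

Since each shard is an independent file with its own write lock, shards also multiply write throughput on a single machine.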
13 replies →
https://antonz.org/sqlite-is-not-a-toy-database/ — 240K inserts per second on a single machine in 2021. The problem you describe is real, but the TPS ceiling is wrong by three orders of magnitude on modern hardware.
I wonder what percentage of services run on the Internet exceed a few hundred transactions per second.
2 replies →
I mean, your "This is fine for" is almost literally the whole point of TFA, that you can go a long way, MRR-wise, with a simpler architecture.
FYI, the color gradient on your website is an easy tell that it was vibe coded: https://prg.sh/ramblings/Why-Your-AI-Keeps-Building-the-Same...
A blog that's 11 years old and uses a minimalist CSS framework https://picocss.com ?
It's a static blog that renders markdown... there's literally nothing to code, let alone vibe code.
Looks like the overhead is not insignificant:
(https://gist.github.com/leifkb/1ad16a741fd061216f074aedf1eca...)
I love them both too but that might not be the best metric unless you’re planning to run lots of little read queries. If you’re doing CRUD, simulating that workflow may favor Postgres given the transactional read/write work that needs to take place across multiple concurrent connections.
> I love them both too but that might not be the best metric unless you’re planning to run lots of little read queries.
Exactly. Back in the real world, anyone faced with that sort of use case will simply add a memory cache and not bother with the persistence layer.
1 reply →
This is mostly about inter-process communication. With SQLite, queries run in-process, so you can guarantee no context switching. Postgres running on the same box gets you close, but not all the way: it's still a different process.
This. Run an app on the same box as PG and you can easily be plagued by out-of-memory errors etc. (since there's memory contention between the two processes).
Most important is that the local SQLite database gets proper backups, so a restore goes without issues.
It gets proper backups if you back it up the right way: https://sqlite.org/backup.html
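The online backup API from that link is exposed directly in Python's stdlib `sqlite3` as `Connection.backup()`; unlike `cp` on a live database file, it produces a consistent snapshot even while the database is in use (file names here are hypothetical):

```python
import sqlite3

src = sqlite3.connect("app.db")      # hypothetical live database
dst = sqlite3.connect("backup.db")   # destination snapshot

# Copies pages via SQLite's online backup API: consistent even
# with concurrent readers/writers on src.
src.backup(dst)
dst.close()
```

For continuous rather than point-in-time backups, this is the same mechanism tools like Litestream build on top of WAL streaming.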
Would be nice to see PGLite[1] compared too
1: https://pglite.dev/
A total performance delta of <3s on ~300k transactions is indeed the definition of irrelevant.
Also:
> PostgreSQL (localhost): (. .) SQLite (in-memory):
This is a rather silly example. What do you expect to happen to your data when your node restarts?
Your example makes as much sense as comparing Valkey with Postgres and proceeding to proclaim that the performance difference is not insignificant.
Why are you comparing PostgreSQL to an in-memory SQLite instead of a file-based one? Wow, memory is faster than disk, who would have thought?
Because it doesn't make a difference, because `SELECT 1` doesn't need to touch the database:
(https://gist.github.com/leifkb/d8778422d450d9a3f103ed43258cc...)
4 replies →
It is insignificant if you're doing 100k queries per day, and you gain a lot for your 3 extra seconds a day.
What a useful "my hello-world script is faster than your hello-world script" example.
I have used SQLite with extensions in extreme throughput scenarios. We’re talking running through it millions of documents per second in order to do disambiguation. I won’t say this wouldn’t have been possible with a remote server, but it would have been a significant technical challenge. Instead we packed up the database on S3, and each instance got a fresh copy and hammered away at the task. SQLite is the time tested alternative for when you need performance, not features
> It's not much harder to use than SQLite, you get all of the Postgres features, it's easier to run reports or whatever on the live db from a different box, and much easier if it comes time to setup a read replica, HA, or run the DB on a different box from the app.
Isn't this idea to spend a bit more effort and overhead to get YAGNI features exactly what TFA argues against?
I've been doing that for decades. People seem to simply not know about Unix architecture.
What I like about sqlite is that it's simply one file
But ... when you use the WAL mode, you have 3 files :-)
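The parent's point is easy to demonstrate: once you switch a file-backed database to WAL mode, SQLite keeps a `-wal` (write-ahead log) and `-shm` (shared memory index) file alongside the main file while connections are open. A small sketch, with a made-up path:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(path)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE t (x)")       # the first write materializes the WAL files
conn.execute("INSERT INTO t VALUES (1)")

print(sorted(os.listdir(os.path.dirname(path))))
# ['app.db', 'app.db-shm', 'app.db-wal']
```

On a clean close of the last connection, SQLite checkpoints the WAL back into the main file and removes the two extras, so at rest it's one file again.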
I mean, you’re not wrong about the facts, but it’s also pretty trivial to migrate the data from SQLite into a separate Postgres server later, if it turns out you do need those features after all. But most of the time, you don’t.
I bet that takes more time than the 5 extra minutes it takes to set up Postgres on the same box upfront.
That's just swapping another enterprise-focused concern into the mix. Your database connection latency is absolutely not a concerning part of your system.
Author's own 'auth' project works with sqlite and postgres.
IIRC, TCP/IP through localhost actually benchmarked faster than Unix sockets because it was optimized harder. Might've been fixed by now. Unix sockets give you the advantage of authentication based on the user ID of whoever's connecting.
My experience with SQLite for server-based apps has been that as your app grows, you almost always eventually need something bigger than SQLite and have to migrate anyway. For a server-based app, where minimizing deployment complexity isn't an extremely important concern, and with mixed reads and writes, it's rarely a bad idea to use Postgres or MariaDB from the start. Yes, there are niche scenarios where SQLite on the server might be better, but they're niche.
ORDERS OF MAGNITUDE NEWS