
Comment by andersmurphy

6 hours ago

Sqlite smokes postgres on the same machine, even with domain sockets [1]. This is before you get into using multiple sqlite databases.

What features does postgres offer over sqlite in the context of running a monolithic app on a single machine? Application functions [2] mean you can extend it however you need, in the same language you use to build your application. It also has a much better backup and replication story thanks to litestream [3].

- [1] https://andersmurphy.com/2025/12/02/100000-tps-over-a-billio...

- [2] https://sqlite.org/appfunc.html

- [3] https://litestream.io/

The main problem with sqlite is that the defaults are not great, and you should really use it with separate read and write connections, where the application manages the write queue rather than letting sqlite handle it.
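To make the point above concrete, here is a minimal sketch of that setup in Python's sqlite3: non-default pragmas plus a single dedicated writer thread that drains an application-managed queue, while reads use their own connections. Pragma choices and names (`DB_PATH`, `submit_write`) are illustrative, not a prescription.

```python
import queue
import sqlite3
import threading

DB_PATH = "app.db"  # hypothetical database file

def tune(conn: sqlite3.Connection) -> None:
    # Non-default pragmas commonly recommended for server workloads.
    conn.execute("PRAGMA journal_mode=WAL")    # readers don't block the writer
    conn.execute("PRAGMA synchronous=NORMAL")  # safe with WAL, fewer fsyncs
    conn.execute("PRAGMA busy_timeout=5000")   # wait on locks instead of failing
    conn.execute("PRAGMA foreign_keys=ON")     # off by default

# One dedicated writer: the application serializes writes itself
# instead of letting concurrent connections contend for the write lock.
write_queue: queue.Queue = queue.Queue()

def writer_loop() -> None:
    conn = sqlite3.connect(DB_PATH)
    tune(conn)
    while True:
        sql, params, done = write_queue.get()
        if sql is None:  # shutdown sentinel
            break
        conn.execute(sql, params)
        conn.commit()
        done.set()
    conn.close()

def submit_write(sql: str, params: tuple = ()) -> None:
    # Block until the writer thread has committed this statement.
    done = threading.Event()
    write_queue.put((sql, params, done))
    done.wait()
```

Read connections never touch `write_queue`; with WAL they see consistent snapshots while the single writer commits.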

Thing is, though: either of those options is still multiple orders of magnitude faster than running on a remote host. Either will work, and either will scale way farther than you reasonably expect it to.

> Sqlite smokes postgres on the same machine even with domain sockets [1].

SQLite on the same machine is akin to calling fwrite. That's fine. It's also a system constraint, as it forces a one-database-per-instance design with no data shared across nodes. This is fine if you're putting together a site for your neighborhood's mom-and-pop shop, but once you need to handle a request baseline beyond a few hundred TPS, and to serve traffic beyond your local region, you have no alternative but to run more than one instance of your service in parallel. You can continue to shoehorn your one-database-per-service pattern onto the design, but you're now compelled to find "clever" strategies to sync state across nodes.

Those who know better than to do "clever" simply slap in a Postgres node and call it a day.

  • > SQLite on the same machine is akin to calling fwrite.

    Actually, it's 35% faster than fwrite [1].

    > This is also a system constraint as it forces a one-database-per-instance design

    You can scale incredibly far on a single node and have much better uptime than github or anthropic. At this rate, maybe even AWS/cloudflare.

    > you need to serve traffic beyond your local region

    Postgres still has a single node that can write, so most of the time you end up region sharding anyway. Sharding SQLite is straightforward.

    > This is fine if you're putting together a site for your neighborhood's mom and pop shop, but once you need to handle a request baseline beyond a few hundreds TPS

    It's actually pretty good for running a real-time multiplayer app with a billion datapoints on a $5 VPS [2]. There's nothing clever going on here: all the state is on the server and the backend is fast.

    > but you're now compelled to find "clever" strategies to sync state across nodes.

    That's the neat part: you don't. Because for most things that aren't uplink-limited (being a CDN, Netflix, Dropbox), a single node is all you need.

    - [1] https://sqlite.org/fasterthanfs.html

    - [2] https://checkboxes.andersmurphy.com
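The "sharding SQLite is straightforward" claim above can be sketched in a few lines: route each shard key to its own database file, so every node owns its data outright. The shard paths and the modulo routing are assumptions for illustration, not anything from the linked posts.

```python
import sqlite3

# Hypothetical shard layout: one SQLite file per user bucket
# (in practice these might map to regions or tenants).
SHARD_PATHS = ["shard_0.db", "shard_1.db", "shard_2.db"]

def shard_index(user_id: int) -> int:
    # Stable routing: the same user always lands on the same file.
    return user_id % len(SHARD_PATHS)

def connect_for(user_id: int) -> sqlite3.Connection:
    # Each shard is an independent database; no cross-node locks,
    # but cross-shard queries become the application's responsibility.
    return sqlite3.connect(SHARD_PATHS[shard_index(user_id)])
```

The trade-off is the usual one: routing is trivial, but anything that spans shards (joins, global counts) has to be done in application code.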

    • > You can scale incredibly far on a single node

      Nonsense. You can't outrun physics. The latency across the Atlantic is already ~100ms, and from the US to Asia Pacific it can be ~300ms. If you care about performance and need to shave off ~200ms of latency, you deploy an instance closer to your users. It makes absolutely no sense to frame the rationale around performance if your system architecture imposes a massive networking penalty just to shave a couple of ms off roundtrips to a data store. Absurd.


  • I wonder what percentage of services run on the Internet exceed a few hundred transactions per second.

    • I’ve seen multimillion-dollar “enterprise” projects get nowhere close to that. Of course, they all run on scalable, cloud-native infrastructure costing at least a few grand a month.

    • I think the better question to ask is what services peak at a few hundred transactions per second?

  • I mean, your "This is fine for" is almost literally the whole point of TFA, that you can go a long way, MRR-wise, with a simpler architecture.