Comment by prisenco

4 days ago

The biggest one is redundancy. Architecting with read replicas is much easier with Postgres than SQLite because of its server model.

SQLite on the server is a fantastic starter database. Dead simple to set up, highly performant, and it scales way higher (vertically) than anyone gives it credit for.

But there is certainly a point where you'll have to scale out instead of up, and while there are some great solutions for that (rqlite, LiteFS, dqlite, Marmot), replication isn't inherent to SQLite's design.

Should replication really be a concern of the DB layer?

Replication means sending queries that alter data to multiple machines, right?

Shouldn't that be done by software one level up, which takes in queries over some network protocol and forwards them to all machines?

That sounds more logical to me.
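
For concreteness, here's a minimal sketch of that "one level up" idea in Go. Everything in it is hypothetical: the replica addresses and the /exec endpoint are invented, and each machine is assumed to run a small shim that applies whatever statement it receives to its local SQLite file.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// Hypothetical machines, each holding its own local SQLite file
// behind a small HTTP shim that executes posted statements.
var replicas = []string{
	"http://db1.internal:8080/exec",
	"http://db2.internal:8080/exec",
	"http://db3.internal:8080/exec",
}

// fanout forwards a data-altering statement to every machine.
func fanout(stmt string) error {
	for _, url := range replicas {
		resp, err := http.Post(url, "text/plain", bytes.NewBufferString(stmt))
		if err != nil {
			// A machine we couldn't reach is now missing this
			// write; nothing here ever replays it.
			return fmt.Errorf("replica %s: %w", url, err)
		}
		resp.Body.Close()
	}
	return nil
}

func main() {
	if err := fanout(`INSERT INTO users(name) VALUES ('alice')`); err != nil {
		fmt.Println("replication failed:", err)
	}
}
```

The comment in the error path already hints at the catch: a partial failure leaves the machines holding different data.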

  • Historically, yes. Databases were software concerned with both storage and networking.

    It's fine to want to separate those out, but it's not easy to do so and there are reasons they've been coupled for decades.

    • What makes it hard?

      A single DB taking the writes, with a proxy spreading them out to multiple read-only DBs, sounds easy at first.
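
      For illustration, here's a self-contained Go sketch of why it only *sounds* easy. In-memory logs stand in for the machines; nothing here is a real replication API:

      ```go
      package main

      import (
      	"fmt"
      	"sync"
      )

      // A "replica" is just an ordered log of applied statements,
      // standing in for one machine's local SQLite file.
      type replica struct {
      	mu  sync.Mutex
      	log []string
      }

      func (r *replica) apply(stmt string) {
      	r.mu.Lock()
      	defer r.mu.Unlock()
      	r.log = append(r.log, stmt)
      }

      func main() {
      	a, b := &replica{}, &replica{}
      	var wg sync.WaitGroup

      	// Two clients each fan their statement out to both replicas.
      	// Nothing enforces the same arrival order on both machines,
      	// so for non-commutative statements the replicas can end up
      	// in different states.
      	for _, stmt := range []string{
      		"UPDATE accounts SET balance = balance - 10 WHERE id = 1",
      		"UPDATE accounts SET balance = balance * 2  WHERE id = 1",
      	} {
      		wg.Add(1)
      		go func(s string) {
      			defer wg.Done()
      			a.apply(s)
      			b.apply(s)
      		}(stmt)
      	}
      	wg.Wait()
      	fmt.Println("replica A:", a.log)
      	fmt.Println("replica B:", b.log)
      }
      ```

      Run it a few times and the two logs can disagree. Imposing one global write order, and replaying missed writes after a crash, is exactly what consensus protocols like Raft solve, which is why rqlite and dqlite are built on it.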
