Comment by Zambyte

4 days ago

When your application scales beyond one machine that needs access to the same database, PostgreSQL becomes the obviously better choice over SQLite. Until that point, SQLite is a fine, and honestly underrated, choice.

DuckDB is another option worth considering: it is embedded like SQLite, but geared toward analytical queries.

Should the concept of "machines" really be a concern of the DB layer?

SQLite already allows multiple connections, so putting it on a server and adding a program that speaks a network protocol and proxies the queries to the DB sounds more logical to me? Something like the sketch below.
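A minimal sketch of such a proxy in Python, to make the idea concrete. Everything here is illustrative, not a real protocol: the client sends one SQL statement per connection and half-closes the socket, the server streams back tab-separated rows, and the file name `app.db` and port `7878` are made up.

```python
import socketserver
import sqlite3

DB_PATH = "app.db"  # hypothetical database file

class SQLiteProxy(socketserver.StreamRequestHandler):
    def handle(self):
        # Read one SQL statement; the client half-closes to signal the end.
        sql = self.rfile.read().decode("utf-8").strip()
        con = sqlite3.connect(DB_PATH)  # short-lived connection per request
        try:
            rows = con.execute(sql).fetchall()
            con.commit()  # no-op for reads, persists writes
            for row in rows:
                self.wfile.write(("\t".join(map(str, row)) + "\n").encode("utf-8"))
        finally:
            con.close()

if __name__ == "__main__":
    # One handler thread per client; SQLite's own file locking
    # coordinates the concurrent connections to the database file.
    with socketserver.ThreadingTCPServer(("0.0.0.0", 7878), SQLiteProxy) as server:
        server.serve_forever()
```

A client connects, sends a SQL string, shuts down its write side, and reads rows back until EOF. The point is only that the network layer sits in front of SQLite rather than inside it.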

  • And after all of that you basically have something that looks like Postgres or MySQL.

    • My feeling is that I would have something better.

      Because I can use SQLite and its "a file is all you need" approach as long as I don't need multiple machines.

      And only bring in the other software (the proxy) when I need it.

  • High-performance software is written acknowledging the reality that it will run on hardware. Databases tend to be a class of software that is hyper-focused on performance.

    Writing a networked application that uses SQLite as a database is perfectly reasonable. You're just making the decision to lift the layer of abstraction that is concerned with machines from the DB up into your application, which may or may not be a reasonable thing to do (see the sketch below).
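One way to picture "lifting that layer into the application" is a small Python sketch where the app codes against a tiny backend interface, starts with the local file, and swaps in a networked backend (e.g. the proxy sketched above) only when a second machine appears. The class names, the tab-separated line protocol, and the host/port are all hypothetical.

```python
import socket
import sqlite3
from typing import Iterable, Protocol

class QueryBackend(Protocol):
    """The only storage interface the rest of the application sees."""
    def query(self, sql: str) -> Iterable[tuple]: ...

class LocalSQLite:
    """The single-machine case: a file is all you need."""
    def __init__(self, path: str) -> None:
        self.con = sqlite3.connect(path)

    def query(self, sql: str) -> Iterable[tuple]:
        return self.con.execute(sql).fetchall()

class RemoteSQLite:
    """Forwards SQL to a proxy like the one sketched above.

    Simplification: every value comes back as a string, because the
    toy tab-separated protocol carries no type information.
    """
    def __init__(self, host: str, port: int) -> None:
        self.addr = (host, port)

    def query(self, sql: str) -> Iterable[tuple]:
        with socket.create_connection(self.addr) as s:
            s.sendall(sql.encode("utf-8"))
            s.shutdown(socket.SHUT_WR)  # half-close: "end of statement"
            payload = s.makefile("r", encoding="utf-8").read()
        return [tuple(line.split("\t")) for line in payload.splitlines()]

# Single-machine deployment:
db: QueryBackend = LocalSQLite("app.db")
# ...and when a second machine shows up, one line changes:
# db = RemoteSQLite("db-host", 7878)
```

The trade-off is exactly the one described: the application, not the database, now owns the abstraction that decides whether a query crosses a machine boundary.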