Comment by robertclaus

What are the use cases for this? I can't imagine designing a database schema to use this in a typical product. Is it intended for hybrid applications, so local user data can be backed up directly alongside their account info?

I can think of plenty.

The most interesting one for me is if you're running a SaaS product like Notion where your users create custom applications that manage their own small schema-based data tables.

Letting users create full custom PostgreSQL tables can get complex - do you want to manage tens of thousands of weird custom tables in a PostgreSQL schema somewhere?

I'd much rather manage tens of thousands of rows in a table where one of the columns is a BLOB with a little SQLite database in it.
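
A minimal sketch of what that could look like, reusing the empty_sqlite()/execute_sqlite() functions quoted later in this thread - the SQLITE column type and all table/column names here are illustrative:

    -- One row per user-created application; that application's entire
    -- schema lives inside the SQLITE value, not in the shared
    -- Postgres catalog.
    CREATE TABLE user_apps (
        id       BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        owner_id BIGINT NOT NULL,
        database SQLITE NOT NULL DEFAULT empty_sqlite()
    );

    -- A tenant adding a custom table is a row update, not a DDL
    -- migration on the host database.
    UPDATE user_apps
    SET database = execute_sqlite(database,
        'CREATE TABLE tasks (id INTEGER PRIMARY KEY, title TEXT)')
    WHERE id = 1;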

  • > Letting users create full custom PostgreSQL tables can get complex - do you want to manage tens of thousands of weird custom tables in a PostgreSQL schema somewhere?

    Yeah, I'd be fine with that - Postgres has the concept of databases, and of schemas within those databases. If you really wanted to build a product like that, I'd suggest starting with per-tenant schemas that leverage table inheritance as appropriate (see the sketch at the end of this comment). The permissions would be pretty easy to manage.

    Though in a lot of the cases where I've actually seen this done, every client ends up with a dedicated server (or container - whatever tech you use to do it, something completely isolated from other instances), because user version management ends up being a huge issue. When you're building something that custom, it's highly likely that version migrations need to be done with client oversight to ensure everything actually works.

    I have yet to find a real-world case where the inner-platform effect is the right solution. Usually when tools like that are selected, the software ends up being so generic and flexible that it's useless. Custom application/BI environment development relies on judiciously telling users they can't have most features - the hard part is figuring out which features are necessary and which ones you can cut to reduce bloat.
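
    A rough sketch of that schema-per-tenant layout, with a shared parent table so the common shape is defined once (all names here are illustrative):

      -- The common shape lives once in a parent table...
      CREATE TABLE public.users_base (
          id    BIGINT NOT NULL,
          email TEXT   NOT NULL
      );

      -- ...and each tenant gets a schema whose tables inherit from it.
      CREATE SCHEMA tenant_42;
      CREATE TABLE tenant_42.users () INHERITS (public.users_base);

      -- Permissions are per-schema: one role per tenant.
      CREATE ROLE tenant_42_role;
      GRANT USAGE ON SCHEMA tenant_42 TO tenant_42_role;
      GRANT SELECT, INSERT, UPDATE, DELETE
          ON ALL TABLES IN SCHEMA tenant_42 TO tenant_42_role;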

    • Notion has 100 million users; managing schema-per-tenant at our scale sounds like a complexity nightmare. We have 480+ identical schemas across 100+ Postgres hosts, and that already takes a lot of brainpower & engineering time to manage T_T

  • Why not use JSONB for this kind of thing? Store the schemas in one table - one row per client, or perhaps one per table per client - and store the data in another table, segregated by customer and table type, with row data kept in a JSONB field that conforms to that table's schema.

    I normally don't like using JSONB when I could have a rigorous schema, but this sort of application seems reasonable.
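
    Something along these lines (names and layout are mine):

      -- One row per client-defined table, holding its declared schema.
      CREATE TABLE client_tables (
          client_id  BIGINT NOT NULL,
          table_name TEXT   NOT NULL,
          schema_def JSONB  NOT NULL,  -- e.g. {"title": "text", "done": "boolean"}
          PRIMARY KEY (client_id, table_name)
      );

      -- One row per record, segregated by customer and table type; the
      -- application validates "data" against schema_def on write.
      CREATE TABLE client_rows (
          client_id  BIGINT NOT NULL,
          table_name TEXT   NOT NULL,
          row_id     BIGINT GENERATED ALWAYS AS IDENTITY,
          data       JSONB  NOT NULL,
          PRIMARY KEY (client_id, table_name, row_id),
          FOREIGN KEY (client_id, table_name)
              REFERENCES client_tables (client_id, table_name)
      );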

You manage a fleet of devices that need to receive operating parameters regularly; the parameters are complicated, and SQLite is a great mechanism for shipping them.

So on the backend you have a Postgres database that contains the device details etc., as well as the operating parameters for each device.

You can update the operating parameters as part of a postgres transaction so either all the BLOBs are updated, or none.
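
For example (table and column names are illustrative; execute_sqlite() is the function quoted elsewhere in this thread):

    -- Either every device's parameter blob in the fleet updates,
    -- or none of them do.
    BEGIN;
    UPDATE devices
    SET params = execute_sqlite(params,
        'UPDATE settings SET value = 500 WHERE key = ''poll_interval_ms''')
    WHERE fleet_id = 7;
    COMMIT;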

Using /tmp on the Postgres cluster (server) is a bit of a hack; it would be nicer to have memory-based SQLite blobs.

In terms of security, you get Postgres row level security, so each SQLite value is protected in the same way as the rest of the row.
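
A sketch of that, assuming each device connects with an app.device_id setting (the policy and names are illustrative):

    -- Each device can see only its own row, and therefore only its
    -- own SQLite blob.
    ALTER TABLE devices ENABLE ROW LEVEL SECURITY;
    CREATE POLICY device_isolation ON devices
        USING (device_id = current_setting('app.device_id')::BIGINT);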

The top line of the README says: "Embed an SQLite database in your PostgreSQL table. AKA multitenancy has been solved."

But I'm still having trouble grokking the intricacies of it. In a sense, I guess it gives you well-isolated individual SQLite DBs, and you'd have to go out of your way to join across them. That said, does PostgreSQL manage and pool all the writes correctly, so you don't need to worry about SQLite concurrency issues?

  • If by solving multitenancy they mean

      CREATE TABLE tenants (
        id       BIGINT NOT NULL,
        database SQLITE DEFAULT execute_sqlite(
            empty_sqlite(),
            'CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)'
            -- ...and all the other tables for each tenant
        )
      );
    

    then they don't need to make joins between sqlite dbs.

    Your other concerns are very real. Those sqlite dbs could become very large. I prefer the use case depicted in another reply: preparing sqlite dbs before shipping them to their own devices. Or maybe receiving them and performing analysis, perhaps after importing them into ordinary Postgres tables. Or similar scenarios in which the whole db is read or written at once. Anyway, once we have a tool, we start using it.
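
    For the shipping case, reading a whole db out at once is just a column read - whether the SQLITE type casts cleanly to BYTEA is my assumption:

      -- Pull one tenant's entire database out as bytes; the client
      -- writes the result straight to a .sqlite file for the device.
      SELECT database::BYTEA AS sqlite_file
      FROM tenants
      WHERE id = 42;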

    • > then they don't need to make joins between sqlite dbs.

      The extension could also provide custom index access methods (considering that SQLite only has a handful of column types in the first place). That would allow you to incorporate the keys in the index heaps, as opposed to the table heaps - boom, you get bitmap index scans for joins, i.e. GIN but with a bit more redundancy.

  • Each of the column values is an instance of a SQLite database, so I assume (without looking at the source) that access is properly multi-threaded as needed.

    So there are no cross-SQLite-database connections or multiple writers going on.

  • You could join over them, but not really in the way you're thinking.

    Each of the columns that are databases would be updated when the functions execute.

    You could do weird crap like an INSERT/DELETE as part of a Postgres-level SELECT.
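
    For instance (reusing execute_sqlite() from above; whether the mutation persists without an enclosing UPDATE depends on the extension's semantics):

      -- SQLite DML executed as a side effect of a Postgres SELECT.
      SELECT id,
             execute_sqlite(database,
                 'DELETE FROM sessions WHERE expired = 1')
      FROM tenants;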

    • You can do that with any function already. This isn't new because of nested databases.

I’m thinking maybe you’d like to use litefs for multi-tenant dbs close to the tenant. But perhaps you’ll want a centralized billing/reports database under postgres as well?

So, instead of saving the org's client sqlite db to cloud storage, you save it to the centralized db column. LiteFS probably doesn't support this yet, but it wouldn't be too hard to add.