Comment by kunley

2 days ago

Speed, anyone?

How long does it take to update a table of, say, 1k rows? 1m rows? Same when subqueries and joins are involved to calculate what's to be updated?

The current implementation is writing out the DB to `/tmp` then reading the resulting file back and writing it to the column.

So on the bright side, updating 1k rows takes the same amount of time as updating one row. On the other hand, every write is a full table write (actually two: once to `/tmp` and once into the column).

I don't think there is a way to do this efficiently with the current API: PostgreSQL is MVCC, so it needs to write out each row version in full (unless it has some sort of partial string sharing, which I don't think it does). A better version of this might write every page of the SQLite DB as a separate row, so that only the changed pages need to be updated.
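The page-per-row idea could be sketched like this (a hypothetical helper, assuming SQLite's default 4096-byte page size; the real page size is set via `PRAGMA page_size`):

```python
PAGE_SIZE = 4096  # SQLite's default; check PRAGMA page_size for the real DB


def changed_pages(old: bytes, new: bytes, page_size: int = PAGE_SIZE):
    """Yield (page_number, page_bytes) for pages that differ between two
    snapshots of the database file -- only these rows would need an UPDATE."""
    n_pages = (len(new) + page_size - 1) // page_size
    for i in range(n_pages):
        new_page = new[i * page_size:(i + 1) * page_size]
        old_page = old[i * page_size:(i + 1) * page_size]
        if new_page != old_page:
            yield i, new_page
```

A small localized change would then touch only one or two page rows instead of rewriting the whole blob, which plays much better with MVCC.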

  • > The current implementation is writing out the DB to `/tmp` then reading the resulting file back and writing it to the column.

    I think there was already another comment where someone told OP how to solve that.
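    Presumably via SQLite's serialize API, which hands you the database file as bytes without touching the filesystem. A minimal sketch in Python 3.11+ (the table here is made up for illustration):

    ```python
    import sqlite3

    # Throwaway in-memory database standing in for the real one.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (x INTEGER)")
    con.execute("INSERT INTO t VALUES (42)")
    con.commit()

    # serialize() returns the entire database file as bytes -- no /tmp needed.
    blob = con.serialize()

    # deserialize() loads those bytes into another connection, exactly as if
    # the file had been written out and read back.
    con2 = sqlite3.connect(":memory:")
    con2.deserialize(blob)
    assert con2.execute("SELECT x FROM t").fetchone() == (42,)
    ```

    The resulting `blob` is what you would write into the PostgreSQL column.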