Comment by ninkendo

5 hours ago

That’s what we should do, yes. The problem is that we were just sorta careless about interleaving database calls with the “work” we were doing. So the function that calls that slow external service also takes a &PgConnection as an argument, because it wants to bump a timestamp in a table somewhere after the call is complete. Which means you need to already have a connection open just to call that function, and so on.
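A minimal sketch of that anti-pattern, with made-up names (`Conn`, `sync_widget`, `bump_timestamp` are illustrative stand-ins, not real APIs from our codebase or Diesel):

```rust
// `Conn` stands in for a checked-out &PgConnection.
struct Conn;

impl Conn {
    fn bump_timestamp(&self, table: &str) {
        // in real code: UPDATE <table> SET updated_at = now()
        println!("bumped timestamp in {table}");
    }
}

// The slow external call takes &Conn as an argument, so the caller must
// already hold a connection for the entire duration of the slow call,
// even though the connection is only needed for one write at the end.
fn sync_widget(conn: &Conn) -> Result<(), String> {
    // imagine this line blocks for seconds on an external service
    let _response = "slow external service call";
    conn.bump_timestamp("widgets"); // the only reason we needed `conn`
    Ok(())
}

fn main() {
    let conn = Conn; // connection checked out up front, held throughout
    sync_widget(&conn).unwrap();
}
```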

If the codebase is large, and full of that kind of pattern (interleaving db writes with other work), the compiler plugin is nice for (a) giving you a TODO list of all the places you’re doing it wrong, and (b) preventing any new code from doing this while you’re fixing all the existing cases.

One idea was to bulk-replace everything so that we pass a reference to the pool itself around, instead of a checked-out connection/transaction, and only check out a connection for each query on-demand. But that’s dangerous: some of these functions are doing writes, and you may be relying on transaction rollback behavior if something fails. If you were doing 3 pieces of “work” in a single db transaction before, and the third one failed, the transaction got rolled back for all 3. But if you split that into 3 different short-lived connections, now only the last of the 3 db operations is rolled back.

So you can’t just find/replace; you need to go through and consider how to re-order the code so that the database calls happen “logically last”, but are still grouped together into a single transaction as before, to avoid subtle consistency bugs.
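A toy illustration of that rollback hazard, assuming nothing beyond a made-up in-memory `Db` with closure-based transactions (loosely shaped like Diesel’s `Connection::transaction`, but not its real API): three writes in one transaction roll back together, while the same writes on three short-lived transactions do not.

```rust
#[derive(Default)]
struct Db {
    rows: Vec<&'static str>,
}

impl Db {
    // Run `f` against a scratch copy; commit only if it succeeds.
    fn transaction<F>(&mut self, f: F) -> Result<(), String>
    where
        F: FnOnce(&mut Vec<&'static str>) -> Result<(), String>,
    {
        let mut scratch = self.rows.clone();
        f(&mut scratch)?;    // any Err discards `scratch` (rollback)
        self.rows = scratch; // commit
        Ok(())
    }
}

fn main() {
    // Before: all three writes grouped in one transaction.
    let mut db = Db::default();
    let res = db.transaction(|rows| {
        rows.push("write 1");
        rows.push("write 2");
        Err("write 3 failed".to_string()) // third step fails
    });
    assert!(res.is_err());
    assert_eq!(db.rows.len(), 0); // all three rolled back together

    // After a naive find/replace: three separate short-lived transactions.
    let mut db = Db::default();
    let _ = db.transaction(|rows| { rows.push("write 1"); Ok(()) });
    let _ = db.transaction(|rows| { rows.push("write 2"); Ok(()) });
    let _ = db.transaction(|_rows| Err("write 3 failed".to_string()));
    assert_eq!(db.rows.len(), 2); // writes 1 and 2 survive: inconsistent
}
```

The fix described above amounts to keeping the first shape: do the slow non-db work up front, then issue the grouped writes inside one `transaction` closure at the end.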