
Comment by jorangreef

3 months ago

> Selling an event out takes a long time to do frequently because tickets are VERY frequently not purchased--they're just reserved and then they fall back into open seating.

TigerBeetle actually includes native support for "two phase pending transfers" out of the box, to make it easy to coordinate with third party payment systems while users have inventory in their cart:

https://docs.tigerbeetle.com/coding/two-phase-transfers/
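
To make this concrete, here is a minimal in-memory sketch of the pending -> post/void semantics from those docs, in Python. It simulates the state machine only (Account and Ledger here are illustrative, not the real client API); the pending/posted fields mirror the balance fields TigerBeetle tracks per account:

    from dataclasses import dataclass, field

    @dataclass
    class Account:
        debits_pending: int = 0
        debits_posted: int = 0
        credits_pending: int = 0
        credits_posted: int = 0

    @dataclass
    class Ledger:
        accounts: dict = field(default_factory=dict)
        pending: dict = field(default_factory=dict)  # id -> (debit, credit, amount)

        def create_pending(self, tid, debit, credit, amount):
            # Phase one: reserve the amount while the third-party payment
            # (or the user's cart) is still in flight.
            self.accounts[debit].debits_pending += amount
            self.accounts[credit].credits_pending += amount
            self.pending[tid] = (debit, credit, amount)

        def post(self, tid):
            # Phase two (success): move the reservation to posted balances.
            debit, credit, amount = self.pending.pop(tid)
            self.accounts[debit].debits_pending -= amount
            self.accounts[debit].debits_posted += amount
            self.accounts[credit].credits_pending -= amount
            self.accounts[credit].credits_posted += amount

        def void(self, tid):
            # Phase two (failure/timeout): release the reservation.
            debit, credit, amount = self.pending.pop(tid)
            self.accounts[debit].debits_pending -= amount
            self.accounts[credit].credits_pending -= amount

    ledger = Ledger(accounts={"buyer": Account(), "venue": Account()})
    ledger.create_pending(tid=1, debit="buyer", credit="venue", amount=50)
    ledger.post(tid=1)  # or ledger.void(tid=1) if the payment falls through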

> Also, an act like Oasis is going to have a lot of reserved seating. Running through algorithms to find contiguous seats is going to be tougher than this example and it's difficult to parallelize if you're truly giving the next person in the queue the actual best seats remaining.

It's actually not that hard (and probably easier than in a general-purpose DBMS) to express this in TigerBeetle using transfers with deterministic IDs. For example, you could check (and reserve) up to 8K contiguous seats in a single query to TigerBeetle, with a P100 of less than 100ms.
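
A hypothetical sketch of the idea in Python: the transfer ID is a pure function of (event, seat), so a second attempt to reserve the same seat collides deterministically, and a whole row of seats can be attempted as one all-or-nothing batch (in TigerBeetle, linked transfers give you the atomicity). All names and sizes here are illustrative:

    def transfer_id(event_id: int, seat: int) -> int:
        # Deterministic: the same seat always maps to the same ID,
        # so duplicate reservations are rejected as ID collisions.
        return (event_id << 32) | seat

    def reserve_contiguous(reserved: set, event_id: int, first_seat: int, count: int) -> bool:
        batch = [transfer_id(event_id, s) for s in range(first_seat, first_seat + count)]
        if any(tid in reserved for tid in batch):
            return False  # one seat taken: the whole linked batch fails
        reserved.update(batch)  # all-or-nothing reservation
        return True

    reserved: set = set()
    assert reserve_contiguous(reserved, event_id=7, first_seat=100, count=4)
    assert not reserve_contiguous(reserved, event_id=7, first_seat=102, count=4)  # overlaps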

> There are many other business rules that accrue after years of features to win Oasis like business unfortunately that will result in more DB calls and add contention.

Yes, contention is the killer.

We added an Amdahl's Law calculator to TigerBeetle's homepage to let you see the impact: https://tigerbeetle.com/#general-purpose-databases-have-an-o...
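
The calculator applies Amdahl's Law: with a serial (contended) fraction s of the workload, N-way parallelism caps the speedup at 1 / (s + (1 - s) / N). A quick sketch:

    def amdahl_speedup(serial_fraction: float, n: int) -> float:
        # Amdahl's Law: the serial fraction bounds the speedup
        # no matter how much parallel hardware you add.
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

    print(amdahl_speedup(0.05, 64))     # ~15.4x on 64 cores
    print(amdahl_speedup(0.05, 10**6))  # -> 20x asymptote: 5% contention caps you at 20x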

As you move "the data to the code" in interactive transactions with multiple queries, processing more and more business rules, you hold row locks across the network. TigerBeetle's design inverts this and moves "the code to the data": declarative queries let the DBMS enforce the transactional business rules directly in the database, with a rich set of debit/credit primitives and an audit trail.
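
As a sketch of what "the code to the data" buys you, here is a simulation (not the real engine) of the semantics behind TigerBeetle's debits_must_not_exceed_credits account flag: the invariant is checked atomically with the write, inside the database, rather than in application code holding locks across round trips:

    class OverdraftError(Exception):
        pass

    class Account:
        def __init__(self, debits_must_not_exceed_credits: bool = False):
            self.debits = 0
            self.credits = 0
            self.limit_flag = debits_must_not_exceed_credits

        def debit(self, amount: int) -> None:
            # The business rule runs atomically with the write, where
            # the data lives: no network round trip, no row lock held
            # while the application thinks.
            if self.limit_flag and self.debits + amount > self.credits:
                raise OverdraftError("transfer would overdraw the account")
            self.debits += amount

    wallet = Account(debits_must_not_exceed_credits=True)
    wallet.credits = 100
    wallet.debit(60)  # accepted
    try:
        wallet.debit(60)  # rejected declaratively, inside the "database"
    except OverdraftError:
        pass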

It's almost like stored procedures were a good idea.

  • If only. But you also need to fix the internal concurrency control of the DBMS storage engine. TB here is very different to PG.

    For example, if you have 8K transactions through 2 accounts, a naive system might read the 2 accounts, update their balances, then write the 2 accounts… for all 8K (!) transactions.

    Whereas TB does vectorized concurrency control: read the 2 accounts, update them 8K times, write the 2 accounts.
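
    A toy contrast of the two access patterns in Python (a simulation, not TB's engine; the point is the account I/O count, 4 operations instead of 32K):

        def naive(transfers, store):
            for amount in transfers:             # 8K read-modify-write cycles,
                a = store["a"]; b = store["b"]   # each touching both hot rows
                store["a"] = a - amount
                store["b"] = b + amount

        def vectorized(transfers, store):
            a, b = store["a"], store["b"]        # read the 2 accounts once
            for amount in transfers:             # apply 8K updates in memory
                a -= amount
                b += amount
            store["a"], store["b"] = a, b        # write the 2 accounts once

        store1 = {"a": 1_000_000, "b": 0}
        store2 = dict(store1)
        naive([1] * 8192, store1)
        vectorized([1] * 8192, store2)
        assert store1 == store2  # same result, a fraction of the I/O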

    This is why stored procedures typically only get you about a 10x win; you don't see the same 1000x as with TB, especially under power-law contention.

    • Huge fan of what TigerBeetle promotes. Even in simple systems/projects, batching and reducing contention can be a massive win. Batching + a single application writer alone in something like SQLite can get you to pretty ridiculous inserts/updates per second (although transactions then happen at the batch level).
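
      For example, with Python's stdlib sqlite3 and a single writer, committing one transaction per batch instead of per row (on disk, per-row commits are orders of magnitude slower, since each commit forces a sync):

          import sqlite3, time

          conn = sqlite3.connect(":memory:")  # use a file path to see the sync costs
          conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v INTEGER)")

          rows = [(i,) for i in range(100_000)]
          start = time.perf_counter()
          with conn:  # one transaction for the whole batch
              conn.executemany("INSERT INTO t (v) VALUES (?)", rows)
          print(f"{len(rows) / (time.perf_counter() - start):,.0f} inserts/s")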

      I sometimes wonder how many fewer servers we would need if the approaches promoted by Tiger Style were more widespread.

      What data structure does TigerBeetle use for its client? I'm assuming it's multi-writer, single-reader. I've always wondered what the best choice is there. A reverse LMAX Disruptor (multiple producers, single consumer)?
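
      To make the pattern concrete, here is a stdlib sketch of multiple producers feeding one consumer that drains in batches. This is just the general shape of an MPSC batching queue, not TigerBeetle's actual client internals:

          import queue, threading

          q: "queue.Queue[int]" = queue.Queue()
          BATCH_MAX = 8192

          def producer(n: int) -> None:
              for i in range(n):
                  q.put(i)  # many producer threads may call this concurrently

          def consumer(total: int) -> None:
              done = 0
              while done < total:
                  batch = [q.get()]              # block for the first item,
                  while len(batch) < BATCH_MAX:  # then drain whatever is ready
                      try:
                          batch.append(q.get_nowait())
                      except queue.Empty:
                          break
                  done += len(batch)             # submit `batch` as one request

          producers = [threading.Thread(target=producer, args=(10_000,)) for _ in range(4)]
          drainer = threading.Thread(target=consumer, args=(40_000,))
          for t in producers:
              t.start()
          drainer.start()
          for t in producers:
              t.join()
          drainer.join()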