Comment by kelseydh

6 hours ago

I recently did performance testing of Tigerbeetle for a financial transactions company. The key thing to understand about Tigerbeetle's speed is that it achieves its very high throughput by batching transactions.

----

In our testing:

For batch transactions, Tigerbeetle delivered truly impressive speeds: ~250,000 writes/sec.

For processing transactions one at a time, we found a large slowdown: ~105 writes/sec.

This is much slower than PostgreSQL, which handled row updates at ~5,495 writes/sec in the same test. (However, in practice PostgreSQL's row-update rate will be far lower in real-world OLTP workloads, due to contention on hot fee accounts and aggregate accounts for sub-accounts.)

One way to keep those faster speeds with Tigerbeetle for real-time workloads is to microbatch: collect incoming real-time transactions and submit them to Tigerbeetle at an interval of a second or less, taking advantage of Tigerbeetle's blazing-fast batch processing. Nonetheless, this remains an important caveat to understand about its speed.
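
A minimal sketch of that microbatching pattern in Go, assuming the official tigerbeetle-go client's types package (the flush window, channel wiring, and flush callback are my own scaffolding, not part of the Tigerbeetle API):

    package ledger

    import (
        "time"

        "github.com/tigerbeetle/tigerbeetle-go/pkg/types"
    )

    // runBatcher drains incoming transfers from a channel and flushes them
    // as one batch whenever the batch fills up or the flush window elapses.
    // flush is assumed to be synchronous (e.g. a wrapper around the
    // client's CreateTransfers call), so the slice can be safely reused.
    func runBatcher(in <-chan types.Transfer, flush func([]types.Transfer)) {
        const maxBatch = 8189                            // per-request cap
        ticker := time.NewTicker(100 * time.Millisecond) // flush window, tunable
        defer ticker.Stop()

        batch := make([]types.Transfer, 0, maxBatch)
        for {
            select {
            case t, ok := <-in:
                if !ok { // input closed: flush what's left and stop
                    if len(batch) > 0 {
                        flush(batch)
                    }
                    return
                }
                batch = append(batch, t)
                if len(batch) == maxBatch { // batch full: flush immediately
                    flush(batch)
                    batch = batch[:0]
                }
            case <-ticker.C: // window elapsed: flush whatever accumulated
                if len(batch) > 0 {
                    flush(batch)
                    batch = batch[:0]
                }
            }
        }
    }

The window length is the knob here: shorter windows approach per-transaction latency, while longer ones fill bigger batches.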

Doesn't the Tigerbeetle client automatically batch requests?

  • We didn't observe any automatic batching when testing Tigerbeetle with their Go client. I think we instantiated a new Go client for every transaction when benchmarking, which is typically how one uses such a client in app code. This ties into our other complaint: the client does so little that you have to roll a lot of custom logic around it to batch real-time transactions quickly.

    • I'm a bit worried you think instantiating a new client for every request is common practice. If you did that with Postgres or MySQL clients, you would see the same performance degradation.

      PHP created mysqli and PDO specifically to deal with this, because of the known issues with per-request connections.
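
      Something like this, for illustration (the NewClient signature has changed across tigerbeetle-go versions, so treat it as a sketch):

          package main

          import (
              "log"

              tb "github.com/tigerbeetle/tigerbeetle-go"
              "github.com/tigerbeetle/tigerbeetle-go/pkg/types"
          )

          func main() {
              // Create one client at startup and share it across all
              // request handlers, as you would a SQL connection pool.
              client, err := tb.NewClient(types.ToUint128(0), []string{"3000"})
              if err != nil {
                  log.Fatal(err)
              }
              defer client.Close()
              // Hand client to your handlers; never create one per request.
          }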

    • Interesting, I thought I had heard that this is done automatically, but I guess it only happens across concurrent tasks/threads. It is still necessary to batch in application code.

      https://docs.tigerbeetle.com/coding/clients/go/#batching

      Nonetheless, it seems odd to benchmark it with single queries, because Tigerbeetle's whole point is shoving 8,189 items into the DB as fast as possible. If you populate that buffer with only one item, you're throwing away all that space and efficiency.
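
      Roughly what filling the batch looks like (a sketch: the field values are placeholders, and it assumes a long-lived client as above and recent tigerbeetle-go field types):

          // Fill one request with many transfers instead of issuing
          // one CreateTransfers call per transfer.
          transfers := make([]types.Transfer, 0, 8189)
          for i := uint64(0); i < 8189; i++ {
              transfers = append(transfers, types.Transfer{
                  ID:              types.ToUint128(i + 1), // must be unique
                  DebitAccountID:  types.ToUint128(1),
                  CreditAccountID: types.ToUint128(2),
                  Amount:          types.ToUint128(10),
                  Ledger:          1,
                  Code:            1,
              })
          }
          results, err := client.CreateTransfers(transfers) // one round trip
          // err covers transport failures; results lists only rejected transfers.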

Did the company end up using it?

  • We didn't rule out using Tigerbeetle, but the drop in non-batch performance was disappointing, and it's one reason we haven't prioritised switching our transaction ledger from PostgreSQL to Tigerbeetle.

    There was also poor Ruby support for Tigerbeetle at the time, but that has improved recently and there is now a (3rd party) Ruby client: https://github.com/antstorm/tigerbeetle-ruby/