Comment by kelseydh

5 hours ago

We didn't observe any automatic batching when testing Tigerbeetle with their Go client. I think we initiated a new Go client for every transaction when benchmarking, which is typically how one uses such a client in app code. This ties into our other complaint: the client handles so little for you that you have to roll a lot of custom logic around it to batch real-time transactions quickly.

I'm a bit worried that you think instantiating a new client for every request is common practice. If you did that with Postgres or MySQL clients, you would also see performance degrade.

PHP created mysqli and PDO to deal with exactly this, because recreating client connections on every request is a well-known performance cost.

Interesting, I thought I had heard that this is done automatically, but I guess that only happens across concurrent tasks/threads. You still need to batch in application code.

https://docs.tigerbeetle.com/coding/clients/go/#batching

But nonetheless, it seems weird to test it with singular queries, because Tigerbeetle's whole point is shoving 8,189 items into the DB as fast as possible. So if you populate that buffer with only one item, you're throwing away all of that space and efficiency.
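Roughly, that means keeping one long-lived client per process and handing `CreateTransfers` a slice of transfers instead of calling it once per transfer. A minimal sketch with the Go client, assuming a recent `tigerbeetle-go` release (cluster ID and `Amount` as `Uint128`; older releases have a different `NewClient` signature); the address, account IDs, ledger, and code are placeholders:

```go
package main

import (
	"log"

	tb "github.com/tigerbeetle/tigerbeetle-go"
	"github.com/tigerbeetle/tigerbeetle-go/pkg/types"
)

func main() {
	// One long-lived client for the whole process, not one per request.
	client, err := tb.NewClient(types.ToUint128(0), []string{"3000"})
	if err != nil {
		log.Fatalf("creating client: %v", err)
	}
	defer client.Close()

	// Accumulate transfers and submit them as a single slice, instead of
	// one CreateTransfers call per transfer.
	batch := make([]types.Transfer, 0, 8189)
	for i := 0; i < 100; i++ {
		batch = append(batch, types.Transfer{
			ID:              types.ID(), // TigerBeetle's own time-based 128-bit ID
			DebitAccountID:  types.ToUint128(1),
			CreditAccountID: types.ToUint128(2),
			Amount:          types.ToUint128(10),
			Ledger:          1,
			Code:            1,
		})
	}

	// One call for the whole slice; the returned results describe only the
	// transfers that were rejected.
	results, err := client.CreateTransfers(batch)
	if err != nil {
		log.Fatalf("creating transfers: %v", err)
	}
	for _, r := range results {
		log.Printf("transfer at index %d rejected: %v", r.Index, r.Result)
	}
}
```

The open question in this thread is who fills that slice: with one incoming request per transfer, something in the app has to collect them, which is the custom logic being complained about.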

  • We certainly are losing that efficiency, but this is typically how real-time transactions work: you write real-time endpoints that send transactions off as they come in. Having to roll anything more sophisticated than that around the client introduces significant complexity.

    We concluded that where Tigerbeetle really shines is when you're a large entity, like a central bank or corporation, sending massive transaction files to other entities. Tigerbeetle is amazing for moving large numbers of batched transactions at once.

    We found other quirks with Tigerbeetle that make it difficult to use as a drop-in replacement for handling transactions in PostgreSQL. E.g. Tigerbeetle's primary ID key isn't a UUIDv7 or a ULID; it's a custom ID they engineered for performance. The most metadata you can save on a transaction is a 128-bit unsigned integer in the user_data_128 field. While this lets them achieve lightning-fast batch transaction processing benchmarks, the database stores so little metadata that you risk being bottlenecked by all the attributes you'll need to wrap around the transaction in PostgreSQL to make it work in a real application (rough sketch of that coupling below).
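    For concreteness, here is a rough sketch of the coupling I mean, again assuming a recent `tigerbeetle-go` release. The `paymentTransfer` helper and the idea of keying a PostgreSQL payments row off user_data_128 are hypothetical illustration, not an official Tigerbeetle pattern:

    ```go
    package example

    import "github.com/tigerbeetle/tigerbeetle-go/pkg/types"

    // paymentTransfer is a hypothetical helper: the transfer itself can only carry
    // fixed-width integer metadata, so everything else about the payment (customer,
    // currency, invoice, audit fields) has to live in a PostgreSQL row that this
    // transfer points back to.
    func paymentTransfer(postgresPaymentID uint64, amountCents uint64) types.Transfer {
    	return types.Transfer{
    		ID:              types.ID(),         // custom time-based 128-bit ID, not UUIDv7 or ULID
    		DebitAccountID:  types.ToUint128(1), // placeholder accounts
    		CreditAccountID: types.ToUint128(2),
    		Amount:          types.ToUint128(amountCents),
    		Ledger:          1,
    		Code:            1,
    		// The widest metadata slot is a 128-bit unsigned integer; here it holds
    		// a foreign key back to a hypothetical "payments" row in PostgreSQL.
    		UserData128: types.ToUint128(postgresPaymentID),
    	}
    }
    ```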