
Comment by infogulch

19 hours ago

I guess the idea is to have all writes go through a central server with local read replicas for improved read perf. The default litestream sync interval is 1s. I bet many use-cases would be satisfied with a few seconds delay for cross-region notifications.

It's good for pub/sub but not for a claim/ack workflow, unless you do If-None-Match CAS semantics on a separate filesystem, which, actually, yeah, that's probably fine. Feels heavy on S3 ops. But! You do save on inter-AZ networking, per the WarpStream hypothesis.
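The compare-and-set claim the parent describes can be sketched against any store with conditional writes. Here's a minimal sketch using an in-memory dict standing in for S3 (all names are hypothetical); on real S3 the version token would be the object's ETag and the guard an `If-Match` / `If-None-Match` condition on the PUT:

```python
import uuid


class CasStore:
    """In-memory stand-in for an object store with conditional writes."""

    def __init__(self):
        self.objects = {}  # key -> (etag, value)

    def get(self, key):
        # Returns (etag, value) or None if the key doesn't exist.
        return self.objects.get(key)

    def put_if_match(self, key, expected_etag, value):
        """Write only if the current ETag matches (If-Match semantics).

        Pass expected_etag=None to require the key not exist yet
        (If-None-Match semantics). Returns the new ETag on success,
        or None when the write loses the race.
        """
        current = self.objects.get(key)
        current_etag = current[0] if current else None
        if current_etag != expected_etag:
            return None  # someone else updated the object first
        new_etag = uuid.uuid4().hex
        self.objects[key] = (new_etag, value)
        return new_etag


def try_claim(store, key, worker_id):
    """Claim a pending job: flip its state to 'claimed' iff it's unchanged."""
    found = store.get(key)
    if found is None:
        return False
    etag, job = found
    if job["state"] != "pending":
        return False  # already claimed or done
    claimed = dict(job, state="claimed", owner=worker_id)
    return store.put_if_match(key, etag, claimed) is not None
```

If two workers race on the same key, exactly one `put_if_match` succeeds and the loser re-reads and moves on; that's the whole CAS guarantee, at the cost of one extra round trip per lost race.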

  • Claims kill this, IMO.

    Unless you have a single "reader", don't mind the delay, and don't worry about redoing a bunch of notifications after a crash (and so can delay claims significantly), concurrency will kill this.

    • I wrote a simple queue implementation after reading the Turbopuffer blog post on queues on S3. My implementation wrote a complete SQLite file to S3 on every enqueue/dequeue/ack, using the previous ETag for compare-and-set.

      The experiment and back-of-the-envelope calculations show it can only support ~5 jobs/sec. The only major lever for increasing throughput is larger group commits.

      I don't think shipping CDC instead of whole SQLite files would change the calculation, since the number of writes was the limiting factor in this experiment.

      So yes, at a minimum of three writes per job, this design supports only very low throughputs.
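The ~5 jobs/sec figure is consistent with a simple latency budget: if each job needs at least three serialized conditional writes, and each S3 round trip takes on the order of 65 ms, a single CAS writer tops out around 5 jobs/sec, and only packing more jobs into each commit raises that. A rough model (the latency number is an assumption for illustration, not a measurement from the experiment):

```python
def max_throughput(writes_per_job=3, put_latency_s=0.065, batch_size=1):
    """Upper bound on jobs/sec for a single serialized CAS writer.

    Each commit costs `writes_per_job` sequential round trips, but a
    commit can carry `batch_size` jobs (group commit), so throughput
    scales linearly with batch size while the commit time stays fixed.
    """
    commit_time = writes_per_job * put_latency_s
    return batch_size / commit_time


print(round(max_throughput()))               # ~5 jobs/sec, no batching
print(round(max_throughput(batch_size=10)))  # ~51 jobs/sec with group commits
```

This also shows why shipping CDC instead of whole SQLite files doesn't help: `writes_per_job` stays the same, and the round trips, not the payload size, dominate the commit time.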