
Comment by fatal94

7 days ago

Sure, if you're working on a small homelab with minimal to no processing volume.

The second you approach any kind of scale, this falls apart and/or you end up with a more expensive and worse version of Kafka.

I think there is a wide spectrum between small-homelab and Google scale.

I was surprised how far SQLite goes, with some sharding on modern SSDs, for those in-between-scale services/SaaS.
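A hedged sketch of what that sharding can look like: route each key to one of N SQLite files by stable hash, so related rows always land in the same shard. The shard count, table, and column names are illustrative assumptions, and `:memory:` stands in for per-shard `.db` files.

```python
import sqlite3
from zlib import crc32

N_SHARDS = 4  # illustrative shard count

# One connection per shard; in practice each would open its own .db file.
shards = [sqlite3.connect(":memory:") for _ in range(N_SHARDS)]
for db in shards:
    db.execute("CREATE TABLE IF NOT EXISTS events (key TEXT, payload TEXT)")

def shard_for(key: str) -> sqlite3.Connection:
    # Stable hash -> shard index, so a given key always maps to the same file.
    return shards[crc32(key.encode()) % N_SHARDS]

def put(key: str, payload: str) -> None:
    db = shard_for(key)
    db.execute("INSERT INTO events (key, payload) VALUES (?, ?)", (key, payload))
    db.commit()

put("user:42", "signup")
```

Because each shard is an independent file, write throughput scales with the number of shards (each has its own write lock), at the cost of no cross-shard transactions.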

  • What you're doing is fine for a homelab or for learning. But barring a very specific reason beyond just not liking Kafka, it's bad. The second that pattern needs to fan out to support even 50+ producers/consumers, the overhead and complexity of managing already-solved problems becomes a very bad design choice.

    Kafka already solves this problem and gives me message durability, near-infinite scale-out, sharding, delivery guarantees, etc. out of the box. I do not care to develop this myself, reshard databases, or productionize any of it.

    • SQLite can do 40,000 transactions per second; that's going to handle a lot more than a homelab.

      Not everything needs to be big and complicated.

    • Some people don't and won't need 50+ producers/consumers for a long while, if ever. Rewriting the code at that point may be less costly than operating Kafka in the interim. Kafka also has a higher potential for failure than SQLite.


"Any kind of scale" No, there's a long way of better and more straightforward solutions than the simple SELECT

(SELECT * from EVENTS where TIMESTAMP > LAST_TS LIMIT 50) for example
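A minimal sketch of that polling pattern using Python's built-in `sqlite3`. Table and column names are illustrative, and the in-memory database stands in for a real file; note the ORDER BY, without which LIMIT returns rows in no guaranteed order and the consumer could skip events.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, ts INTEGER, payload TEXT)")
conn.executemany(
    "INSERT INTO events (ts, payload) VALUES (?, ?)",
    [(i, f"event-{i}") for i in range(1, 101)],
)

def poll(conn, last_ts, batch_size=50):
    # Fetch the next batch of events strictly newer than the last seen timestamp.
    return conn.execute(
        "SELECT id, ts, payload FROM events WHERE ts > ? ORDER BY ts LIMIT ?",
        (last_ts, batch_size),
    ).fetchall()

batch = poll(conn, last_ts=0)
# The consumer persists the last timestamp it saw and resumes from there.
last_ts = batch[-1][1] if batch else 0
```

A real consumer would store `last_ts` durably (e.g. in its own table) so it survives restarts; with a monotonically assigned `id` column, cursoring on `id` instead of `ts` also sidesteps duplicate timestamps.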