Comment by j45
12 days ago
Very few things start out needing extremely high-scale event processing.
There's also an order of magnitude more events to handle once you're doing event-based work.
This seems like a perfectly reasonable starting point and gateway that keeps things organized for when the time comes.
Most things don’t scale that big.
So perhaps don’t use kafka at all? E.g. Adyen used postgresql [1] as a queue until the outgrew. In this case it seems there are a lot of things that can go south in case of major issue on the event pipeline. Unless the throughput is low.. but then why kafka?
[1] https://www.adyen.com/knowledge-hub/design-to-duty-adyen-arc...
RDBMSes are pretty well understood and very flexible, even more so with the likes of JSONB, where parts of your schema can be (de)normalized for convenience, reducing joins in practice. Modern hardware is MUCH more powerful than it was even a decade and a half ago. You can scale vertically a LOT with an RDBMS like PostgreSQL, so it's a good fit for more use cases as a result.
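To illustrate the JSONB point, here's a minimal sketch (Python with psycopg2; the orders table, columns, and connection string are all made up for the example) of denormalizing line items into the parent row and querying them without a join:

    import psycopg2  # common PostgreSQL driver; every name below is illustrative

    conn = psycopg2.connect("dbname=app")
    with conn, conn.cursor() as cur:
        # Keep line items inside the order row itself instead of a separate joined table
        cur.execute("""
            CREATE TABLE IF NOT EXISTS orders (
                id         bigserial PRIMARY KEY,
                customer   text NOT NULL,
                line_items jsonb NOT NULL DEFAULT '[]'
            )
        """)
        cur.execute(
            "INSERT INTO orders (customer, line_items) VALUES (%s, %s::jsonb)",
            ("acme", '[{"sku": "A-1", "qty": 2}, {"sku": "B-7", "qty": 1}]'),
        )
        # Containment query: orders that include a given SKU, no join required
        cur.execute(
            "SELECT id, customer FROM orders WHERE line_items @> %s::jsonb",
            ('[{"sku": "A-1"}]',),
        )
        print(cur.fetchall())

A GIN index on the jsonb column keeps that kind of containment query fast as the table grows.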
Personally, at this point, I'm more inclined to reach for fewer tools than to take on certain types of complexity. That said, I'd probably introduce valkey/redis earlier on for some things; I think it may be better suited to MQ-type duties than PG when you don't want an actual MQ or a more complex service bus... but PG works.
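As a rough sketch of the valkey/redis route (redis-py client; the host and queue name are assumptions), a plain list already behaves like a basic work queue:

    import json
    import redis  # redis-py also talks to valkey

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    # Producer: push a job onto the list
    r.lpush("jobs:email", json.dumps({"to": "user@example.com", "template": "welcome"}))

    # Consumer: block until a job arrives, then process it
    item = r.brpop("jobs:email", timeout=5)  # (key, payload) tuple, or None on timeout
    if item:
        job = json.loads(item[1])
        print("processing", job)

A plain list like this has no acknowledgement or redelivery, so anything you can't afford to drop is better served by the PG pattern below or a real MQ.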
Especially for systems where you aren't breaking out queues because of the number of jobs so much as for the benefit of logically separating the work from the requestor. Email (for most apps), report generation, etc... all types of work that an RDBMS is more than suitable for.
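For the PG-as-queue case, the usual trick is SELECT ... FOR UPDATE SKIP LOCKED so concurrent workers never claim the same row. A minimal sketch, assuming a hypothetical jobs table with status and payload columns:

    import psycopg2  # connection string and table layout are illustrative

    conn = psycopg2.connect("dbname=app")

    def work_one_job():
        """Claim one pending job, process it, and mark it done, all in one transaction."""
        with conn, conn.cursor() as cur:
            cur.execute("""
                SELECT id, payload
                  FROM jobs
                 WHERE status = 'pending'
                 ORDER BY id
                 LIMIT 1
                 FOR UPDATE SKIP LOCKED
            """)
            row = cur.fetchone()
            if row is None:
                return False  # nothing to do
            job_id, payload = row
            print("processing", job_id, payload)  # e.g. send the email, build the report
            cur.execute("UPDATE jobs SET status = 'done' WHERE id = %s", (job_id,))
            return True

    while work_one_job():
        pass

If a worker dies mid-job, the transaction rolls back and the row goes back to pending for the next worker; the trade-off is holding a transaction open for the duration of the work, which is fine for short jobs like email sends.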
Probably not worth using a sledgehammer (Kafka) for an ant.
Lots of people do resume-driven building, only to realize that running something like Kafka at the start vs. at scale can be very different.
It's best to learn events from the ground up, including how, when, and where you may outgrow existing implementation approaches, let alone technologies.