Comment by awesome_dude

7 hours ago

Sorry, but what's stopping the DLQ from being a different topic on that same Kafka cluster? I get that the consumer(s) might be dead, preventing them from moving the message to the DLQ topic, but if that's the case then no messages are being consumed at all.

If the problem is that the consumers themselves cannot write to the DLQ, then that feels like either Kafka is dying (no more writes allowed) or the consumers have been misconfigured.
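For what it's worth, the pattern in question is tiny. Here's a minimal sketch, assuming the official Java kafka-clients library; the `.dlq` topic suffix, the `DlqRouter` name, and the header names are all illustrative, not any standard:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.nio.charset.StandardCharsets;

public class DlqRouter {
    private final KafkaProducer<byte[], byte[]> producer;

    public DlqRouter(KafkaProducer<byte[], byte[]> producer) {
        this.producer = producer;
    }

    // Forward a record the consumer could not process to "<topic>.dlq" on the
    // SAME cluster, keeping provenance in headers so it can be replayed later.
    public void sendToDlq(ConsumerRecord<byte[], byte[]> rec, Exception cause) {
        ProducerRecord<byte[], byte[]> out =
                new ProducerRecord<>(rec.topic() + ".dlq", rec.key(), rec.value());
        out.headers()
           .add("dlq.original.topic", rec.topic().getBytes(StandardCharsets.UTF_8))
           .add("dlq.original.partition",
                String.valueOf(rec.partition()).getBytes(StandardCharsets.UTF_8))
           .add("dlq.original.offset",
                String.valueOf(rec.offset()).getBytes(StandardCharsets.UTF_8))
           .add("dlq.error", String.valueOf(cause).getBytes(StandardCharsets.UTF_8));
        // If this send fails, the cluster itself is in trouble -- which is the
        // scenario the rest of the thread is arguing about.
        producer.send(out);
    }
}
```

Keeping the original topic/partition/offset and the error in headers is what makes the DLQ replayable and auditable later.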

Edit: In fact there seems to be a self-inflicted problem being created here - having the DLQ on a different system, whether it be another instance of Kafka, or Postgres, or what have you, is really just creating another point of failure.

> Edit: In fact there seems to be a self-inflicted problem being created here - having the DLQ on a different system, whether it be another instance of Kafka, or Postgres, or what have you, is really just creating another point of failure.

There's a balance. Do you want your Kafka cluster provisioned for double your normal event intake rate, just in case a worst-case failure causes 100% of events to get DLQ'd? In that scenario you've doubled the writes to the shared cluster, which could itself cause failures to produce to the original topic.

In that sort of system, failing to produce to the original topic is probably what you want to avoid most. As long as your retention period outlasts your time to recover from an incident like that, priority 1 is often "make sure the events are recorded so they can be processed later."

IMO a good architecture here cleanly separates transient failures (don't DLQ; retry with backoff and don't advance the consumer group) from "permanently cannot process" failures (DLQ only these), unlike in the linked article. That greatly reduces the odds of an "everything is being DLQ'd!" event cascading by overloading seldom-stressed parts of the system.

It also makes it much easier to keep your DLQ in one place, and you can solve some of the visibility problems from the article with a consumer that puts summary info elsewhere. There's still a chance a bug wrongly rejects everything, but you become much more robust against transient downstream deps having a high blast radius. (One nasty case: if different messages have wildly different sets of downstream deps, do you want some blocking all the others? IMO messages should then be partitioned so you can still move forward on the others.)
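To make that separation concrete, here's a minimal sketch, again assuming the official Java kafka-clients with `enable.auto.commit` set to false; the `TransientFailure`/`PermanentFailure` exceptions and the `.dlq` suffix are hypothetical stand-ins for your own classification and routing:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;

import java.time.Duration;

public class ClassifyingConsumer {
    // Hypothetical failure taxonomy: processing must tell us whether a failure
    // is retryable (downstream dep flaking) or permanent (poison message).
    static class TransientFailure extends RuntimeException {}
    static class PermanentFailure extends RuntimeException {}

    private final KafkaConsumer<byte[], byte[]> consumer; // enable.auto.commit=false
    private final KafkaProducer<byte[], byte[]> producer;
    private long backoffMs = 1_000;

    ClassifyingConsumer(KafkaConsumer<byte[], byte[]> c, KafkaProducer<byte[], byte[]> p) {
        this.consumer = c;
        this.producer = p;
    }

    void run() throws InterruptedException {
        while (true) {
            ConsumerRecords<byte[], byte[]> batch = consumer.poll(Duration.ofMillis(500));
            boolean sawTransient = false;
            for (TopicPartition tp : batch.partitions()) {
                for (ConsumerRecord<byte[], byte[]> rec : batch.records(tp)) {
                    try {
                        process(rec); // your business logic
                    } catch (PermanentFailure e) {
                        // DLQ only what can never succeed; the offset advances
                        // past it. (Fire-and-forget here; a real system would
                        // check the send result before committing.)
                        producer.send(new ProducerRecord<>(
                                rec.topic() + ".dlq", rec.key(), rec.value()));
                    } catch (TransientFailure e) {
                        // Don't DLQ, don't advance: rewind this partition to the
                        // failed record and stop it; other partitions keep flowing.
                        consumer.seek(tp, rec.offset());
                        sawTransient = true;
                        break;
                    }
                }
            }
            // Commits the current position per partition -- for a rewound
            // partition that's the failed offset, so nothing is skipped.
            consumer.commitSync();
            if (sawTransient) {
                Thread.sleep(backoffMs); // keep this below max.poll.interval.ms
                backoffMs = Math.min(backoffMs * 2, 30_000);
            } else {
                backoffMs = 1_000; // reset after a clean batch
            }
        }
    }

    private void process(ConsumerRecord<byte[], byte[]> rec) {
        // deserialize, call downstream deps, throw Transient/PermanentFailure
    }
}
```

Note the per-partition rewind: a transient failure stalls only the partition it happened on, which is exactly the "let the others move forward" property from the parenthetical above.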

  • I think you're right that if the DLQ is overused it can potentially cripple the whole event broker, but I don't think having a second system that could fall over for the same reason AND a host of other reasons is a good plan. FTR I think doubling Kafka's provisioned capacity is a simpler, easier, cheaper, and more reliable approach.

    BUT, you are 100% right to point to what I think is the proper solution, and that is to treat the DLQ with some respect, not as a bit bucket where things get dumped because the wind isn't blowing in the right direction.