
Comment by ruslan_talpa

9 years ago

Short answer: no, it doesn't (at least not yet; the release was a week ago :) ), but dismissing it only for that would be superficial. Most cases don't need those guarantees, i.e. you can tolerate a few lost messages.

For example, say you are implementing real-time updates in your app using this. What is the probability of a network glitch happening at the exact moment two users are logged into the same system and an event produced by one needs to be synced to the other? And even if that user does lose the event, is it really critical, considering he will soon move to another screen and reload the data entirely?

From RabbitMQ's point of view, the db & bridge are producers. What you are really asking here is: does the "producer" guarantee delivery? To do that, the producer would itself need to become a "queue" system for the case where it fails to communicate with RabbitMQ.
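To make concrete what "the producer becoming a queue itself" would mean, here is a minimal sketch. The `publish` callable and `BufferingProducer` class are hypothetical, not part of any tool mentioned here: the idea is just that a producer spools messages into a local backlog whenever the broker is unreachable, then flushes them once it is back.

```python
from collections import deque

class BufferingProducer:
    """Hypothetical sketch: a producer that spools messages locally
    when the broker is unreachable, then retries them later."""

    def __init__(self, publish):
        self.publish = publish   # user-supplied callable; raises ConnectionError on failure
        self.backlog = deque()   # local "queue" of not-yet-delivered messages

    def send(self, message):
        # Append first, then flush, so ordering is preserved even
        # when older messages are still stuck in the backlog.
        self.backlog.append(message)
        self.flush()

    def flush(self):
        while self.backlog:
            try:
                self.publish(self.backlog[0])
            except ConnectionError:
                return False     # broker still down; keep the backlog
            self.backlog.popleft()
        return True
```

Note that for a short-lived script (the usual web case discussed below) an in-memory backlog like this dies with the process, which is exactly why the guarantee is hard to provide there.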

Considering we are talking about the web here, the producers are usually scripts invoked by an HTTP call, so no system offers such a guarantee when communication with RabbitMQ fails.

However, I think the network (within a datacenter) is reliable enough that there is no point in overengineering for that case.

If the system can tolerate a few seconds of downtime, it's easy enough to implement a heartbeat system that restarts this tool when needed. You can also run 2-3 instances of it for redundancy and then use the correlationId to dedup the messages.
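The dedup step above can be sketched as follows. This is a hypothetical consumer-side helper, not code from the tool being discussed: with 2-3 redundant bridges each publishing the same event, the consumer keeps only the first copy it sees, keyed by the message's correlationId, with a bounded memory window so a long-running consumer doesn't grow forever.

```python
from collections import OrderedDict

class Deduper:
    """Hypothetical consumer-side dedup: drop repeat deliveries of the
    same event, identified by its correlationId."""

    def __init__(self, capacity=10000):
        self.seen = OrderedDict()   # correlationId -> None, in insertion order
        self.capacity = capacity    # bound memory for long-running consumers

    def is_duplicate(self, correlation_id):
        if correlation_id in self.seen:
            return True             # already processed: redundant copy
        self.seen[correlation_id] = None
        if len(self.seen) > self.capacity:
            self.seen.popitem(last=False)   # evict the oldest id
        return False
```

The bounded window is a trade-off: an event redelivered after more than `capacity` newer events would slip through, which is usually acceptable under the "tolerate a few duplicates/losses" stance taken in this comment.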

A more robust tool is https://github.com/confluentinc/bottledwater-pg, but it's for Kafka, and the major downside for me is that it can't be used with RDS, since it's installed as a PostgreSQL plugin.