Comment by imtringued

1 day ago

>But the moment the "fail safe if something dies while processing a message" becomes directly coupled with DB transactions, you have created something very brittle and cumbersome.

The standard workflow for processing something from a queue is to keep track of all the messages you have already processed in the transactional database and request only the remaining unprocessed ones. Often this is as simple as storing the last successfully processed message ID in the database and updating it in the same transaction that processes the message. If an error occurs you roll the transaction back, which also rolls back the last message ID. The consumer will automatically re-request the failed message on the next attempt, giving you out-of-the-box idempotency on top of at-least-once delivery.

My approach is to have started/completed fields, where started records the system/process and timestamp of when an item was claimed. Marking started is how a worker tags and takes the next item. It also allows a sweeper to find stale claims and retry them.

That said, I tend to reach for Redis/RabbitMQ or Kafka relatively early, depending on my specific needs and what's already in use. My main historical use of a DBMS queue has been sending and tracking emails, back when the email service I was using was having hiccups.