Ofc I wouldn't use it for extremely high-scale event processing, but it's a great default for a message/task queue for 90% of business apps. If you're processing under a few hundred million events/tasks per day with fewer than ~10k concurrent processes dequeuing from it, it's what I'd default to.
I work on apps that use such a PG-based queue system, and it provides indispensable features for us that we couldn't achieve easily or cleanly with a normal queue system, such as being able to dynamically adjust the priority/order of tasks being processed and easily query/report on the content of the queue. We have many other interesting features built into it that are more specific to our needs as well, which I'm more hesitant to describe in detail here.
Very few things start out at extremely high-scale event processing.
There's also an order of magnitude more events when doing event-based processing work.
This seems like a perfectly reasonable starting point and gateway that keeps things organized for when the time comes.
Most things don’t scale that big.
So perhaps don't use Kafka at all? E.g. Adyen used PostgreSQL [1] as a queue until they outgrew it. In this case it seems there are a lot of things that can go south in case of a major issue on the event pipeline. Unless the throughput is low... but then why Kafka?
[1] https://www.adyen.com/knowledge-hub/design-to-duty-adyen-arc...
The biggest thing to watch out for with this approach is that you will inevitably have some failure or bug that will 10x, 100x, or 1000x the rate of dead messages, and that will overload your DLQ database. You need a circuit breaker or rate limit on it.
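One way to sketch that guard directly in SQL, assuming a hypothetical dlq_events table with a created_at column (the 1000-per-minute budget and the column names are made up for illustration):

    -- Only accept a new dead letter if fewer than 1000 arrived in the
    -- last minute; the caller treats "0 rows inserted" as the circuit
    -- being open and NACKs or drops instead of inserting.
    INSERT INTO dlq_events (event_type, payload, error_message, status)
    SELECT 'payment.failed', '{"id": 42}'::jsonb, 'timeout', 'pending'
    WHERE (SELECT count(*)
           FROM dlq_events
           WHERE created_at > now() - interval '1 minute') < 1000;

An index on created_at keeps that guard subquery cheap.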
I worked on an app that sent an internal email with stack trace whenever an unhandled exception occurred. Worked great until the day when there was an OOM in a tight loop on a box in Asia that sent a few hundred emails per second and saturated the company WAN backbone and mailboxes of the whole team. Good times.
This is the same risk with any DLQ.
The idea behind a DLQ is that it will retry (with some backoff) eventually, and if it fails enough, it will stay there. You need monitoring to observe the messages that can't escape the DLQ. Ideally, nothing should ever stay in the DLQ, and if it does, it's something that should be fixed.
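For illustration, a minimal backoff sketch against the retry_after column from the article's schema, assuming a hypothetical attempts counter and a 10-attempt cap:

    -- On a failed attempt: bump the counter, push the next retry out
    -- exponentially (1, 2, 4, ... minutes), and park the row for good
    -- after 10 attempts so monitoring can pick it up.
    UPDATE dlq_events
    SET attempts    = attempts + 1,
        retry_after = now() + interval '1 minute' * (2 ^ attempts),
        status      = CASE WHEN attempts + 1 >= 10
                           THEN 'failed' ELSE 'pending' END
    WHERE id = $1;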
What do you use for the monitoring of DLQs?
This! The only thing worse than your main queue backing up is dropping items on their way into the DLQ because it can't stay up.
If you can’t deliver to the DLQ, then what? Then you’re missing messages either way. Who cares if it’s down this way or the other?
Not necessarily. If you can't deliver the message somewhere, you don't ACK it, and the sender can choose what to do (retry, back off, etc.)
Sure, it's unavailability of course, but it's not data loss.
The point is to not take the whole server down with it. Keeps the other applications working.
Sure, but you still need to design around this problem. It’s going to be a happy accident that everything turns out fine if you don’t.
Could one put the DLQ messages on a queue and have a consumer ingest into pg?
(The queue probably isn't down if you've just pulled a message off it.)
It will happen eventually in any system.
No need to look down on PG because it makes this more approachable and is no longer a specialized skill.
> FOR UPDATE SKIP LOCKED
Learned something new today. I knew what FOR UPDATE did, but somehow I've never RTFM'd hard enough to know about the SKIP LOCKED directive. That's pretty cool.
Yes, SKIP LOCKED is great. In practice you nearly always want LIMIT, which the article did not mention. Be careful if your selection spans multiple tables: only the relations you explicitly lock are protected (see SELECT … FOR UPDATE OF t1, t2). ORDER BY matters because it controls fairness and retry behaviour. Also watch ANALYZE: autoanalyze only runs once the dead-to-live tuple threshold is crossed, and on large or append-heavy tables with lots of old rows this can lag, leading to poor plans and bad SKIP LOCKED performance. Finally, think about deletion and lifecycle: deleting on success, scheduled cleanup (consider pg_cron), or partitioning old data all help keep it efficient.
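For reference, the usual shape of the claim query with LIMIT and ORDER BY in place, written against the status/retry_after columns from the article's index definitions (the batch size is illustrative):

    BEGIN;
    -- Claim up to 10 due messages; rows already locked by another
    -- worker are skipped rather than waited on.
    SELECT id, payload
    FROM dlq_events
    WHERE status = 'pending'
      AND retry_after <= now()
    ORDER BY retry_after        -- oldest-due first, for fairness
    LIMIT 10
    FOR UPDATE SKIP LOCKED;
    -- ... process, then UPDATE or DELETE the claimed rows ...
    COMMIT;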
I can see how that'd be extremely useful with LIMIT, especially with XA. Take a stride, complete it, or put it back for someone else.
Something I've still not mastered is how to prevent lock escalation into table-locks, which could torpedo all of this.
Only learned about SKIP LOCKED because ChatGPT suggested it to solve some concurrency problem I had. Great tool to learn such things.
Great tool that wrote the blog post in the OP also, so it's quite versatile.
> CREATE INDEX idx_dlq_status ON dlq_events (status);
> CREATE INDEX idx_dlq_status_retry_after ON dlq_events (status, retry_after);
You don't need two indices when one is a prefix of the other. Just the one `idx_dlq_status_retry_after` will do the job.
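And if most rows eventually reach a terminal state, one hedged variant is to make that single index partial over the hot statuses (the status values are assumed from the discussion below):

    -- One composite index serves both lookups; making it partial keeps
    -- it small once most rows are 'done' or 'failed'.
    CREATE INDEX idx_dlq_status_retry_after
        ON dlq_events (status, retry_after)
        WHERE status IN ('pending', 'processing');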
Great application of first principles. I think it's totally reasonable, too, even at most production loads. (Example: my last workplace had a service that constantly roared at 30k events per second, and our DLQs would at most have on the order of hundreds of messages in them.) We would get paged if a message's age in the queue exceeded an hour.
The idea is that if your DLQ has consistently high volume, there is something wrong with your upstream data, or data handling logic, not the architecture.
What did you use for the DLQ monitoring? And how did you fix the issues?
We strictly used AWS for everything and always preferred AWS-managed, so we always used SQS (and their built-in DLQ functionality). They made it easy to configure throttling, alerting, buffering, concurrency, retries etc, and you could easily use the UI to inspect the messages in a pinch.
As far as fixing actual critical issues - usually the message inside the DLQ had a trace that was revealing enough, although not always so trivial.
The philosophy was either:
1. fix the issue, or
2. swallow the issue (more rare),
but make sure this message never comes back to the DLQ again.
Segment uses MySQL as a queue, not even just as a DLQ. It works at their scale. So there are many (not all) systems that can tolerate this as a queue.
I have a simple flow: tasks on the order of thousands an hour. I just use PostgreSQL. High visibility, easy requeue, durable store. With an appropriate index, it's perfectly fine. An LLM will write the SKIP LOCKED code right the first time. Easy local dev. I always reach for Postgres as the event bus in low-volume systems.
Why use ShedLock and SELECT … FOR UPDATE SKIP LOCKED together? ShedLock stops things running in parallel (sort of), but the other makes parallel processing possible.
I maintain a small Postgres-native job queue for Python called PGQueuer: https://github.com/janbjorge/pgqueuer
It uses the same core primitives people are discussing here (FOR UPDATE SKIP LOCKED for claiming work; LISTEN/NOTIFY to wake workers), plus priorities, scheduled jobs, retries, heartbeats/visibility timeouts, and SQL-friendly observability. If you’re already on Postgres and want a pragmatic “just use Postgres” queue, it might be a useful reference / drop-in.
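To make the primitive concrete - this is not PGQueuer's internals, just a minimal illustration of LISTEN/NOTIFY with a made-up channel name:

    -- Worker session: subscribe to a wake-up channel, then block on the
    -- connection until a notification arrives (drivers expose this as a
    -- wait on the socket).
    LISTEN queue_wakeup;

    -- Producer side, after enqueueing a job (often from a trigger):
    SELECT pg_notify('queue_wakeup', 'new job');

    -- On wake-up, workers still claim via FOR UPDATE SKIP LOCKED;
    -- NOTIFY is a latency optimization, not a delivery guarantee.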
https://github.com/pgmq/pgmq
Indeed, pgmq is exactly the Postgres queueing system that you would build from scratch (FOR UPDATE SKIP LOCKED and all that), except it's already built. Cloud providers should install this extension by default - it's in a really sweet spot for when you don't want or need a separate queue.
re: SKIP LOCKED, introduced in Postgres 9.5 - here's an archived copy [†] of the excellent 2016 2ndQuadrant post discussing it:
https://news.ycombinator.com/item?id=14676859
[†] it seems that all the old 2ndquadrant.com blog post links have been broken since the acquisition by EnterpriseDB
We just published a detailed walkthrough of this exact pattern with concrete examples and failure modes:
PostgreSQL FOR UPDATE SKIP LOCKED: The One-Liner Job Queue https://www.dbpro.app/blog/postgresql-skip-locked
It covers the race condition, the atomic claim behaviour, worker crashes, and how priorities and retries are usually layered on top. Very much the same approach described in the old 2ndQuadrant post, but with a modern end-to-end example.
Love your product. Will you ever provide support for DuckDB/MotherDuck? I wish there were a generic way to add any database type.
Only slightly related, but I have been using Oban as a Postgres-native message queue in the Elixir ecosystem and loving it. For my use case, it's so much simpler than spinning up another piece of infrastructure like Kafka or RabbitMQ.
Hmm that raises a question for me.
I haven't done a project that uses a database (be it SQL or NoSQL) where the number of deletes is comparable to the number of inserts (and far larger than, say, tens per day, of course).
How does your average DB server handle that, performance-wise? Intuitively I'd think it's optimized more for inserts than for deletes, but of course I may be wrong.
Why use a string as the status, instead of a boolean? That just wastes space for no discernible benefit, especially since the status is indexed. Also, consider turning event_type into an integer if possible, for similar reasons.
Furthermore, why have two indexes with the same leading field (status)?
Boolean is rarely enough for real production workloads. You need a 'processing' state to handle visibility timeouts and prevent double-execution, especially if tasks take more than a few milliseconds. I also find it crucial to distinguish between 'retrying' for transient errors and 'failed' for dead letters. Saving a few bytes on the index isn't worth losing that observability.
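As a sketch of what the 'processing' state buys you, here is a visibility-timeout sweep that reclaims work from crashed workers (the claimed_at column and the 5-minute window are assumptions):

    -- Run periodically: anything stuck in 'processing' past the
    -- visibility timeout is handed back to the pool.
    UPDATE dlq_events
    SET status = 'pending'
    WHERE status = 'processing'
      AND claimed_at < now() - interval '5 minutes';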
> Boolean is rarely enough for real production workloads. You need a 'processing' ... 'retrying'... 'failed' ...
If you have more than 2 states, then just use an integer instead of a boolean.
> Saving a few bytes on the index isn't worth losing that observability.
Not sure why having a few well-known string values is more "observable" than having a few well-known integer values.
Also, it might be worth it for the better write performance. When PostgreSQL updates a row, it actually creates a new physical row version (for MVCC), so the less it has to copy, the better.
Postgres does index de-duplication. So it's likely that even if you change the strings to enums, the index won't be that much smaller.
> Furthermore, why have two indexes with the same leading field (status)?
That indeed is a valid question.
I think that using Postgres as the message/event broker is valid, and having a DLQ on that Postgres system is also valid, and usable.
Having SEPARATE DLQ and Event/Message broker systems is not (IMO) valid - because a new point of failure is being introduced into the architecture.
We did this at Chargify, but with MySQL. If Redis was unavailable, it would dump the job as a JSON blob into a MySQL table. A cron job would periodically clean it out by re-enqueuing jobs, and it worked well.
lol a FOR UPDATE SKIP LOCKED post hits the HN homepage every few months it feels like
and another CTO will use this meme as a reason to "just use Postgres" for far longer than they should lmao
I’ll take “just use Postgres” over “prematurely add three new systems” any day. Complexity has a cost too.
Using Postgres too long is probably less harmful than adding unnecessary complexity too early
Would be interesting to see the numbers this system processes. My bet is that they are not that high.
This is logging.
Care to elaborate? I don't understand how this is logging; it's quite the opposite of logging, since once the retry works the DLQ entry gets wiped out. I'd assume you would want logging to be persistent, with at least a little bit of retention?
Another day, another “Using PostgreSQL for…” thing it wasn’t designed for. This isn’t a good idea. What happens when the queue goes down and all messages are dead lettered? What happens when you end up with competing messages? This is not the way.
The other system you're using that isn't Postgres can also go down.
Many developers overcomplicate systems. In the pursuit of 100% uptime, if you're not extremely careful, you remove more 9s with complexity than you add with redundancy. And although hyperscalers pride themselves on their uptime (Amazon even achieved three nines last year!), in reality most customers of most businesses are fine if your system is down for ten minutes a month. It's not ideal and you should probably fix that, but it's not catastrophic either.
What I’ve found is that, particularly with internal customers, they’re fine with an hour a month, possibly several, as long as not all of your eggs are in one basket.
The centralization push creates a situation where, if I have a task that needs three tools to accomplish and one of them goes down, they're all down. So all I can do is go for coffee or an early lunch, because I can't sub another task into this time slot. They're all blocked by The System being down, instead of a system being down.
If CI is borked I can work on docs and catch up on emails. If the network is down or NAS is down and everything is on that NAS, then things are dire.
>The other system you're using that isn't Postgres can also go down.
Only if DC gets nuked.
Many developers overcomplicate systems and throw a database at the problem.
There are a ton of job/queue systems out there that are based on SQL DBs. GoodJob and Supabase Queues are two examples.
It's not usable for high-scale processing, but most applications just need a simple queue with low depth and low complexity. If you're already managing PSQL and don't want to add more management to your stack (and managed services aren't an option), this pattern works just fine. Go back 10-15 years and it was more common, especially in Ruby shops, as teams willing to adopt Kafka/Cassandra/etc. were rarer.
And there are a ton that aren’t.
I think the PG designers would be surprised by the claim that it wasn't designed for this. Database designers try very hard to support the widest possible range of uses.
If all queue actions are failing instantly, you probably want a separate throttle to not remove them from the Kafka queue, since you'd rather keep them there and resume processing them normally instead of from the DLQ when queue processing is working again. In fact, the rate limit implicitly enforced by adding failure records to the DLQ helps with this.
How so? There are queues that use SQL (or no-SQL) databases as the persistence layer. Your question is more specific to the implementation, not the database as persistence layer itself. And there are ways to address it.
Criticism without a better solution is only so valuable.
How would you do this instead, and why?
Watching a carpenter try to weld is equally only so valuable. I think the explanation is clear.
You wouldn't ack the message if you're not up to process it.
I prefer using MS Exchange mailboxes for my message queue.
Postgres is essentially a B-tree with a remote interface. Would you use a B-tree to store a dead letter queue? What is the big O of insert and delete? What happens when it grows?
Postgres has a query interface, replication, backup and many other great utilities. And it’s well supported, so it will work for low-demand applications.
Regardless, you're using the wrong data structure with the wrong performance profile, and at the margins you will spend a lot more money and time than necessary running it. And service will suffer.
What would you use?
For parity of functionality and better performance: a Redis list.