Comment by jstrong

21 days ago

can't believe postgres still uses a process-per-connection model that leads to endless problems like this one.

You can't process significantly more queries than you have CPU cores at the same time anyway.

  • Much of the time in a transaction can reasonably be non-db-CPU time, whether it's IO wait or client CPU processing between queries. Note I'm not talking about transactions that run >10 seconds, just ones where the queries themselves are technically quite cheap. At 10% db CPU usage, you get a 1-second transaction from just 100ms of CPU.

    • That many long-running transactions seems like a pretty unusual workload to me, and potentially one that runs into isolation issues. I can see running a few of these, but not a lot, especially at the same time?

      1 reply →

    • In a properly optimized database, the absolute majority of queries will hit indices and most data will be in the memory cache, so the majority of transactions will be CPU- or RAM-bound. So increasing the number of concurrent transactions will reduce throughput. There will be a few transactions waiting for I/O, but if the majority of transactions are waiting for I/O, it's either a horrifically inefficient database or very non-standard usage.

      5 replies →

    • A process that is blocked on IO, whether network or disk, will get taken off the CPU and another process put on it. It doesn’t just waste the CPU until the quantum is gone.

  • redis is single-threaded but handles lots of connections (e.g. >500) with much better performance than postgres. there's zero chance someone building postgres in 2025 would do one process per connection; I don't think there's any argument that it's a good design for performance. it's just a long-ago design choice that would be difficult to change now.

  • I disagree. If that was the case, pgBouncer wouldn't need to exist.

    The problem of resource usage for many connections is real.

    • It's about queueing work, not running all these queries at the same time. You can run pgbouncer, or you can have a pool in your backend. Having more connections won't make things go faster, so that really seems like a low-priority thing for postgres to me. Even if you integrated pooling into postgres, the overhead of auth would still take time for small queries anyway.

      1 reply →
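
The "queueing work" idea from the thread can be sketched in a few lines. This is a minimal illustration, not pgBouncer's or any driver's actual implementation: a fixed set of connections is shared among many callers, so the server never sees more than `size` concurrent sessions, and excess queries wait in line instead of spawning new backend processes. The `Connection` class here is a placeholder for a real driver connection.

```python
import queue

class Connection:
    """Placeholder for a real database connection."""
    def __init__(self, conn_id):
        self.conn_id = conn_id

class ConnectionPool:
    """Toy client-side connection pool: callers queue for a fixed set
    of pre-opened connections instead of opening their own."""
    def __init__(self, size):
        # Pre-open a fixed number of connections up front.
        self._pool = queue.Queue()
        for i in range(size):
            self._pool.put(Connection(i))

    def acquire(self, timeout=None):
        # Blocks until a connection is free -- excess work waits here
        # rather than adding load on the database server.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        # Return the connection for the next waiting caller.
        self._pool.put(conn)

pool = ConnectionPool(size=4)
conn = pool.acquire()
# ... run queries on conn ...
pool.release(conn)
```

Real pools (pgbouncer, or application-side pools in most drivers) add health checks, transaction-scoped checkout, and timeouts on top of this, but the core mechanism is the same bounded queue.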