Comment by paulkre
19 days ago
Can’t believe they needed this investigation to realize they need a connection pooler. It’s a fundamental component of every large-scale Postgres deployment, especially for serverless environments.
Pooling connections somewhere has been fundamental for several decades now.
Fun quick anecdote: a friend of mine worked at an EA subsidiary when Sim City (2013) was released, to great disaster as the online stuff failed under load. He got shifted over to the game a day after release to firefight their server stuff, and was responsible for the most dramatic initial improvement when he discovered the servers weren't using connection pooling and were instead opening a new connection on almost every single query, using up all the connections on the back-end DB. EA's approach had been "you're programmers, you could build the back end", not accepting the game devs accurately telling them it was a distinct skill set.
No? It sounds like they rejected the need for a connection pooler and took an alternative approach. I imagine they were aware of connection poolers and just didn't add one until they had to.
can't believe postgres still uses a process-per-connection model that leads to endless problems like this one.
You can't process significantly more queries than you've got CPU cores at the same time anyway.
Much of the time in a transaction can reasonably be non-db-CPU time, be it IO wait or client CPU processing between queries. Note I'm not talking about transactions that run >10 seconds, just ones where the queries themselves are technically quite cheap. At 10% db-CPU usage, you get a 1-second transaction from just 100 ms of CPU.
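A back-of-the-envelope sketch of that arithmetic (the core count is a hypothetical number, not from the thread):

```python
# A transaction holds its connection for 1 s of wall-clock time,
# but only does 100 ms of actual database CPU work in that window.
tx_wall_ms = 1000
db_cpu_ms = 100

duty_cycle = db_cpu_ms / tx_wall_ms  # 0.1 -> the "10% db-CPU-usage" above

# To keep a hypothetical 16 cores busy, you'd need ~10x as many open
# connections as cores, which is why connection counts balloon far
# past the core count even when every query is cheap.
cores = 16
connections_to_saturate = cores / duty_cycle
print(connections_to_saturate)  # 160.0
```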
redis is single-threaded but handles lots of connections (i.e. > 500) with much better performance than postgres. There's zero chance someone building postgres in 2025 would do one process per connection; I don't think there's any argument that it's a good design for performance. It's just a long-ago design choice that would be difficult to change now.
I disagree. If that was the case, pgBouncer wouldn't need to exist.
The problem of resource usage for many connections is real.
I was surprised too to need it in front of RDS (but not on vanilla, as you pointed out).
In the serverless world, for sure, but in old-school architectures it's common to use persistent connections to a database, which makes a connection pooler less essential. Also, the last time I checked (many years ago admittedly), connection poolers didn't play well with server-side prepared statements and transactions.
pgbouncer added support for prepared statements a couple years back.
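For anyone who hit that limitation: in transaction pooling mode, pgbouncer (1.21+) can now track protocol-level prepared statements across server connections. A minimal sketch of the relevant settings (values are illustrative, not recommendations):

```ini
; pgbouncer.ini — sketch, assuming pgbouncer >= 1.21
[pgbouncer]
pool_mode = transaction        ; prepared statements used to break here
max_prepared_statements = 200  ; > 0 enables prepared-statement tracking
```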