Comment by saisrirampur

10 hours ago

I’m a huge Postgres fan. That said, I don’t agree with the blanket advice of “just use Postgres.” That stance often comes from folks who haven’t had enough exposure to (newer) purpose-built technologies and the tremendous value they can create.

The argument, as in this blog post, is that a single Postgres stack is simpler and reduces complexity. What’s often overlooked is the CAPEX and OPEX required to make Postgres work well for workloads it wasn’t designed for, even at reasonable scale. At Citus Data, we saw many customers with solid-sized teams of Postgres experts whose primary job was constantly tuning, operating, and essentially babysitting the system to keep it performing at scale.

Side note, we’re seeing purpose-built technologies show up much earlier in a company’s lifecycle, likely accelerated by AI-driven use cases. At ClickHouse, many customers using Postgres replication are seed-stage companies that have grown extremely quickly. We pulled together some data on these trends here: https://clickhouse.com/blog/postgres-cdc-year-in-review-2025...

A better approach would be to embrace the integration of purpose-built technologies with Postgres, making it easier for users to get the best of both worlds, rather than making overgeneralized claims like “Postgres for everything” or “Just use Postgres.”

I took it to mean “make Postgres your default choice”, not “always use Postgres no matter what”.

  • I personally see a difference between “just use Postgres” and “make Postgres your default choice.” The latter leaves room to evaluate alternatives when the workload calls for it, while the former does not. When that nuance gets lost, it can become misleading for teams that are hitting, or even close to hitting, the limits of Postgres, who may keep tuning Postgres and spend not only time but also significant $$. IMO a better world is one where developers have the mindset of using best-in-class tools where needed. This is where embracing integrations with Postgres will be helpful!

    • I think that the key point being made by this crowd, of which I'm one, is somewhere in the middle. The way I mean it is "Make Postgres your default choice. Also *you* probably aren't doing anything special enough to warrant using something different".

      In other words, there are people and situations where it makes sense to use something else. But most people who believe they're in that category are wrong.

    • The point is really that you can only evaluate which of the alternatives is better once you have a working product with enough data; otherwise it's basically following trends and hoping your barely informed decision won't be wrong.

  • This is my philosophy. When an engineer comes to me and says they want to use NotPostgres, they have to justify, with data and benchmarks, why Postgres is not good enough. And that’s how it should be.

> I don’t agree with the blanket advice of “just use Postgres.”

I take it as meaning: use Postgres until there's a reason not to. I.e. build for the scale / growth rate you have, not for "how will this handle the 100 million users I dream of?" A simpler tech stack will be simpler to iterate on.

  • Postgres on modern hardware can likely service 100 million users unless you are doing something data intensive with them.

    You can get a few hundred TB of flash in one box these days. At 100 million users, you would need to average over 1 MB of database data per user just to exceed 100 TB (rough arithmetic in the sketch after this list). Even then, you can mostly just shard your DB.

  • Yes. That's a good framing. PostgreSQL is a good default for online LOB-y things. There are all sorts of reasons to use something other than PostgreSQL, but raw performance at scale becomes such a reason later than you think.

    Cloud providers will rent you enormous beasts of machines that, while expensive, will remain cheaper than rewriting and migrating for a long time.
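
A back-of-the-envelope version of the per-user storage math in the first bullet above, with the same assumed numbers spelled out in Python (illustrative only, not a benchmark):

    users = 100_000_000
    bytes_per_user = 1 * 1024 ** 2                  # assume ~1 MB of database data per user
    total_tb = users * bytes_per_user / 1024 ** 4   # convert bytes to TB
    print(f"~{total_tb:.0f} TB")                    # ~95 TB, within a single large flash box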

In my experience the functionality of “purpose-built systems” is usually already in Postgres; you just have to read the manual.

I personally think reading manuals and tuning is a comparably low-risk form of software development.
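
For example, document-style storage is one of those manual chapters: jsonb plus a GIN index covers a lot of what people reach for a separate document store to get. A minimal sketch in Python, assuming a local Postgres, the psycopg2 driver, and a made-up events table:

    import json
    import psycopg2

    # Hypothetical connection string; adjust for your environment.
    conn = psycopg2.connect("dbname=app user=app")
    cur = conn.cursor()

    # Schema-less documents in an ordinary table, indexed for containment queries.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS events (
            id      bigserial PRIMARY KEY,
            payload jsonb NOT NULL
        )
    """)
    cur.execute("CREATE INDEX IF NOT EXISTS events_payload_idx "
                "ON events USING GIN (payload jsonb_path_ops)")

    cur.execute("INSERT INTO events (payload) VALUES (%s::jsonb)",
                (json.dumps({"type": "signup", "plan": "pro"}),))

    # Containment query; the GIN index can serve it instead of a full scan.
    cur.execute("SELECT id, payload FROM events WHERE payload @> %s::jsonb",
                (json.dumps({"type": "signup"}),))
    print(cur.fetchall())

    conn.commit()
    cur.close()
    conn.close()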

Postgres is infinitely extensible, more so than MariaDB. But it's painful enough to write or configure extensions that you might as well use something different instead of reaching for the extension mechanism.

> At Citus Data, we saw many customers with solid-sized teams of Postgres experts whose primary job was constant tuning, operating, and essentially babysitting the system to keep it performing at scale.

Oh no, not a company hiring a team of specialists in a core technology you need! What next, paying them a good wage? C'mon, it's so much better to get a bunch of random, excuse me, "specialized" SaaS tools that will _surely_ not lead to requiring five teams of specialists in random technologies that will eventually be discontinued once Google acquires the company running them.

OK but seriously, yeah, sometimes "specialized" is good, though far less often than people pretend. Having specialists ain't bad, and I'd say it's better than telling a random developer to become a specialist in some cloud tech and pretending you didn't just turn a (hopefully decent) developer into a poor DBA. Not to mention that a small team of Postgres specialists can maintain a truly stupendous amount of Postgres.

  • At my company I saw a team of devs pay for a special purpose "query optimized" database with "exabyte capability" to handle... their totally ordinary HR data.

    I queried said database... it was slow.

    I looked to see what indexes they had set up... there were none.

    That team should have just used Postgres and spent all the time and money they poured into this fancy database tech on finding someone who knew even a little bit about database design to help them.
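
    For illustration, roughly the kind of basic check I mean, as a sketch in Python with psycopg2; the employees table, the department column, and the connection details are all made up:

        import psycopg2

        # Hypothetical HR database and table names, for illustration only.
        conn = psycopg2.connect("dbname=hr user=hr")
        cur = conn.cursor()

        # Step 1: look at the plan for a typical filtered query.
        cur.execute("EXPLAIN SELECT * FROM employees WHERE department = %s", ("Payroll",))
        for (line,) in cur.fetchall():
            print(line)     # a "Seq Scan on employees" here is the smoking gun

        # Step 2: add the missing index, then re-run the EXPLAIN above.
        cur.execute("CREATE INDEX IF NOT EXISTS employees_department_idx "
                    "ON employees (department)")
        conn.commit()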

I hate how developers are often very skeptical, but all that skepticism goes out the window once the tech is sufficiently hyped up.

And TBH, developers are pretty dumb not to realize that the tech-tools monoculture is a way for business folks to make us easily replaceable... If all companies use the same tech, it turns us into interchangeable commodities that can easily be replaced and sourced across different organizations.

Look at the typical React dev. They have zero leverage and can be replaced by vibe-coding kiddies straight out of school or sourced from literally any company on earth. And there are some real negatives to using silver-bullet tools; they're not even the best tools for a lot of cases! The React dev is a commodity, and they let it happen to them. Outsmarted by dumb business folks who dropped out of college. They probably didn't even come up with the idea; the devs did. Be smarter, people. They're going to be harvesting you like Cavendish bananas.

  • Sure, but the world is vast. I would love to be able to test every UI framework and figure out which is the best, but who’s got time for that? You have to rely on heuristics for some things, and popularity is often a decent indicator.

    • Popularity’s flip side is that it can fuel commodification.

      I argue popularity is an insufficient signal. React as tech is fine, but the market of devs it's aimed at may not be the most discerning when it comes to quality.