OpenAI: Scaling PostgreSQL to the Next Level

9 months ago (pixelstech.net)

I was at PGConf.dev last week, and this was one of the most packed talks: a great insight into using Postgres at a conference that is otherwise fairly inward facing, with talks centered on the development of Postgres itself (of all the Postgres conferences each year, pgconf.dev is very much that one).

What you have to remember is that for many teams, when their product takes off, they are not equipped with deep internal knowledge of how to scale a particular part of their stack. This was an awesome story from a small team having to tackle those challenges, learning as they went. So, while there are some "can't you just" and "what's interesting about this?" comments here, given the narrative of the growth rate and the very high profile of the product, it was the perfect user talk for an internal-development-focused conference.

The key insight, and main message, of the talk was that if you are not too write heavy, you can scale Postgres to very high read throughput with read replicas and only a single master! That is exactly the message that needs to be spelled out, as it covers the vast majority of apps.

As an observation: in the Q&A at the end of the talk, the questions, primarily from core Postgres developers, were focused on learning about the use case, not on suggesting that the team was doing anything wrong (not quite the same as this thread could get). A genuinely awesome group of very friendly and welcoming people in the Postgres community.

  • > if you are not too write heavy you can scale Postgres to very high read throughput with read replicas and only a single master

    The number of interviewees (I do the sys design question) who want to jump straight into massively distributed eventually consistent complicated systems for 5 reads/second is too damn high. 1,000,000 users is not a lot.

    I wish we did better at teaching folks that while we (as an industry) were focused on horizontal this and that, computers got fast. And huge. Amazon will rent you a 32TB-RAM server these days. Your database will scale just fine, and ACID is far too valuable to throw away.

    • > The number of interviewees (I do the sys design question) who want to jump straight into massively distributed eventually consistent complicated systems for 5 reads/second is too damn high. 1,000,000 users is not a lot.

      Not only can you get quite far w/ PG and w/o sharding, but if you run out of write bandwidth you might be able to (depending on what your schema looks like) shard your PGs. For example, suppose you have a ride sharing type app: you might have one DB per market with only user and driver data for that market, then have the apps go to the user's home market for authentication, and to the local market for rides (similarly if a driver ends up driving someone to a different market, you can add them there temporarily).

  • > The key insight, and main message, of the talk was that if you are not too write heavy you can scale Postgres to very high read throughput with read replicas and only a single master! That is exactly the message that needs to be spelled out as that covers the vast majority of apps.

    This is exactly the message I wanted to convey in the talk—thank you so much! -Bohan

  • You seem to be implying this thread is unkind to the team. Given the experience of most people here, many are probably intellectually attracted by the architectural problem of scaling a product like ChatGPT, while examining how a company with almost unlimited funding approaches hiring.

    Statements like this one: "The presentation also specifically mentioned that using ORM can easily lead to inefficient queries and should be used cautiously."

    show that they are not experienced enough to run this type of infrastructure at scale.

    • How does that statement convey inexperience? The presentation doesn't try to present that as a novel conclusion, it's just a true statement.

> The presentation also specifically mentioned that using ORM can easily lead to inefficient queries and should be used cautiously.

Every ORM is bad. Especially the "any DB" ORMs. Because they trick you into thinking about your data patterns in terms of writing application code, instead of writing code for the database. And most of the time their features and APIs are abstracted in a way that basically means you can only use the least-common-denominator features of all the database backends that they can support.

I've sworn off ORMs entirely. My application is a Postgres application first and foremost. I use PG-specific features extensively. Why would I sacrifice all the power that Postgres offers me just for some conveniences in Python, or Ruby, or whatever?

Nah. Just write good SQL for your database and the whole system will be very happy.
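For instance (hypothetical schema), constructs like a lateral join or an upsert are Postgres-flavored SQL that least-common-denominator ORM abstractions handle poorly or not at all:

```sql
-- Three most recent orders per customer, via LATERAL
SELECT c.id, recent.*
FROM customers c
CROSS JOIN LATERAL (
    SELECT o.id AS order_id, o.created_at
    FROM orders o
    WHERE o.customer_id = c.id
    ORDER BY o.created_at DESC
    LIMIT 3
) AS recent;

-- Upsert with ON CONFLICT, a Postgres-specific clause
INSERT INTO page_views (page, views)
VALUES ('/home', 1)
ON CONFLICT (page) DO UPDATE
SET views = page_views.views + 1;
```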

  • At one time we had to switch our service from DB2 to Postgres, and we had to do it with minimal downtime! The ORM helped us since it abstracted the DB (to the ORM, the SQL dialect was a plugin), and the application was largely untouched. Also, not all developers can roll their own sqls, and SQL embedded in code makes it harder to refactor or reason about; eventually it gets abstracted into a CRUD library of sorts anyway.

    • > Also, not all developers can roll their own sqls

      If you can't write SQL, don't use an RDBMS until you learn it. This sounds like gatekeeping because it is: I don't understand why so many people are willing to accept that they need to know their main language to use it, but not that they need to know SQL to use it.

  • I've been using the Django ORM for years and it feels like an amazing piece of software.

    But recently I started using sqlc, which compiles my queries into Go (a simplification). I think this is actually the sweet spot between an ORM and rawdogging SQL.

    • Django ORM is great if you want it to manage your entire data model and are OK with living entirely within its little world. But it can't support even basic stuff like composite primary keys, complex filters in a join clause, or any kind of lateral join at all. It's also among the biggest offenders for hidden (and explosive) N+1s all over your codebase and templates unless you take very special care. And the SQL it generates is usually very naive and often just not performant.

Self-hosting postgres is appealing from a flexibility standpoint (don't be locked out of superuser or advanced features), but it sounds a little bit nerve wracking.

I'm now hoping all the cloud providers read this article and start exposing a feature to try disabling an index in the query planner before you drop it for real; that should really become standard procedure, just in case.

But if you're a large scale company to the point of wanting to own and customize your stack, it can definitely make sense to self-host.

  • Speaker here — Bohan from OpenAI.

    Just to clarify, we're using Azure Database for PostgreSQL, not a self-hosted setup. I mentioned "Azure Postgres" multiple times during the talk, but I should have been more specific that we're referring to Microsoft’s managed PostgreSQL service. Apologies for the confusion.

    • Sorry, I see this has caused some confusion; I should have clarified in my comment. I was responding to the section by Lao Feng at the end, where the conclusion tries to make the case for Pigsty and self-hosted PG.

  • There are already several ways to force Postgres to use or not use a particular index. We use cloud-hosted, managed Postgres instances on one of the major providers, and I've used a couple of these strategies to massage the query planner.

    1. Fiddle with the query planner settings (this can be done at a per-query level as well, so it's not global). E.g. enable_indexscan=off, enable_indexonlyscan=off

    2. Add a trivial calculation to the filter clause of the query. E.g. select * from table where indexed_col + 0 = 12345 shouldn't use the index, as the planner won't do the arithmetic.

    3. Use the pg_hint_plan extension, which allows you to add comments to your code to urge the planner to use certain data access strategies, including specifying which indices to use. See: https://pg-hint-plan.readthedocs.io/en/latest/hint_table.htm...
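    Hedged sketches of all three strategies (table, column, and index names here are hypothetical):

    ```sql
    -- 1. Per-session planner toggles (reset afterwards; not global)
    SET enable_indexscan = off;
    SET enable_indexonlyscan = off;
    EXPLAIN SELECT * FROM orders WHERE customer_id = 12345;
    RESET enable_indexscan;
    RESET enable_indexonlyscan;

    -- 2. A no-op expression keeps the planner from matching the
    --    index on customer_id, forcing a plan without it
    EXPLAIN SELECT * FROM orders WHERE customer_id + 0 = 12345;

    -- 3. pg_hint_plan reads hints from a special comment before the query
    /*+ IndexScan(orders orders_pkey) */
    EXPLAIN SELECT * FROM orders WHERE id = 12345;
    ```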

    • That second trick is neat, I'll need to remember that. I also really wish I had known about it back in 2015, when we had to massage an inconveniently big (at the time) PG database and started to carve it out into smaller, purpose-specific instances.

      Being able to verify that an index was either useless or inefficient without jumping through hoops would have saved quite a lot of time.

  • (Disclosure: I'm from Azure's Postgres team)

    Clarification: OpenAI does not self-host Postgres. They use Azure's managed PostgreSQL offering (aka Azure Database for PostgreSQL Flexible server).

  • > I'm now hoping all the cloud providers read this article and start exposing the feature to try disabling an index in the query planner before you drop it for real

    I'm surprised there isn't non-superuser DDL to handle this. For example in MySQL you can ALTER an index to make it INVISIBLE (or equivalently in MariaDB, IGNORED) and the planner won't use it.
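    A hedged sketch of that syntax (table and index names hypothetical):

    ```sql
    -- MySQL 8.0+: the planner ignores the index, but it is still maintained
    ALTER TABLE orders ALTER INDEX idx_customer INVISIBLE;

    -- MariaDB 10.6+ equivalent
    ALTER TABLE orders ALTER INDEX idx_customer IGNORED;
    ```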

  • > but it sounds a little bit nerve wracking.

    > But if you're a large scale company ... it can definitely make sense to self-host.

    I'm not a large company like OpenAI and I've been running various PostgreSQL setups for years—ranging from single-node instances without replication to multi-node, fault-tolerant, highly available configurations with automatic failover and backups, serving 4-5 digits of updates, selects, and inserts per second. I'm not sure what you're referring to, but in my experience, once it's set up, it's almost maintenance-free.

    • No large-scale heterogeneous database cluster is maintenance-free: plans change because data changes, query patterns change, resource utilization of certain features grows or shrinks, new application features are launched, new indexes need to be added, things are constantly migrated and might need special-casing because the default migration strategy doesn't perfectly handle giant bulk changes, and so on.


  • Self-hosting Postgres is trivial - what is the scary part? That's how we used it for decades until things like RDS came around.

    • I would never say that self-hosting anything is trivial. Linux administration, tuning, and troubleshooting can be learned, obviously, and the same is true of RDBMS. Neither is a trivial skillset if you want to actually be able to run stuff at scale, though. There's a massive difference between 25 QPS and 25,000 QPS (TFA states ~40 replicas, and an aggregate of 1,000,000 QPS).

      That shouldn't deter anyone from trying, though. You can't learn if you don't try.

    • Back in the 2000s, Database Administrator (DBA) was a pretty popular job title, and they usually got paid way more than regular software developers. It probably wouldn’t have been like that if managing databases was "trivial".

  • this sort of thing https://www.oracle.com/uk/autonomous-database/what-is-autono... ?

    "An autonomous database is a cloud database that uses machine learning to automate database tuning, security, backups, updates, and other routine management tasks traditionally performed by DBAs. Unlike a conventional database, an autonomous database performs all these tasks and more without human intervention."

  • > but it sounds a little bit nerve wracking.

    As long as you're doing backups (you are doing backups, right?), and validate that those backups work (you are validating that those backups work, right?), what's making you nervous about it?

    • Doing backups and validating backups is very error-prone and time consuming. To me this reads as: "If you do all the hard complex work yourself, what's making you nervous about it?"

      It's far easier to do backups and database hosting at scale. Database failures are rare, so it's this one-off situation that you have to be prepared for. That requires clearly defined processes, clearly defined roles and responsibilities, and most importantly: feedback from unfortunate incidents that you can learn from. All that is very hard to accomplish when you do self-hosting.


    • Yeah, that's what is a bit odd to me. I feel like AWS and everything like that is much more of a black box compared to something like Postgres, which is so fully tested, proven to be reliable, etc.

    • I think managing stateless infrastructure is much easier: if anything goes haywire, you can expect a readiness probe to fail, k8s quietly takes the instance down, and life continues with no downtime.

      It is also perfectly possible to roll your own highly available Postgres setup, but that requires a whole other set of precise configuration, attention to detail, caring about the hardware, occasionally digging into kernel bugs, and so forth, which cloud providers happily handle behind the scenes. I'm very comfortable with low-level details, but I have never built my own cloud.

      I do test my backups, but having to restore anything from backups means something has gone catastrophically wrong, I have downtime, and I have probably lost data. Everything to prevent that scenario is what's making me sweat a little bit.


    • In Spain, when you want to travel by high-speed train, you need to go through a security check, just like at an airport. Do the security checks make sense? No. But nobody wants to be the politician who removes the security checks and then has something bad happen. So the security checks stay.

    • Backups with periodic restore validation (which is not trivial) are a must, but don’t make database maintenance any less nerve wracking.

      Sure, you won’t lose data, but the downtime …

> Feature Requests

> Concerning schema changes: they desire PostgreSQL to record a history of schema change events, such as adding or removing columns and other DDL operations.

You can do this right now today by using `EVENT TRIGGER`s. You can check out things like Aquameta[0] (if I remember correctly) to see how it's done.

[0] https://github.com/aquametalabs/aquameta
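A minimal sketch of the event-trigger approach (table and function names are my own, not taken from Aquameta; creating event triggers generally requires superuser):

```sql
-- Collect a history of DDL commands as they complete
CREATE TABLE ddl_history (
    executed_at  timestamptz NOT NULL DEFAULT now(),
    username     text        NOT NULL DEFAULT current_user,
    command_tag  text        NOT NULL,  -- e.g. 'ALTER TABLE'
    object_id    text,                  -- e.g. 'public.users'
    query        text                   -- the statement that fired the trigger
);

CREATE OR REPLACE FUNCTION log_ddl() RETURNS event_trigger AS $$
DECLARE
    r record;
BEGIN
    -- One row per object touched by the DDL command
    FOR r IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
        INSERT INTO ddl_history (command_tag, object_id, query)
        VALUES (r.command_tag, r.object_identity, current_query());
    END LOOP;
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER log_ddl_trigger
    ON ddl_command_end EXECUTE FUNCTION log_ddl();
```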

  • We are building similar functionality for our own postgres setup right now.

    Of course, Postgres is very powerful and you can implement something like this in many different ways. But at the same time, maintaining DDL history and tracking major changes to the database is a very common requirement, and unfortunately many people don't realize that until they've learned the lesson the hard way.

    Relatedly, there are changes that are not DDL per se, but big/important DB operations that you also want to keep a record of, so that you can look back and understand why something changed. I'm not sure if this is the right term, but basically when we update our pricing model or SKUs, or set custom pricing for someone, we want those updates to be "auditable".

    Actually, I think this is a relatively common use case too: a fully relational model often leaves you with a large number of "static" tables that only change when you're making updates to your application. They support the smaller number of big, important, dynamic tables that your application is regularly modifying. If you had the foresight to recognize that you'd likely need to change those static tables in the future, you probably organized them so you could do so by adding new rows. It is not quite a DDL change but it is a big, risky change to your application logic that only happens rarely, and you basically just want to keep a list of all those big changes in case things get messed up or you find yourself unable to make sense of older data.

    • There are lots of extensions that automatically create audit tables and record history in them. The trick is to a) create audit tables for all _existing_ tables, and b) create an event trigger so audit tables are created automatically for any future new tables. Here's an example of how to do it: https://github.com/twosigma/postgresql-contrib/blob/master/a...

      Another thing you might do is to go with a schema that follows the event shipping pattern. In this pattern you have the "truth" held in insert-only tables (deletes and updates not allowed), then turn those "event" tables into ones you can query naturally using VIEWs, MATERIALIZED VIEWs, or live tables that you update with triggers on the event tables. Then your event tables _are your history/audit_ tables.
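      A minimal sketch of that pattern (schema hypothetical; note that REVOKE does not bind the table owner, so in practice immutability is enforced through roles):

      ```sql
      -- Insert-only "truth" table; updates and deletes are revoked
      CREATE TABLE account_events (
          event_id      bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
          account_id    uuid        NOT NULL,
          occurred_at   timestamptz NOT NULL DEFAULT now(),
          balance_delta numeric     NOT NULL
      );
      REVOKE UPDATE, DELETE, TRUNCATE ON account_events FROM PUBLIC;

      -- Natural-to-query current state derived from the event history
      CREATE VIEW account_balances AS
      SELECT account_id, sum(balance_delta) AS balance
      FROM account_events
      GROUP BY account_id;
      ```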


    •     CREATE TABLE public.audit (
              id uuid NOT NULL,
              created_time timestamp without time zone DEFAULT now() NOT NULL,
              schema_name text NOT NULL,
              table_name text NOT NULL,
              record_id uuid NOT NULL,
              user_name text,
              action text NOT NULL,
              old_data jsonb,
              new_data jsonb
          );
      
          CREATE OR REPLACE FUNCTION audit_if_modified_func() RETURNS TRIGGER AS $body$
          DECLARE
              v_old_data JSONB;
              v_new_data JSONB;
          BEGIN
              IF (TG_OP = 'UPDATE') THEN
                  v_old_data := to_jsonb(OLD.*);
                  v_new_data := to_jsonb(NEW.*);
                  IF (TG_TABLE_NAME::TEXT = 'users') THEN
                      v_old_data = v_old_data - 'last_login_time';
                      v_new_data = v_new_data - 'last_login_time';
                  END IF;
                  IF (v_old_data <> v_new_data) THEN
                      INSERT INTO audit (id,record_id,schema_name,table_name,user_name,action,old_data,new_data)
                          VALUES (uuid_generate_v4(), NEW.id, TG_TABLE_SCHEMA::TEXT, TG_TABLE_NAME::TEXT,
                                  session_user::TEXT, substring(TG_OP,1,1), v_old_data, v_new_data);
                  END IF;
                  RETURN NEW;
              ELSIF (TG_OP = 'DELETE') THEN
                  v_old_data := to_jsonb(OLD.*);
                  INSERT INTO audit (id, record_id,schema_name,table_name,user_name,action,old_data)
                  VALUES (uuid_generate_v4(), OLD.id, TG_TABLE_SCHEMA::TEXT,TG_TABLE_NAME::TEXT,session_user::TEXT,substring(TG_OP,1,1),v_old_data);
                  RETURN OLD;
              ELSIF (TG_OP = 'INSERT') THEN
                  v_new_data := to_jsonb(NEW.*);
                  INSERT INTO audit (id, record_id,schema_name,table_name,user_name,action,new_data)
                  VALUES (uuid_generate_v4(), NEW.id, TG_TABLE_SCHEMA::TEXT,TG_TABLE_NAME::TEXT,session_user::TEXT,substring(TG_OP,1,1),v_new_data);
                  RETURN NEW;
              ELSE
                  RAISE WARNING '[AUDIT_IF_MODIFIED_FUNC] - Other action occurred: % at %',TG_OP,now();
                  RETURN NULL;
              END IF;
          
          EXCEPTION
              WHEN data_exception THEN
                  RAISE WARNING '[AUDIT_IF_MODIFIED_FUNC] - UDF ERROR [DATA EXCEPTION] - SQLSTATE: % SQLERRM: %',SQLSTATE,SQLERRM;
                  RETURN NULL;
              WHEN unique_violation THEN
                  RAISE WARNING '[AUDIT_IF_MODIFIED_FUNC] - UDF ERROR [UNIQUE] - SQLSTATE: % SQLERRM: %',SQLSTATE,SQLERRM;
                  RETURN NULL;
              WHEN OTHERS THEN
                  RAISE WARNING '[AUDIT_IF_MODIFIED_FUNC] - UDF ERROR [OTHER] - SQLSTATE: % SQLERRM: %',SQLSTATE,SQLERRM;
                  RETURN NULL;
          END;
          $body$
          LANGUAGE plpgsql
          SECURITY DEFINER;
          
          CREATE TRIGGER audit_logger_accounts
              AFTER INSERT OR UPDATE OR DELETE ON accounts
              FOR EACH ROW EXECUTE PROCEDURE audit_if_modified_func();
      
          CREATE TRIGGER audit_logger_users
              AFTER INSERT OR UPDATE OR DELETE ON users
              FOR EACH ROW EXECUTE PROCEDURE audit_if_modified_func();


  • We use event triggers both in pgroll [0] and pgstream [1] to capture DDL changes. It's a feature of pgroll that it captures the schema migration history, regardless of whether you do the changes via pgroll or with another tool. And pgstream uses it to put the schema changes in the logical replication stream.

    One thing to be aware of is that on many Postgres DBaaS platforms, EVENT TRIGGERs are not allowed because they generally require superuser. But RDS and Aurora do support them, we (Xata) support them of course, and I think Supabase is working on adding support for them.

    [0]: https://github.com/xataio/pgroll

    [1]: https://github.com/xataio/pgstream

I don't deny how powerful PostgreSQL is, but it appears that the decision to choose PostgreSQL was the most significant issue all along. It's shocking how often we find teams selecting Postgres to implement a solution it's not suitable for. Yet you see a document with a forced, illogical narrative trying to justify the decision.

With the NewSQL options available today, which provide distributed relational databases with multi-masters out of the box, it seems to me that many teams select Postgres simply because that's all they know, and that's the source of the problem.

Not super interesting: this is fairly basic stuff that you'll encounter at orders of magnitude smaller scale than OpenAI. Creating indexes CONCURRENTLY, avoiding table rewrites, smoothing out traffic, transaction timeouts, read replicas... it's pretty much table stakes, even at 10,000x smaller scale.

Their requests to Postgres devs aren't anything new either, everyone has wished for it for years.

The title is kind of misleading: they're not scaling it to the "next level", they're clearly struggling with this single-master setup and trying to keep it afloat while migrating off ("no new workloads allowed"). The main "next scale" point is that they say they can "scale gracefully under massive read loads" - nothing new, that's the whole point of read replicas and horizontal scaling.

Re: "Lao Feng Q&A":

> PostgreSQL actually does have a feature to disable indexes. You can simply set the indisvalid field to false in the pg_index system catalog [...] It’s not black magic.

No. It's not documented for this use, so it's not a feature. It's fooling around with internals without guarantees of what this will do (it might do what you want today, it might not in the next release). Plus as they point out, managed Postgres providers don't let you fiddle with this stuff (for good reasons, as this is not a feature).
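For reference, the catalog tweak being quoted looks like this (index name hypothetical); it requires superuser and, as noted, carries no guarantees across releases:

```sql
-- NOT a documented feature: flips the validity flag directly in the catalog
UPDATE pg_index SET indisvalid = false
WHERE indexrelid = 'orders_customer_id_idx'::regclass;
```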

> there’s a simpler solution [to avoiding accidental deletion of used indexes]: just confirm via monitoring views that the index is not being used on either primary or replicas

That doesn't quite solve all the same problems. It's quite frequent that an index is in use, but is not _needed_: another index would also work (eg you introduced a new index covering an extra column that's not used in this query). Being able to disable an index would allow checking that the query plan does use the other index, rather than praying and hoping.
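The monitoring check being referred to is along these lines (index name hypothetical), though as argued above, a nonzero scan count cannot tell you whether another index would have served the same queries:

```sql
-- Counters are per-instance: run this on the primary AND each replica,
-- since pg_stat_user_indexes is not aggregated across them
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE indexrelname = 'orders_customer_id_idx';
```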

  • > they're clearly struggling with this single-master setup and trying to keep it afloat while migrating off ("no new workloads allowed").

    TFA states they’re at 1 million QPS, in Azure. 1 million QPS with real workloads is impressive, doubly so from a cloud provider that’s almost certainly using network-based storage.

    EDIT: they have an aggregate of 1 million QPS across ~40 read replicas, so 25K QPS each, modulo writes. I am less impressed.

    > That doesn't quite solve all the same problems. It's quite frequent that an index is in use, but is not _needed_: another index would also work (eg you introduced a new index covering an extra column that's not used in this query). Being able to disable an index would allow checking that the query plan does use the other index, rather than praying and hoping.

    Assuming your table statistics are decently up to date and representative (which you can check), this basically comes down to knowing your RDBMS, and your data. For example, if it's a text-type column, do both indices have the same operator class (or lack thereof)? Does the new index have a massive column in addition to the one you need, or is it reasonably small? Do the query projections and/or selections still form a left-most prefix of the index (especially important if any queries perform ordering)?

    • > EDIT: they have an aggregate of 1 million QPS across ~40 read replicas, so 25K QPS each, modulo writes. I am less impressed.

      Yeah that's my point! That's the load per instance that I see at my current company, we just have fewer replicas.

      > Assuming your table statistics are decently up to date and representative (which you can check), this basically comes down to knowing your RDBMS, and your data

      I'm pretty good at this stuff, and I don't often dabble with complex indexes. And yet I don't have 100% confidence. No-one is perfect: maybe I made a mistake in assessing index equivalence, maybe I forgot to check replicas, maybe there's _something somewhere_ that depends on this index without me being aware of it... It's a destructive operation where the only confidence you can have is _theoretical_, not operational: it's kind of crazy and people have been requesting this feature for years for good reasons. If you get it wrong (and getting it right is not trivial), production is on fire and it's potentially hours of downtime (or days, if it's a massive table!).

      For example, RDS forces you to shut down an instance before deleting it. At that point, if anything was relying on it, alarms go off and you can quickly turn it back on. This should be standard functionality for anything stateful.


  • I'm pretty perplexed as well. They mention that they're not sharding PostgreSQL without saying why in the article, but isn't that the obvious answer to many of their scaling problems?

    I don't really see what it is that they're doing that requires a single master database, it seems that sharding on a per user basis would make things way easier for them.

    • I can sympathise with wanting to stay on a single master architecture as long as possible. Sharding is a major step change in terms of maintenance burden.

    • My guess is that they didn’t design their schema around multi tenancy, and going back to pick apart all the shared tables, rewrite queries etc… will take them more time than they want to spend.

They seem to be using physical replication. I'm currently thinking of switching to logical replication to reduce inter region egress cost.

Do you think that's a good idea? There seems to be many improvements to native logical replication since Postgres 17.

I wonder how much their performance can improve if they put the write-instance on dedicated servers (with local and very fast ssd) and use managed services only for read-replicas.

What I find odd about this is that there's no mention of all the other engines that must be in the mix powering different types of queries: I have no doubt they're using a little of everything, from scalable key-value stores to search, vector search, caches... They must be doing somersaults to avoid over-saturating this already saturated Postgres environment, yet only Postgres is discussed here.

Title should really be "Scaling PostgreSQL to the Next Level at OpenAI", which is the actual title of the talk

Argh. Shard the damn database already.

Why are they not sharding by user/org yet? It is so simple and would fix the primary issue they are running into.

All these workarounds they go through to avoid a straightforward fix.

  • The message of the talk was very much that you can scale to massive throughput without having to shard and having only a single master.

    Of course they considered it, but the tradeoffs didn't match what they wanted to do - plus they found you could scale to this level without sharding.

      The talk seems to be mostly about all the limitations and workarounds they've had to deal with because they chose not to shard. Apparently, they have a policy of adding no new functionality to the database, which presumably means additional separate database services being set up for each new feature. That sounds a lot like accumulating tech debt very rapidly, just because sharding is not on the table, for whatever reason.


  • Speaker here — Bohan from OpenAI.

    Our application has hundreds of endpoints, which makes sharding non-trivial. We've already offloaded shardable workloads—particularly write-heavy ones—from PostgreSQL. What remains is primarily read-only and would require substantial effort to shard. Currently, the workload scales well on Azure Database for PostgreSQL, and we have sufficient headroom to support future growth.

    That said, we're not ruling out sharding in the future—it’s just not a near-term priority.

  • Not sure I would call sharding a DB that gets 1M QPS straightforward. I agree with you that an org would be a natural sharding key, but we know that at this scale nothing is ever really straightforward, especially when it's your first rodeo.

    • > Not sure I would qualify sharding a DB that get 1M qps as straight forward.

      Sharding at the application layer (basically figure out the shard from org/user in your application code prior to interacting with the DB), will scale to any QPS rate. This is what I was referring to.

    • That's true, but that's also why you really should shard long before hitting that point...

      If your company is growing at this insane rate, it should be obvious that eventually you must shard. And the longer you delay this, the more painful it will be to accomplish.

  • Sharding is often not simple. The whole reason you're using a powerful database in the first place is that you want its ability to analyze data and answer complex questions. If you didn't you might as well just use a bunch of NFS mounts: it's sharding and even simpler than a database.

"He finally concluded with some requests to the postgres developer community"

... You're one of the most well-funded companies in the world; you shouldn't be asking open source devs for features, you should be opening PRs

  • That assumes that every problem can be solved with money, and that the money (or the PRs the money funds) will be welcomed by the community. By talking about what OpenAI needs Bohan is hopefully starting a conversation with the community in order to engage with everyone respectfully and to work with them.

    Railroading an open source project with money or dev time in order to force it to go in the direction you want is not the right way. Those things should be available if the community asks, but they shouldn't be the opening offer.

  • Going to PGConf.dev, explaining what you are doing to the core devs, the problems you faced, and what you would love to have as a feature in Postgres is exactly the right way to go about improving Postgres.

    Blindly opening "PRs" (Postgres doesn't work in this way, it's diffs on a mailing list) would not get you anywhere very fast. You need buy in and consensus from the wider development team.

  • Just like with any other OSS project, you have to convince the upstream developers that your PR is worth merging.

    You don't do that by throwing PRs over the wall and then moving on. You do that by being part of the community.

    That said sometimes you just don't have the resources to engage another community at the moment, so you push the PR over the wall anyway, assume it won't ever land and act accordingly.

    The smaller and less impactful the change, the bigger the chance of it landing. I'm always clear about the PRs I push over the wall, though: I probably won't be around to maintain this, feel free not to merge, etc. I also try to thank them for their service to the community and share how their code made my life easier.

  • It does not matter if you open a PR if the project is not open to that solution. You can't just add any random thing to Postgres as an outsider. You first have to convince the people in the project that X is a good thing to add, and only after that does it make sense to actually implement it (and even then it's unlikely that OpenAI has a developer on staff with enough Postgres experience to write it; more likely they would just sponsor a dev from the project).

  • > ... You're one of the most well-funded companies in the world; you shouldn't be asking open source devs for features, you should be opening PRs

    PG is really difficult to contribute to because it's such a fast-moving target. You get your patches into one commitfest, and if they aren't accepted in time you're into the next commitfest, and now you have to rebase across 1,000 commits: lather, rinse, repeat.

    Contributing to PG is nearly a full-time job!

    I bet it's much easier to find an existing committer to PG and pay them as consultants to do the work you need.

    And as siblings point out, you have to figure out what the upstream might be willing to accept, and they might have to tell you that. This requires a conversation. Presenting to them is a way to start that conversation.

  • That's not how that works.

    Money doesn't mean "I build whatever I want and Postgres will evolve into whatever I want by pushing my code". They still need to align and plan. Sure, they'll build things and contribute, but they don't own the project and still need to accommodate its wishes.

OpenAI and these companies hire inexperienced people with zero operational experience, and this is how they run things. It would almost be funny if the end result weren't so unreliable.

Postgres is powerful but just not suited for this role. But if your only tool is a hammer...

Many of these scaling issues would be solved, if they simply used AWS and Aurora Postgres.

https://pages.cs.wisc.edu/~yxy/cs764-f20/papers/aurora-sigmo...

https://youtu.be/pUqVCK7Ggh0

  • Aurora is Postgres-compatible, but it's not equivalent. It was my understanding that the underlying implementation was distinctly different from regular Postgres. So while maybe not as risky as dropping in MySQL, dropping in Aurora is still akin to switching to a brand-new database and certainly not simple.

    Or are you saying they should have started on Aurora from the start?

    • Yes, they should have started with Aurora Postgres. Note that for Aurora Postgres you can just import a Postgres backup and it's the same SQL code. It's Postgres, plus....

  • Oh yeah, separating compute and storage by a large physical distance was a great idea that certainly had no downsides! It's so awesome that they added "Optimized Reads" as an option, which is literally just running the DB on a server with a local NVMe drive - you know, how people used to do things.

    The only feature that Aurora (MySQL) has that is remotely impressive is its ability to restart the DB process without losing the buffer pool. Aurora (Postgres) has no interesting differentiators.

    I've benchmarked both, with prod-like workloads, against some 12 year old Dell R620s I have, which have NVMe drives exposed via Ceph over Infiniband. The ancient servers handily beat Aurora and RDS on everything except when the latter had an instance type with a local NVMe drive, at which point it's just superior clock speed and memory throughput.

    I despise Aurora with a burning passion. AWS successfully hoodwinked companies everywhere with bullshit, and are absolutely raking in cash because of it.

    • > The only feature that Aurora (MySQL) has that is remotely impressive

      Aurora is one of the only options if you need low-lag physical replication in a MySQL-compatible environment. That makes it operationally feasible to execute large/heavy writes or DDL which would normally cause too much replication lag on traditional (async binlog-based) MySQL replicas.

      Granted, there's some important fine print: long transactions will still block InnoDB purge of old row versions, and in Aurora that's cluster-wide. But in any case, personally I'd consider nearly-lag-free replication to be an important differentiator. This can be leveraged in interesting ways, for example CashApp's `spirit` OSC tool (https://github.com/block/spirit) can do online schema changes blazing-fast because it doesn't need to throttle its write rate to avoid replication lag.

      Scale-to-zero is also nice for dev/test environments.

      That said, I do agree with your overall point that Aurora was majorly over-marketed. And Amazon's capture of so much revenue in the MySQL space has been extremely detrimental for the MySQL ecosystem, especially considering Aurora's modifications are proprietary/closed-source.

    • > The only feature that Aurora (MySQL) has that is remotely impressive is its ability to restart the DB process

      I don't really care about Aurora MySQL, only Aurora Postgres, but you forgot about Parallel Query and Clones. For clones you don't pay for the extra storage for the new database, only the delta if you add new data...

      https://aws.amazon.com/blogs/aws/new-parallel-query-for-amaz...

      https://aws.amazon.com/blogs/aws/amazon-aurora-fast-database...
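      As a reference point, an Aurora clone is created through a point-in-time restore with a copy-on-write restore type (the cluster identifiers below are placeholders); storage is shared with the source cluster, and only pages that diverge afterwards are billed:

```shell
# Clone an existing Aurora cluster; only diverging data incurs
# additional storage cost (cluster names are hypothetical).
aws rds restore-db-cluster-to-point-in-time \
    --source-db-cluster-identifier my-prod-cluster \
    --db-cluster-identifier my-prod-clone \
    --restore-type copy-on-write \
    --use-latest-restorable-time
```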

      "...AWS successfully hoodwinked companies everywhere with bullshit, and are absolutely raking in cash because of it."

      Really?...

      "How Twilio modernized its billing platform on Amazon Aurora MySQL" - https://aws.amazon.com/blogs/database/how-twilio-modernized-...

      "No observable Aurora downtime taken in over 5 months of experimentation, and almost 2 months of running shadow production..

      Steady state metrics on over 40 accumulated days of live production data across all Aurora clusters:

      - Over 46 billion transaction records indexed and available, compared to less than one billion stored in the former online Redis system

      - 4.8 TB of data across all tables

      - Over 11,000 active database connections to all clusters

      - Less than 10 milliseconds median end-to-end transaction run latency

      - Less than 60 milliseconds 99th percentile end-to-end transaction run latency..."

      "Increasing Scalability and Reducing Costs Using Amazon Aurora Serverless with BMW" - https://aws.amazon.com/solutions/case-studies/bmw-group-auro...

      "FINRA CAT selects AWS for Consolidated Audit Trail" - https://aws.amazon.com/blogs/publicsector/finra-cat-selects-...

      https://aws.amazon.com/rds/aurora/customers/

  • Surely someone can articulate the flaw... Unless, of course, there isn't one worth mentioning....

    • They're deployed on Azure and have a deep partnership with Microsoft, so they can't "simply" use a different cloud.

      Also, recommending a black-box managed solution isn't an option for some large companies that have their own hardware and datacenters, and that may want open source solutions they can easily deploy, fork, and support themselves to keep costs under control.
