I'm in the middle of a major version upgrade of Postgres from 15 to 18 using the same logical replication technique this article describes. As the article mentions, dropping the indexes on the new database before replication is key; otherwise the initial copy takes forever because every insert also has to update each index.
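For anyone curious what that looks like in practice, a rough sketch (table and index names here are made up; the general pattern is to save each index definition from pg_indexes before dropping it, so it can be recreated afterwards):

```sql
-- On the new (subscriber) database, before starting replication:
-- save the definition of each secondary index so it can be rebuilt later
SELECT indexdef
FROM pg_indexes
WHERE schemaname = 'public' AND indexname = 'orders_customer_id_idx';

DROP INDEX public.orders_customer_id_idx;  -- repeat for each secondary index

-- ... let the initial table copy and replication catch up ...

-- then rebuild the indexes once the copy is done
CREATE INDEX CONCURRENTLY orders_customer_id_idx
    ON public.orders (customer_id);
```

Keeping primary keys in place is still necessary, since logical replication needs a replica identity to apply UPDATEs and DELETEs.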
The same holds even for bulk or batched inserts. MS's process for importing an exported database (when using that method, i.e. with Azure SQL, rather than an on-prem restore to on-prem, where a page-level backup is of course much more efficient) doesn't create indexes until the full data import has completed.
I recently learned that on RDS you can import data from S3. Handy feature for accomplishing a similar goal: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_...
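If anyone wants to try it, the rough shape (per the AWS docs; the bucket, key, and table names here are made up) is to install the aws_s3 extension and call table_import_from_s3:

```sql
-- RDS/Aurora Postgres only; requires an IAM role granting the
-- instance read access to the bucket
CREATE EXTENSION IF NOT EXISTS aws_s3 CASCADE;

SELECT aws_s3.table_import_from_s3(
    'orders',        -- target table (hypothetical)
    '',              -- column list ('' = all columns)
    '(format csv)',  -- options passed through to COPY
    aws_commons.create_s3_uri('my-bucket', 'exports/orders.csv', 'us-east-1')
);
```

Under the hood it behaves like a server-side COPY, so the same index advice applies: import first, index after.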