Comment by jgavris

13 days ago

The Django ORM / migrations are still basically unmatched in happiness factor.

It's crazy to me after all these years that Django-like migrations aren't in every language. On the one hand they seem so straightforward and powerful, but there must be some underlying complexity to having the framework autogenerate migrations.

It was always a surprise when I went to Elixir or Rust and found the migration story more complicated and manual compared to just changing a model, generating a migration, and committing.

In the pre-LLM world, I was writing Ecto files, and it was super repetitive to define large database structures compared to Django.
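
For reference, the workflow being praised is just: edit a model, autogenerate, apply. A minimal sketch (model and field names are made up):

```python
# models.py -- add a field to an existing model.
from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=200)
    summary = models.TextField(blank=True)  # the newly added field

# Then autogenerate and apply the migration:
#   $ python manage.py makemigrations   # writes e.g. 0002_article_summary.py
#   $ python manage.py migrate          # applies it to the database
```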

  • Going from Django to Phoenix I prefer manual migrations. Despite being a bit tedious and repetitive, by doing a "double pass" on the schema I often catch bugs, typos, missing indexes, etc. that I would have missed with Django. You waste a bit of time on the simple schemas, but you save a ton of time when you are defining more complex ones. I lost count of how many bugs were introduced because someone was careless with Django migrations, and it is also surprising that some Django devs don't know how to translate the migrations to the SQL equivalent.

    At least you can opt in to automated migrations in Elixir if you use Ash.

    • Django doesn't force anyone to use the automatic migrations; you can always write them manually if you want to :)
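
      For example, `makemigrations --empty` scaffolds a migration you then fill in by hand (app and dependency names below are illustrative):

      ```python
      # Created via: python manage.py makemigrations myapp --empty
      from django.db import migrations

      class Migration(migrations.Migration):
          dependencies = [("myapp", "0001_initial")]
          operations = [
              # Hand-written operations go here, e.g. migrations.RunSQL(...)
          ]
      ```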

  • There are some subtle edge cases in Django migrations where running all the migrations at once is not the same as running them one by one. This has bitten me on multiple Django projects.

  • There is no way to autogenerate migrations that work in all cases. There are lots of things out there that can generate migrations that work for most simple cases.

    • They don't need to work in every case. For the past ~15 years, 100% of the autogenerated migrations I have made for creating tables, adding columns, or renaming columns have just worked, and I have made thousands of migrations at this point.

      The only things to migrate manually are data migrations from one schema to the other.

  • Well, in Elixir you can have two schemas for the same table, which could represent different views, for example an admin view and a user view. This is not (necessarily) for security, but it reduces the number of columns fetched in the query to only what you need for the purpose.
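
    (For comparison, a hedged sketch of the nearest Django analog I know of: a second, unmanaged model over the same table that maps only the columns one view needs. The table name is an assumption, and Django's `.only()` covers the column-limiting part more idiomatically.)

    ```python
    from django.db import models

    class User(models.Model):
        email = models.EmailField()
        password_hash = models.CharField(max_length=128)
        bio = models.TextField(blank=True)

    class UserAdminView(models.Model):
        email = models.EmailField()

        class Meta:
            managed = False            # no migrations generated for this model
            db_table = "myapp_user"    # assumed table name behind User
    ```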

I found Django very lacking in how to do CD with no downtime.

It requires a particular dance if you ever want to add/delete a field and make sure both new-code and old-code work with both new-schema and old-schema.

The workaround I found was to run tests with new-schema+old-code in CI when I have schema changes, and then `makemigrations` before deploying new-code.

Are there better patterns beyond "oh you can just be careful"?

  • This is not specific to Django, but to any project using a database. Here's a list of quite useful resources I used when we had to address this:

    * https://github.com/tbicr/django-pg-zero-downtime-migrations

    * https://docs.gitlab.com/development/migration_style_guide/

    * https://pankrat.github.io/2015/django-migrations-without-dow...

    * https://www.caktusgroup.com/blog/2021/05/25/django-migration...

    * https://openedx.atlassian.net/wiki/spaces/AC/pages/23003228/...

    Generally it's also advisable to set a statement timeout for migrations, otherwise you can end up with unintended downtime -- ALTER TABLE operations very often require an ACCESS EXCLUSIVE lock, and if you're migrating a table that already has, e.g., a very long SELECT from a background task running on it, all other SELECTs will queue up behind the migration and cause request timeouts (see the sketch below).

    There are some cases where you can work around this limitation by manually composing operations that require less strict locks, but in our case it was much simpler to just make sure all Celery workers were stopped during migrations.
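
    A minimal sketch of the timeout advice above, assuming Postgres and made-up app/table names; `SET LOCAL` scopes the timeouts to the migration's own transaction:

    ```python
    from django.db import migrations

    class Migration(migrations.Migration):
        dependencies = [("myapp", "0041_previous")]  # assumed

        operations = [
            # Fail fast if the ACCESS EXCLUSIVE lock can't be acquired,
            # instead of queueing every other query on the table behind us.
            migrations.RunSQL("SET LOCAL lock_timeout = '5s';",
                              reverse_sql=migrations.RunSQL.noop),
            migrations.RunSQL("SET LOCAL statement_timeout = '30s';",
                              reverse_sql=migrations.RunSQL.noop),
            migrations.RunSQL(
                "ALTER TABLE myapp_order ADD COLUMN note text;",
                reverse_sql="ALTER TABLE myapp_order DROP COLUMN note;",
            ),
        ]
    ```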

  • I simplify it this way: I don't delete fields or tables in migrations once an app is in production; I only clean them up manually after they can no longer be used by any production version. I treat the database schema as if it were "append only": only add new fields. This means you always "roll forward" a database; rollback migrations are not a thing to me. I don't rename physical columns in production. If an old field and a new field that represent the same datum need to run simultaneously, a trigger keeps them in sync (sketch below).
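
    A hedged sketch of that trigger trick, assuming Postgres and invented names (`name` is the old column, `full_name` the new one):

    ```python
    from django.db import migrations

    SYNC_SQL = """
    CREATE OR REPLACE FUNCTION sync_name_cols() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'INSERT' THEN
            -- Fill whichever side the writer left empty.
            NEW.full_name := COALESCE(NEW.full_name, NEW.name);
            NEW.name := COALESCE(NEW.name, NEW.full_name);
        ELSIF NEW.full_name IS DISTINCT FROM OLD.full_name THEN
            NEW.name := NEW.full_name;
        ELSIF NEW.name IS DISTINCT FROM OLD.name THEN
            NEW.full_name := NEW.name;
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER sync_name_cols_trg
    BEFORE INSERT OR UPDATE ON myapp_person
    FOR EACH ROW EXECUTE FUNCTION sync_name_cols();
    """

    class Migration(migrations.Migration):
        dependencies = [("myapp", "0007_add_full_name")]  # assumed
        operations = [
            migrations.RunSQL(
                SYNC_SQL,
                reverse_sql="DROP TRIGGER sync_name_cols_trg ON myapp_person; "
                            "DROP FUNCTION sync_name_cols();",
            )
        ]
    ```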

  • You can do it in three stages:

    1. Make a schema migration that will work both with old and new code

    2. Make a code change

    3. Clean up schema migration

    Example: deleting a field:

    1. Schema migration to make the column optional

    2. Remove the field in the code

    3. Schema migration to remove the column

    Yes, it's more complex than creating one schema migration, but that's the price you pay for zero downtime (see the sketch below). If you can relax that to "1s of downtime at midnight on Sunday", you can keep things simpler. And if you do so many schema migrations that you need such things often ... I would submit you're holding it wrong :)
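
    A minimal sketch of the deletion example, with invented model/field names (stage 2 is simply deleting the field from models.py in between):

    ```python
    from django.db import migrations, models

    # Stage 1 migration: make the column nullable so old code that no longer
    # sends a value can still INSERT, and new code can stop populating it.
    stage_1 = migrations.AlterField(
        model_name="order",
        name="legacy_code",
        field=models.CharField(max_length=32, null=True),
    )

    # Stage 3 migration, shipped in a later deploy once nothing reads the field:
    stage_3 = migrations.RemoveField(model_name="order", name="legacy_code")
    ```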

    • I'm doing all of these, and none of it works out of the box.

      Adding a field needs a `db_default`, otherwise old-code fails to `INSERT`; without one you need to audit all the `create`-like calls.

      Deleting similarly will make old-code fail all `SELECT`s.

      For deletion I need a special 3-step dance with managed=False for one deploy. And for all of these I need to run old tests on the new schema to see if there's some usage a member of our team missed.
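
      (Assuming the `db_default` above refers to Django 5.0's database-level default: it travels with the schema, so INSERTs from old code that doesn't mention the column still succeed. A hypothetical field:)

      ```python
      from django.db import models

      class Order(models.Model):
          # db_default is applied by the database itself, unlike default=,
          # which only new application code would know to send.
          status = models.CharField(max_length=10, db_default="new")
      ```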

  • One option is to do a multi-stage rollout of your database schema and code, over some time window. I recall a blog post here (I think) recently from some Big Company (tm) that would run one step from the plan below every week:

    1. Create new fields in the DB.

    2. Make the code fill in the old fields and the new fields.

    3. Make the code read from new fields.

    4. Stop the code from filling old fields.

    5. Remove the old fields.

    Personally, I wouldn't use it until I really needed it. But a simpler form is good: do the required (additive) schema changes iteratively, one iteration earlier than the code changes, and do the destructive changes one iteration after your code stops using those parts of the schema. There's opposite handling for things like "make a non-nullable field nullable" versus "make a nullable field non-nullable", but that's part of the price of smooth operations.

    • 2.5 (if relevant) mass-migrate data from the old column to the new column, so you don't have to wait forever.
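
      (A minimal sketch of that backfill as a Django data migration, with made-up app/field names; batching keeps each UPDATE's lock window short:)

      ```python
      from django.db import migrations, models

      def backfill_price(apps, schema_editor):
          Order = apps.get_model("shop", "Order")
          while True:
              batch = list(
                  Order.objects.filter(price__isnull=True)
                  .values_list("pk", flat=True)[:1000]
              )
              if not batch:
                  break
              # F() keeps the copy inside SQL; no rows are pulled into Python.
              Order.objects.filter(pk__in=batch).update(
                  price=models.F("price_cents") / 100.0
              )

      class Migration(migrations.Migration):
          dependencies = [("shop", "0012_add_price")]  # assumed
          operations = [
              migrations.RunPython(backfill_price, migrations.RunPython.noop)
          ]
      ```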

  • Deploying on Kubernetes using Helm solves a lot of these cases: migrations are run at the init stage of the pods. If successful, pods of the new version are started one by one, while the pods of the old version are shut down. For a short period, you have pods of both versions running.

    When you add new stuff or make benign modifications to the schema (e.g. add an index somewhere), you won't notice a thing.

    If the introduced schema changes are not compatible with the old code, you may get a few `ProgrammingError`s raised from the old pods before they are replaced, which is usually acceptable.

    There are still some changes that may require planning for downtime, or some other sort of special handling. E.g. upgrading a SmallIntegerField to an IntegerField in a frequently written table with millions of rows.

100%

I am quite surprised that most languages do not have an ORM and migrations as powerful as Django's. I get that it's Python's dynamic metaprogramming that makes it such a clean API - but I am still surprised that there isn't much that comes close.

Oh, the automatic migrations scare the bejesus out of me. I really prefer writing out schemas and migrations like in Elixir/Ecto. Plus I like the option of having two different schemas for the same table (even if I never use it).

  • You can ask Django to show you what exact SQL will run for a migration using `manage.py sqlmigrate`.

    You can run raw SQL in a Django migration. You can even substitute your own SQL for otherwise-autogenerated operations using `SeparateDatabaseAndState` (sketch below).

    You have a ton of control while not having to deal with boilerplate. Things usually can just happen automatically, and it's easy to find out and intervene when they can't.

    https://docs.djangoproject.com/en/6.0/ref/django-admin/#djan...

    https://docs.djangoproject.com/en/6.0/ref/migration-operatio...
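
    A minimal sketch of the `SeparateDatabaseAndState` escape hatch, using the classic concurrent-index example (all names invented):

    ```python
    from django.db import migrations, models

    class Migration(migrations.Migration):
        atomic = False  # CREATE INDEX CONCURRENTLY can't run in a transaction
        dependencies = [("myapp", "0003_previous")]  # assumed

        operations = [
            migrations.SeparateDatabaseAndState(
                state_operations=[
                    # What Django records in its model state, so future
                    # makemigrations runs stay consistent.
                    migrations.AddIndex(
                        "article",
                        models.Index(fields=["slug"], name="article_slug_idx"),
                    ),
                ],
                database_operations=[
                    migrations.RunSQL(
                        "CREATE INDEX CONCURRENTLY article_slug_idx "
                        "ON myapp_article (slug);",
                        reverse_sql="DROP INDEX CONCURRENTLY article_slug_idx;",
                    ),
                ],
            ),
        ]
    ```

    `manage.py sqlmigrate myapp 0004` (migration number hypothetical) would then print exactly the SQL that will run, before you apply it.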

  • The nice thing in this case is that Django will meet you where you are with your preferences. Want to go the manual route? Sure. Want it to take a shot at auto-generation and then customize the result? Very doable. Want to let Django take the wheel fully the majority of the time? Sure.

    • Is this like the "it takes 50 hours to set up a project management tool to work the way you want" situation? What happens if you onboard a superstar who works with Django some other way?


  • I have never done it, but I believe you could set up multiple schemas under the same database - by faking them as different databases and then using a custom router to flip between them as you like.

    That sounds like the path to madness, but I do believe it would work out of the box.
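
    A hedged sketch of what that might look like, assuming Postgres schemas exposed as two DATABASES aliases plus a router; every name here is invented and untested:

    ```python
    # settings.py -- same physical database, different search_path per alias.
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "mydb",
            "OPTIONS": {"options": "-c search_path=public"},
        },
        "reporting": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "mydb",
            "OPTIONS": {"options": "-c search_path=reporting"},
        },
    }
    DATABASE_ROUTERS = ["myproject.routers.SchemaRouter"]

    # routers.py -- flip aliases per app label (or any rule you like).
    class SchemaRouter:
        def db_for_read(self, model, **hints):
            return "reporting" if model._meta.app_label == "reporting" else None

        def db_for_write(self, model, **hints):
            return self.db_for_read(model, **hints)
    ```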