Every ORM is bad. Especially the "any DB" ORMs. Because they trick you into thinking about your data patterns in terms of writing application code, instead of writing code for the database. And most of the time their features and APIs are abstracted in a way that basically means you can only use the least-common-denominator of all the database backends that they can support.
I've sworn off ORMs entirely. My application is a Postgres application first and foremost. I use PG-specific features extensively. Why would I sacrifice all the power that Postgres offers me just for some conveniences in Python, or Ruby, or whatever?
Nah. Just write good code for your database.
I use PG with Entity Framework in .NET and at least 90% of my queries don't need any PG-specific features.
When I need something PG specific I have options like writing raw SQL queries.
Having most of my data layer in C# is fantastic for productivity, and in most cases the performance difference compared to raw SQL is negligible.
Coming from Javaland to C#, Entity Framework is a breath of fresh air.
The Npgsql driver automatically applies PG-specific tricks without me having to do anything special.
The only code path I had to tune myself was the data ingress point, which had some race condition issues; everything else seems to perform pretty well out of the box.
Entity Framework really is such a time saver. The PG adapter makes it a breeze not just with common queries, but also more recent stuff, like working with embeddings for vector search.
Nah. The most prolific backend frameworks are all built on ORMs for good reason. The best ones can deserialize inputs, validate them, place those objects directly into the db, retrieve them later as objects, and then serialize them again, all from essentially just a schema definition. Just to name a few advantages. Teams that take velocity seriously should use ORMs. As with any library choice you need to carefully vet them though.
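To make that concrete, here's a loose TypeScript sketch of the "schema in, objects out" flow being described, using Prisma since it comes up later in the thread (the User model and its fields are invented for illustration):

  // Given a Prisma schema with a hypothetical User model, the generated client
  // types the input shape, persists it, and hands back plain objects.
  import { PrismaClient } from '@prisma/client';

  const prisma = new PrismaClient();

  // Deserialize + persist: the input shape comes straight from the schema definition.
  async function createUser(body: { email: string; name: string }) {
    return prisma.user.create({ data: body });
  }

  // Retrieve as an object, then serialize it straight back out.
  async function getUser(id: number) {
    const user = await prisma.user.findUnique({ where: { id } });
    return JSON.stringify(user);
  }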
The "good reason" is that modern web devs do not consider SQL a core skill, and plain do not understand databases. To be a competing modern web framework you have to include an ORM so these people will consider you.
Trying to explain to a modern web dev that the optimum data storage structure is not the same as the optimum application layer data structure, so you can't just take one and map them across 1:1 to the other, is really painful.
Developing without an ORM is just as quick as developing with one (because the time you save on routine queries you will more than lose on the tricky edge cases that the ORM completely screws up on). But you need to know SQL and databases to do it.
ORMs are pretty much the definition of technical debt.
Sometimes debt is worth it. Sometimes the interest rate is too high.
On the other hand, ORMs insulate you from the database's integrity features, since they only expose limited access to what the underlying database can do.
In Postgres that usually means you're not locking rows, you're not using upsert, you might not be writing table DDL yourself. It often means you aren't even using database transactions.
While these things might be extraneous fluff for an all-nighter hackathon, you really have to figure out a sweet spot so that data integrity violations aren't killing your velocity when your service's rubber begins hitting the road.
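To make the above concrete, here's a minimal sketch of those three things - an explicit transaction, a row lock, and an upsert - using plain node-postgres against an invented accounts table rather than any particular ORM:

  import { Pool } from 'pg';

  const pool = new Pool({ connectionString: process.env.DATABASE_URL });

  async function creditAccount(accountId: number, amount: number) {
    const client = await pool.connect();
    try {
      await client.query('BEGIN');

      // Row lock: concurrent writers serialize instead of clobbering each other.
      await client.query(
        'SELECT balance FROM accounts WHERE id = $1 FOR UPDATE',
        [accountId],
      );

      // Upsert: insert the account if it doesn't exist, otherwise add to the balance.
      await client.query(
        `INSERT INTO accounts (id, balance) VALUES ($1, $2)
         ON CONFLICT (id) DO UPDATE SET balance = accounts.balance + EXCLUDED.balance`,
        [accountId, amount],
      );

      await client.query('COMMIT');
    } catch (err) {
      await client.query('ROLLBACK');
      throw err;
    } finally {
      client.release();
    }
  }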
SQLAlchemy is pretty good, because it's mostly a SQL engine with an ORM bolted on top, and the docs actively point users towards the SQL engine rather than using the ORM for everything.
Every ORM except Active Record is awful. Active Record is amazing.
Except Active Record can barely be considered an ORM, IMO. Doing a literal one-to-one mapping between records and objects is not that impressive. A real data-mapper ORM at least gets you true entities that are decoupled from the DB. That way you could totally swap out your data layer without affecting your domain layer. Active Record leads to big-ball-of-mud architecture.
I moved from Rails -> Django and man my life is so painful. The Django ORM is an exercise in patience.
To be fair, Prisma's `OR` clause looks so good. Way better than ActiveRecord.
I still dream about a JS version of Rails' Active Record.
That's a tradeoff that sometimes makes sense. MICROS~1 SQL Server heavily leans into 'use specific features extensively', and countless applications on it consist mainly of stored procedures. It does, however, cause a lock-in that might not be attractive: your customers might be sensitive to which database engine you run their stuff on, and then you need to figure out the common ground between two or more alternatives and build your application in that space.
It's not as uncommon as one might think: one of the big products in public-sector services where I live offers both SQL Server and Oracle as the persistence layer, so they can't push logic into stored procedures or similar techniques.
But just sketching out some schemas and booting PostgREST might be good enough forever; if that's the case, go for it. As for ORMs, I kind of like how Ecto does things in Elixir settings: it solves a few tedious things like validation and 'hydration', and has a macro DSL for generating SQL with concise expressions.
It's actually even worse than this: many Django applications are straight-up Postgres applications. They use Postgres-specific bits of the ORM without hesitation. So they're learning these weird ORM incantations instead of just learning the underlying SQL, which would be knowledge you could apply anywhere.
People just hate embedding SQL into other languages. I don't know why.
I don't understand the hate. The only truly limiting factor for Prisma right now is its poor support for polymorphism; apart from that it has quite good support for complicated index setups, and if you need anything more performant you can just drop to typed raw SQL queries. It also supports views (materialized or otherwise) out of the box.
I recently wanted to check it out and wrote a small app that made good use of pgvector for embeddings and custom queries with CTEs for a few complex edge cases, and it was all quite smooth.
Now it might not be at the level of Active Record, Ecto, or SQLAlchemy, but it was quite decent.
If you know your SQL, it gives you options to drop down a level of abstraction at any point, while still keeping the types so the abstraction isn't broken too much for the rest of the code.
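For example, a pgvector similarity search is the kind of place where dropping to typed raw SQL works nicely; here's a rough sketch (the documents table, its columns, and the embedding format are assumptions for illustration, not anything from the thread):

  import { PrismaClient } from '@prisma/client';

  const prisma = new PrismaClient();

  type DocumentHit = { id: number; title: string; distance: number };

  async function similarDocuments(embedding: number[], limit = 5): Promise<DocumentHit[]> {
    // pgvector accepts a literal like '[0.1,0.2,...]'; cast it explicitly since
    // Prisma has no native vector type.
    const vector = `[${embedding.join(',')}]`;
    return prisma.$queryRaw<DocumentHit[]>`
      SELECT id, title, embedding <-> ${vector}::vector AS distance
      FROM documents
      ORDER BY embedding <-> ${vector}::vector
      LIMIT ${limit}
    `;
  }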
I don't hate Prisma - it's just a tool - but that's far from its only limiting factor.
I recently looked at migrating a legacy project with basic SQL query generation to a modern ORM. Prisma came up top of course so I tried it.
We use Postgres built-in range types. Prisma does not support these, there's no way to add the type to the ORM. You can add them using "Unsupported", but fields using that aren't available in queries using the ORM, so that's pretty useless.
It also requires a binary to run, which would require different builds for each architecture deployed to. Not a big thing but it was more annoying than just switching the ORM.
That coupled with their attitude to joins - which has truth to it, but it's also short-sighted - eliminated Prisma.
The final decision was to switch to Kysely to do the SQL building and provide type-safe results, which is working well.
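For anyone curious what that looks like, here's a rough sketch of the Kysely setup: hand-written table types plus a type-safe query builder. The bookings table and its tstzrange column are invented, but they show how a Postgres type the ORM doesn't know about can still flow through a raw fragment while the results stay typed:

  import { Kysely, PostgresDialect, sql } from 'kysely';
  import { Pool } from 'pg';

  interface BookingsTable {
    id: number;
    room_id: number;
    period: string; // Postgres tstzrange, kept as text on the TS side
  }

  interface Database {
    bookings: BookingsTable;
  }

  const db = new Kysely<Database>({
    dialect: new PostgresDialect({
      pool: new Pool({ connectionString: process.env.DATABASE_URL }),
    }),
  });

  // Find bookings overlapping a given range; the && operator goes through a raw
  // sql fragment, but the result rows stay fully typed.
  async function overlapping(range: string) {
    return db
      .selectFrom('bookings')
      .selectAll()
      .where(sql<boolean>`period && ${range}::tstzrange`)
      .execute();
  }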
Some of those criticisms are out of date.
> It also requires a binary to run, which would require different builds for each architecture deployed to.
https://www.prisma.io/blog/from-rust-to-typescript-a-new-cha...
> That coupled with their attitude to joins
https://www.prisma.io/blog/prisma-orm-now-lets-you-choose-th...
As another poster has mentioned, a thing Prisma has over the others is type safety if you use the raw SQL escape hatch for performance reasons.
How do you do typed raw queries?
https://www.prisma.io/docs/orm/prisma-client/using-raw-sql/r... - it assumes these queries return arrays, and there's a type parameter you can pass in like this:
prisma.$queryRaw<YourType[]>`SELECT * FROM ...`
Check out the docs: https://www.prisma.io/docs/orm/prisma-client/using-raw-sql/t... - it generates output types, though input types still need to be written by you via type comments in the SQL.