Comment by 9rx
3 hours ago
> I don't see how anyone would design a system that executes 200 queries per page.
They call it the n+1 problem. 200 queries is the theoretically correct approach, but the high network latency of networked DBMSes forces you to hack around it. If the per-query overhead is low, as with in-process SQLite, you would never introduce those hacks in the first place.
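To make the shape of the problem concrete, here is a minimal sketch (table and column names are made up for illustration): the first loop is the n+1 pattern, one query per row; the second is the usual set-based workaround.

```python
import sqlite3

# Illustrative schema and data; names are hypothetical.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE post (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO author VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO post VALUES (1, 1, 'first'), (2, 2, 'second'), (3, 1, 'third');
""")

# The "n+1" shape: one query for the list, then one query per row.
# In-process each call costs microseconds; over a network each call
# pays a full round trip, which is why people hack around it.
posts = db.execute("SELECT id, author_id, title FROM post").fetchall()
for post_id, author_id, title in posts:
    (author,) = db.execute(
        "SELECT name FROM author WHERE id = ?", (author_id,)
    ).fetchone()
    print(title, "by", author)

# The workaround: collapse it into one set-based query with a JOIN.
rows = db.execute("""
    SELECT post.title, author.name
    FROM post JOIN author ON author.id = post.author_id
""").fetchall()
```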
The parent is saying that if you design your application correctly and then move to a system that requires hacks to deal with its real-world shortcomings, you won't be prepared. I think that's a major overstatement, though. If the rest of your application is also correctly designed, introducing the necessary hacks into a couple of isolated places is really not a big deal.
I'd point to the difference between vector-based and scalar-based systems in numerics. If your web programming language were more like MATLAB or APL than PHP, then maybe it could naturally generate the code to do it all with sets. As it is, we are usually writing set-based implementations in scalar-based languages.
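A minimal sketch of that analogy (using numpy purely for illustration): the scalar style touches one element per step, the way application code issues one query per row, while the vector style states the operation over the whole set at once, the way a single JOIN does.

```python
import numpy as np

prices = np.array([10.0, 25.0, 7.5])
quantities = np.array([3, 1, 4])

# Scalar style (PHP-like): visit one element at a time.
total = 0.0
for i in range(len(prices)):
    total += prices[i] * quantities[i]

# Vector style (MATLAB/APL-like): one expression over the whole set,
# analogous to asking the database one set-based question.
total = float(np.dot(prices, quantities))
```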
Part of the "object-relational mapping" problem has always been that SQL is superior to conventional programming languages in many ways.
Of course, the "object-relational mapping" problem is really one of latency. In a theoretical world where latency isn't a thing, there is no object-relational mapping problem. And in the real world, with something in-process like SQLite, it isn't a practical problem either.
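A rough way to check that claim yourself, as a sketch against an in-memory SQLite database (exact numbers will vary by hardware):

```python
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
db.executemany("INSERT INTO t VALUES (?, ?)", [(i, str(i)) for i in range(200)])

# 200 single-row queries: the supposedly pathological n+1 shape.
start = time.perf_counter()
for i in range(200):
    db.execute("SELECT v FROM t WHERE id = ?", (i,)).fetchone()
elapsed = time.perf_counter() - start

# In-process this typically finishes in a millisecond or so in total;
# over a network, the same 200 calls would pay 200 round trips.
print(f"200 queries in {elapsed * 1000:.2f} ms")
```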
SQL was originally designed to run on the same machine as the user, so latency was never envisioned as a problem. It wasn't until Oracle decided to slap networking protocols on top of an SQL engine that it became one. They should have exposed a language more conducive to the limitations of the network, performing the mapping in the same place as the data lives. But such is the life of commercial computing.
Oracle has that now; it just arrived several decades too late, and by then everyone else had copied their bad ideas.