Comment by vagab0nd
7 hours ago
I recently started digging into databases for the first time since college, and from a novice's perspective, postgres is absolutely magical. You can throw in 10M+ rows across twenty columns, spread over five tables, add some indices, and get sub-100ms queries for virtually anything you want. If something doesn't work, you just ask it for an analysis and immediately know what index to add or how to fix your query. It blows my mind. Modern databases are miracles.
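The workflow described above (load rows, notice a slow query, ask the database for an analysis, add the index it points at) can be sketched with Python's stdlib sqlite3 standing in for Postgres. The table and column names here are made up for illustration; Postgres's equivalent of the analysis step is `EXPLAIN ANALYZE`, while SQLite calls it `EXPLAIN QUERY PLAN`:

```python
import sqlite3

# Hypothetical example: a small table stands in for the 10M-row case.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders (customer, total) VALUES (?, ?)",
                 [(f"cust{i % 100}", i * 1.5) for i in range(10_000)])

# Without an index, the planner scans the whole table.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'cust7'").fetchall()
print(plan_before)  # the detail column mentions a SCAN of orders

# Add the index the plan suggests is missing, then re-check.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'cust7'").fetchall()
print(plan_after)  # now a SEARCH using idx_orders_customer
```

Same idea in Postgres, just with `EXPLAIN (ANALYZE)` giving you actual row counts and timings instead of a plan summary.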
I don't mean this as a knock on you, but your comment is a bit funny to me because it has very little to do with "modern" databases.
What you're describing would probably have been equally possible with Postgres from 20 years ago, running on an average desktop PC from 20 years ago. (Or maybe even with SQLite from 20 years ago, for that matter.)
Don't get me wrong, Postgres has gotten a lot better since 2006. But most of the improvements have been in terms of more advanced query functionality, or optimizations for those advanced queries, or administration/operational features (e.g. replication, backups, security).
The article actually points out a number of things only added after 2006, such as full-text search, JSONB, etc. Twenty years ago your full-text search option was just LIKE '%keyword%', which is both slower and less effective than real full-text search. It clearly wasn't "sub-100ms queries for virtually anything you want," as GP said.
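A quick sketch of why the LIKE approach stays slow, using stdlib sqlite3 as a stand-in (table and index names are made up): a leading wildcard defeats an ordinary B-tree index, so the planner falls back to reading every row instead of an indexed SEARCH, which is what real full-text indexes (Postgres's GIN over tsvector) avoid:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("CREATE INDEX idx_docs_body ON docs (body)")
conn.executemany("INSERT INTO docs (body) VALUES (?)",
                 [("some keyword here",), ("nothing relevant",)])

# A leading-wildcard LIKE cannot seek into the B-tree index:
# the plan is a full SCAN of every row, never an indexed SEARCH.
scan_plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM docs WHERE body LIKE '%keyword%'").fetchall()
print(scan_plan[0][-1])

rows = conn.execute("SELECT id FROM docs WHERE body LIKE '%keyword%'").fetchall()
print(rows)  # only the first row matches
```

On two rows nobody cares; on 10M rows that scan is your whole latency budget.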
And 20 years ago people were making the exact same kinds of comments and everyone had the same reaction: yeah, MySQL has been putting numbers up like that for a decade.
20 years ago was 2006? Oh no...
Don't get me started on when the 90's were.
> Don't get me wrong, Postgres has gotten a lot better since 2006.
And hardware has gotten a lot better too. As TFA writes: it's 2026.
I am a DBA for Oracle databases, and the XE edition can be used for free. It includes PL/SQL, the reference SQL/PSM implementation. I know how to set up a physical standby, and otherwise I know how to run it.
That being said, Oracle Database SE2 is $17,500 per core pair on x86, and Enterprise is $47,500 per core pair. XE has hard limits on size and limits on active CPUs. XE also does not get patches; if there is a critical vulnerability, it might be years before an upgrade is released.
Nobody would deploy Oracle Database for a new system. It persists only where the costs are already sunk.
Postgres itself has a manual that is 1,500 pages. There is a LOT to learn to run it well, comparable to Oracle.
For simple things, SQLite is fine. I use it as my secrets manager.
Postgres requires a lot of reading to do the fancy things.
Postgres has a large manual not because it's overly complex to do simple things, but because it is one of the best documented and most well-written tools around, period. Every time I've had occasion to browse the manual in the last 20 years it's impressed me.
I read Jason Couchman's book for Oracle 8i certification, and passed the five exams.
It left a lot out, including many important things that I only learned later, as I saw harmful things happening.
The very biggest thing is "nologging," the ability to commit certain transactions that are omitted from the archived redo logs used for recovery.
"You are destroying my standby database! Kyte is explicit that 'nologging' must never be used without the cooperation of the DBA! Why are you destroying the standby?"
It was SSIS, and they could never get it under control. ALTER DATABASE FORCE LOGGING undid their ignorant presumption.
Everything you describe, relational databases have been doing for decades. It's not unique to Postgres.
My perspective might be equally naive as I've rarely had contact with databases in my professional life, but 100ms sounds like an absolutely mental timeframe (in a bad way)
what are you comparing this to btw?
"sub-100ms queries" is not a high bar to clear. Milliseconds aren't even the right unit.
In a typical CRUD web app, any query that takes milliseconds instead of microseconds should be viewed with suspicion.
In a more charitable interpretation, maybe the parent is talking about sub-100ms total round trip time for an API call over the public internet.
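To put a rough number on the microseconds claim, here is a minimal sketch using stdlib sqlite3 in memory (table and values are made up; absolute figures vary by hardware, and a networked Postgres adds wire overhead on top, but an indexed point lookup is in this ballpark):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(i, f"user{i}") for i in range(100_000)])

# A primary-key point lookup: the bread and butter of a CRUD app.
start = time.perf_counter()
row = conn.execute("SELECT name FROM users WHERE id = ?", (54321,)).fetchone()
elapsed = time.perf_counter() - start
print(row, f"{elapsed * 1e6:.0f} microseconds")
```

Typically this lands in the tens of microseconds, which is why a CRUD query that takes whole milliseconds deserves a second look.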
But OP never said it's a CRUD app. Maybe OP did some experimentation with OLAP use cases.