Comment by nicoburns
3 years ago
Yeah, at the billions-of-rows mark it definitely makes sense to start looking at splitting things up. On the other hand, the company I worked for split things up from the start, and when I joined - 4 years down the line - their biggest table had something like 50k rows, but their query performance was awful (tens of seconds in some cases) because the data was so spread out.
Am I missing something?
2^30 is over 1 billion, so searching a properly indexed table with 1 billion rows should take roughly 30 comparisons, similar effort to scanning a 30-row unindexed table.
Or are there other factors coming into play that I haven't thought of?
A thirty-row table fits into the CPU cache; a 1-billion-row table doesn't.
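The back-of-envelope claim about index depth can be sketched in plain Python (assuming a balanced binary search structure; a real B-tree index has a much higher fan-out per node, so it needs even fewer levels than this):

```python
import math

def search_steps(n: int) -> int:
    """Worst-case comparisons for a binary search over n sorted keys."""
    return math.ceil(math.log2(n))

def scan_steps(n: int) -> int:
    """Worst-case comparisons for a linear scan over n unsorted rows."""
    return n

# A billion-row indexed lookup costs about as many comparisons
# as a full scan of a 30-row unindexed table.
print(search_steps(1_000_000_000))  # 30
print(scan_steps(30))               # 30
```

Of course, as the reply above points out, comparison counts aren't the whole story: each of those 30 index probes on a billion-row table may miss the CPU cache (or hit disk), while the 30-row scan stays entirely in cache.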