
Comment by estomagordo

4 years ago

I guess more like 30 times slower. Consider mergesort and quicksort, which both (aim to) halve the problem with every step, so the number of steps is logarithmic with base 2. And yes, logarithms in base 2 and base 10 will always be within a constant factor of one another, but seeing as we _are_ talking about constant factors here to begin with...
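To make the "constant factor" point concrete, here's a quick sketch showing that log2(n) / log10(n) is the same constant (log2(10) ≈ 3.32) no matter what n is:

```python
import math

# The ratio of a base-2 log to a base-10 log is a constant, log2(10),
# independent of n -- changing the log base only rescales the step
# count by a fixed factor.
for n in (1_000, 1_000_000, 1_000_000_000):
    ratio = math.log2(n) / math.log10(n)
    print(f"n = {n:>13,}: log2(n)/log10(n) = {ratio:.4f}")

print(f"log2(10) = {math.log2(10):.4f}")
```

Every line prints the same ratio, which is why the base you pick never changes the big-O but does change the constant.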

If we are talking about billions of entries, even of just 32-bit integers, that's 4GB of data, and your sort is probably going to be IO-dominated, not CPU (comparison) dominated. In which case you'd likely use a sorted data structure with a much higher branching factor than 2 (like a B-tree or LSM tree).

Take the B-tree example: with 4KB pages you can fit about 1000 32-bit integers per page, and splitting those between keys and child references gives a branching factor of around 500, so a billion entries is reachable in just over 3 page reads.
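The arithmetic behind that can be sketched as below. The half-keys/half-child-references split per page is an assumption for illustration; real B-tree nodes have headers and variable layouts:

```python
import math

PAGE_BYTES = 4096   # 4KB pages
INT_BYTES = 4       # 32-bit integers

ints_per_page = PAGE_BYTES // INT_BYTES   # 1024, i.e. "about 1000"
# Assumption: half the slots hold keys, half hold child references,
# giving a branching factor of roughly 500.
branching = ints_per_page // 2

entries = 1_000_000_000
depth = math.log(entries, branching)      # levels, i.e. page reads per lookup
binary_levels = math.log2(entries)        # for comparison: binary branching

print(f"branching factor ~ {branching}")
print(f"B-tree depth for 1e9 entries ~ {depth:.2f} levels")
print(f"binary tree depth for 1e9 entries ~ {binary_levels:.1f} levels")
```

So the high fan-out turns ~30 IOs into just over 3, which is the whole point of B-trees for IO-dominated workloads.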