The unreasonable effectiveness of modern sort algorithms

There is one sentence I really took away from my years at university; it came from a database implementation course:

> If you have trouble solving some problem, see if sorting the data first helps.

I feel that sorting data is the ultimate computer science hack. Many, many classes of problems turn into O(log n) problems once you sort your input in some way. It might not be the most effective way of solving the problem, but it's often a fairly good one.

So I'm really enjoying how good sorting algorithms are getting, and how, even though the big-O complexity stays mostly the same, the real computing efficiency keeps improving significantly.
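
A minimal Rust illustration of that pattern (the data is invented): pay O(n log n) once to sort, after which each membership or threshold query is O(log n).

    fn main() {
        let mut readings = vec![42, 7, 99, 13, 7, 58];
        readings.sort_unstable(); // pay O(n log n) once up front
        // ...after which every query is O(log n):
        assert!(readings.binary_search(&58).is_ok()); // membership test
        let idx = readings.partition_point(|&x| x < 50); // first index with value >= 50
        assert_eq!(readings[idx], 58);
    }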

  • Just be careful you aren't doing the classic, "my linear regression works way better when I independently sort the inputs and targets"!

    • But this one weird trick is crazy effective, if you value chart aesthetics over meaningfulness, which I do. My charts look great.

  • Also, "shove it in a dictionary" frequently works too... basically "organize the data in some way" is often the answer.

  • Similar to you, after years with databases I saw sorting as a huge advantage and often performed it as a step when optimizing any data access. I've tended to see the same pattern of problems over the last 15 years. Imagine my surprise when I read a blog post showing that not perfectly sorting your data can often result in faster overall response times across a wider range of queries. DuckDB (https://duckdb.org/2025/06/06/advanced-sorting-for-fast-sele...) continues to surprise me with novel, improved approaches to problems I've worked on for years.

  • > Many, many classes of problems turn into O(log n) problems once you sort your input in some way. It might not be the most effective way of solving the problem, but it's often a fairly good one.

    As a corollary, you can use binary search plus linear interpolation as a fast way to approximate a strictly monotonic function, knowing its inverse, over a limited set of outputs (e.g. n -> floor(255 * n^(1/2.2) + 0.5) for n in 0..255). You also get to work on bit_cast<s32>(...) as an optimization when the targeted FPU doesn't have conditional move instructions.
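
    A minimal sketch of that corollary, assuming the function is f(x) = round(255 * (x/255)^(1/2.2)) (my normalization of the example above) and that we only have its inverse g(y) = 255 * (y/255)^2.2; the names are illustrative:

        fn inverse(y: f64) -> f64 {
            // g = f^-1; strictly increasing, so binary search is valid.
            255.0 * (y / 255.0).powf(2.2)
        }

        fn gamma_encode(x: u8) -> u8 {
            let target = x as f64;
            let (mut lo, mut hi) = (0u32, 255u32);
            // Binary-search the 256 possible outputs using only the inverse.
            while hi - lo > 1 {
                let mid = (lo + hi) / 2;
                if inverse(mid as f64) <= target { lo = mid } else { hi = mid }
            }
            // Crude interpolation step: keep whichever endpoint is closer.
            if target - inverse(lo as f64) <= inverse(hi as f64) - target {
                lo as u8
            } else {
                hi as u8
            }
        }

        fn main() {
            assert_eq!(gamma_encode(0), 0);
            assert_eq!(gamma_encode(255), 255);
            println!("{}", gamma_encode(128)); // ~186 for gamma 1/2.2
        }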

  • I spent many years as a programmer somehow avoiding ever doing much with databases, since most problems that seemed to want a database could instead be solved by sorting batch-collected data.

  • Side note: this works great for fermions as well…

    To save the densest among you the oh-so-tedious task of extrapolation, here are some anecdotal examples.

    Dishwasher: Sort your dishes as you pull them out. Cut your walking time to the minimum possible steps; think of this like accessing cached data. Enjoy the now-painless benefit of constantly clean dishes and kitchen.

    Short story: Some friends had a geodesic dome company. I’d get brought out to do setup when they were really busy. These domes had a lot of heavy metal pipes of different and specific lengths. It’s hot, it’s outside, it’s heavy and tedious… pipes would invariably be in a giant pile on a pallet… I caught another friend doing bubble sort… the 56’ dome went up in record time with the two of us.

    More meta-hierarchical: End of life, parents on the cusp of hoarding. Trauma combined with sentimentality manifests as projecting value onto items. Items pile up, life becomes unmanageable; think of this like running out of RAM. Solution: be present, calm, and patient. Help sort the memories and slowly help sort the items. Word of warning: this is NP-hard-af… eventually they will get out of it, and life for them and for you will dramatically improve.

    Further reading: https://knolling.org/what-is-knolling

    • Just be careful: often there are goals other than efficiency. If you get the dishwasher unloaded while cooking but burn the meal, that was a loss. You might be able to do something else while unloading the dishwasher if your sort order is less efficient. Or sometimes sorting is less efficient because there isn't enough work to make up for the overhead of the sort.

  • On a side note, some languages still refer to computers as ‘sorting machines’ or just ‘sorters’.

  • I find that my best bet is to always add a B-Tree. Indexes are the best. Searching on a sorted column is nice, but nowhere near as fast as a nice B-Tree.
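
    A tiny Rust sketch of the point (the toy "index" below is invented): a B-tree keeps keys ordered, so the same structure gives O(log n) point lookups and cheap ordered range scans.

        use std::collections::BTreeMap;

        fn main() {
            // Toy index: key = user id, value = row offset (both invented).
            let index: BTreeMap<u32, u64> =
                [(3, 120), (17, 904), (42, 333), (56, 77)].into_iter().collect();
            assert_eq!(index.get(&42), Some(&333)); // point lookup, O(log n)
            // Ordered range scan, which the sorted structure makes cheap:
            for (id, offset) in index.range(10..50) {
                println!("id {id} at row offset {offset}"); // prints 17, then 42
            }
        }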

  • ... and in many cases it comes at a very low cost: quite often an index already exists to satisfy the usual "quickly find something" need, and most such indexes let us immediately obtain a sorted list.

I find in practice that if the sorting process is too slow, you should begin thinking about different ways to attack the problem. Maintaining a total global order of things tends to only get more expensive over time as the scope of your idea/product/business expands. The computational complexity of the sort algorithm stops mattering once memory utilization becomes the real constraint.

This is why we have things like tournament selection. Randomly sampling from the population and running tournaments is far more scalable than scanning and ordering a global list each iteration. You can maintain something like an Elo score with very narrow views into memory. Nothing needs a global view, yet you get global effects.

  • Could you give an example of reframing a problem from totally ordered complete data to a sampled tournament? I can imagine cases amenable to sampling, but since sampled data is smaller I'm not sure why I wouldn't just sort it.

    • Sorting doesn't yield an Elo score for each item. Though a tournament is a form of sorting. It's like instrumented sorting.

  • I don't yet see how tournament selection could work here, could you explain?

    Sometimes when you think you need to maintain a sorted array under item insertion, it turns out that you only ever need to continually read the next-smallest (or next-largest) item -- and in that case, it suffices to maintain a heap, which is much cheaper.
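
    For example (a minimal Rust sketch): std::collections::BinaryHeap is a max-heap, so wrapping items in core::cmp::Reverse turns it into a min-heap that always serves the next-smallest item, with O(log n) pushes and pops and no fully sorted array anywhere.

        use std::cmp::Reverse;
        use std::collections::BinaryHeap;

        fn main() {
            let mut heap = BinaryHeap::new();
            for x in [42, 7, 19, 3] {
                heap.push(Reverse(x)); // O(log n) insert
            }
            heap.push(Reverse(11)); // insertions can interleave with reads
            while let Some(Reverse(smallest)) = heap.pop() {
                print!("{smallest} "); // 3 7 11 19 42
            }
        }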

    • An example would be an evolutionary algorithm that relies on a large population to maintain diversity. As you get into 6-7 figure population size, ranking the whole set starts to take a really long time (relatively speaking) each iteration. This also requires a serialized phase of processing that halts all workers for the duration.

      With tournament selection, you can randomly pick indexes from the population to build tournaments, which is effectively instant; a minimal sketch follows below. There is no more requirement for a serialized processing phase. All processors can build their own random tournaments and perform score updates. There will be occasional conflicts on score updates, but the idea is that with enough samples/iterations it becomes very accurate.

      Another example: https://danluu.com/2choices-eviction/
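
      A minimal sketch of that loop, with an invented fitness function and a tiny inline RNG so it stays self-contained: each selection touches only k random individuals, never the whole population.

          struct XorShift(u64);
          impl XorShift {
              fn next(&mut self) -> u64 {
                  // xorshift64: good enough for sampling tournament entrants.
                  self.0 ^= self.0 << 13;
                  self.0 ^= self.0 >> 7;
                  self.0 ^= self.0 << 17;
                  self.0
              }
              fn below(&mut self, n: usize) -> usize {
                  (self.next() % n as u64) as usize
              }
          }

          // Toy objective; stands in for whatever the real evaluation is.
          fn fitness(genome: &[f64]) -> f64 {
              genome.iter().sum()
          }

          // Sample k random indexes and keep the fittest -- no global sort,
          // no serialized ranking phase over the whole population.
          fn tournament(pop: &[Vec<f64>], k: usize, rng: &mut XorShift) -> usize {
              let mut best = rng.below(pop.len());
              for _ in 1..k {
                  let challenger = rng.below(pop.len());
                  if fitness(&pop[challenger]) > fitness(&pop[best]) {
                      best = challenger;
                  }
              }
              best
          }

          fn main() {
              let mut rng = XorShift(0x9E3779B97F4A7C15);
              let mut pop: Vec<Vec<f64>> = Vec::with_capacity(100_000);
              for _ in 0..100_000 {
                  pop.push((0..8).map(|_| (rng.next() % 1000) as f64 / 1000.0).collect());
              }
              let parent = tournament(&pop, 4, &mut rng);
              println!("picked {parent}: fitness {:.3}", fitness(&pop[parent]));
          }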

Double jaw-drop. First when the (dynamic) match was slower than the hash map, second when sort_unstable was faster than the hash map!

Cool article. It's clear that all my theoretical algorithm knowledge falls short when faced with real CPUs.

  • Chromium recommends flat_map, a map-like interface backed by a sorted array, for GUI-facing data structures and similar cases where the number of items in the map is naturally bounded. It is faster and more compact than hash maps.
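
    A minimal Rust sketch of the idea, not Chromium's actual base::flat_map API (the names here are invented): keep the (key, value) pairs in one sorted, contiguous Vec and answer lookups with binary search.

        struct FlatMap<K: Ord, V> {
            entries: Vec<(K, V)>, // kept sorted by key; one contiguous allocation
        }

        impl<K: Ord, V> FlatMap<K, V> {
            // Build in O(n log n); duplicate keys are left as-is for brevity.
            fn from_pairs(mut pairs: Vec<(K, V)>) -> Self {
                pairs.sort_unstable_by(|a, b| a.0.cmp(&b.0));
                FlatMap { entries: pairs }
            }

            // O(log n) lookup over cache-friendly contiguous memory.
            fn get(&self, key: &K) -> Option<&V> {
                self.entries
                    .binary_search_by(|(k, _)| k.cmp(key))
                    .ok()
                    .map(|i| &self.entries[i].1)
            }
        }

        fn main() {
            let m = FlatMap::from_pairs(vec![(3, "close"), (1, "open"), (2, "save")]);
            assert_eq!(m.get(&2), Some(&"save"));
        }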

The efforts put into better sorting algorithms like driftsort/ipnsort and better hash functions like foldhash make my life as a developer so much simpler. No matter how clever I try to be, most often just using a foldhash hashmap or sort_unstable is the fastest option.

Neat.

Adaptive radix sorts exist, where the keyspace is divided into roughly equal-sized buckets based on the distribution of the data. The setup is slow enough that this is usually used only for very large sorts that have to go out to disk, or, originally, tape.

It was, reputedly, the first patented algorithm: SyncSort.

Your "Branchless" approach could indeed be implemented very efficiently in a CPU with AVX2 (256-bit-wide vectors). With the current element in rax, the 4 valid values in ymm2, and the 4 running totals in ymm3 (initially zero), the inner loop would be just:

    VPBROADCASTQ rax,ymm1      ; copy rax into all 4 qword lanes of ymm1
    VPCMPEQQ ymm1,ymm2,ymm1    ; per-lane compare against the 4 valid values
    VPADDQ ymm1,ymm3,ymm3      ; add the 0/-1 masks to the running totals

VPBROADCASTQ copies rax into each of the 4 lanes in ymm1. The VPCMPEQQ sets each qword there to all-0 ( = 0) or all-1 ( = -1) depending on the comparison result, so the VPADDQ will accumulate 4 running negative totals into ymm3, which can be negated afterwards.

I would still expect the perfect hash function approach to be faster, though -- a similar number of operations, but 25% of the memory movement.
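
For reference, here is a sketch of the same three-instruction loop written with Rust's std::arch AVX2 intrinsics; the wrapper, the example types, and the final negation are mine, not the original comment's.

    #[cfg(target_arch = "x86_64")]
    #[target_feature(enable = "avx2")]
    unsafe fn count_valid_avx2(data: &[i64], valid_vals: [i64; 4]) -> [i64; 4] {
        use std::arch::x86_64::*;
        let valid = _mm256_loadu_si256(valid_vals.as_ptr() as *const __m256i);
        let mut totals = _mm256_setzero_si256(); // 4 running (negative) totals
        for &x in data {
            let v = _mm256_set1_epi64x(x);         // VPBROADCASTQ
            let eq = _mm256_cmpeq_epi64(v, valid); // VPCMPEQQ: lane = 0 or -1
            totals = _mm256_add_epi64(totals, eq); // VPADDQ
        }
        let mut out = [0i64; 4];
        _mm256_storeu_si256(out.as_mut_ptr() as *mut __m256i, totals);
        out.map(|c| -c) // negate the -1 accumulations into positive counts
    }

Call it behind is_x86_feature_detected!("avx2") so the program stays well-defined on CPUs without AVX2.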

  • “Memory movement”? None of the instructions you list involve memory.

    I find the perfect hash implementation a bit weird; it seems to obfuscate that you simply look at the lowest two bits (since they differ between the four values). You can do the x + 3 and 3 - expr at the very end, once, instead of for every element.

    • Doing the phf as shown is an AND plus a NEG instruction, while just doing % 4 is only the AND. I tested it on an Apple M1 machine and saw no difference in performance at all. It's possible to go much faster with vectorization: 3x on the Zen 3 machine.
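
      A minimal sketch of the low-two-bits observation, with illustrative values chosen so that x & 3 happens to be a perfect hash and ascending order matches hash order (the article's actual values may differ):

          fn sort_four_valued(data: &mut [u64]) {
              // Illustrative values; their low two bits are 0, 1, 2, 3.
              const VALS: [u64; 4] = [4, 9, 14, 19];
              let mut counts = [0usize; 4];
              for &x in data.iter() {
                  counts[(x & 3) as usize] += 1; // the perfect hash: a single AND
              }
              // VALS is ascending in hash order, so emitting the buckets in
              // order yields a sorted result (i.e. a counting sort).
              let mut i = 0;
              for (v, &c) in VALS.iter().zip(&counts) {
                  data[i..i + c].fill(*v);
                  i += c;
              }
          }

          fn main() {
              let mut v = vec![19, 4, 14, 4, 9, 19, 4];
              sort_four_valued(&mut v);
              assert_eq!(v, [4, 4, 4, 9, 14, 19, 19]);
          }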

Wouldn't it make sense to test radix sort? You could do it in one pass with 2 bits, and it would degrade gracefully as the number of bits increased. An MSB bucketing followed by LSB passes would take care of the 5% random-data case with good efficiency.

  • radsort, a radix sort, is present in the comparison results.

    • Hmmm, the performance suggests it's not a particularly good implementation of one (performance at sizes that fit in the cache being lower than at sizes that exceed it...).

      Also the statement in the text is just false ("Radsort is a radix sort and as such unable to adapt to patterns in the data")

      An MSB radix sort absolutely can adapt quite easily. Even LSB can pay some attention. And a hybrid like I suggested (use MSB to bucket based on the high bits, then LSB) absolutely can...

The scenario presented seems very odd. Why would you want to sort 10^7 items that are known to contain only four distinct values? It seems much more likely you would be counting the number of times each value appears, or selecting all of the elements of value X.

  • I believe the purpose of choosing such an odd scenario is to show that, while you might think you can beat the generic sort algos with a more domain-specific implementation, you might be wrong, or you might not gain enough performance to make it worth the other pitfalls of using such algos.

Isn't this just another case of premature optimization? Shouldn't you be adjusting sorting algorithms only when a customer complains?

  • I think the article basically reached this conclusion. Think twice before optimising here, because you may be able to squeeze something out for a very limited scenario, but it can have ugly failure modes and end up being slower in some cases. Plus it takes time and effort. And modern standard sorts are "unreasonably" fast anyway for many practical purposes.

    Then again, only fixing things when a customer complains is a way to end up with a leaning tower of hacks that eventually ossifies. And the customer (or rather the users, who may not be the customer, especially in business software) may be putting up with dozens of niggles and annoyances before they bother to report the one bug they can't work around.

  • It only makes sense to talk about premature optimization in the context of building a production system (or a standard library).

    This is research or experimentation, designed to improve our understanding of the behavior of algorithms. Calling it premature optimization makes no sense.

Efficiency, not effectiveness. They are all effective in the sense that they produce sorted results. Even the non-modern sort algorithms are effective in the sense that the results are correct. This should be about the efficiency with which they do it, right?

  • From TFA:

    > The title is an homage to Eugene Wigner's 1960 paper "The Unreasonable Effectiveness of Mathematics in the Natural Sciences".

  • Agreed. Effectiveness would imply that some algorithms are more likely to sort the list correctly than others, or that they sort a higher percentage of elements. Efficiency is about factors external to correctness.

  • "The Unreasonable Effectiveness of Mathematics in the Natural Sciences" is one of those titles that gets imitated a lot for some reason. Maybe even more than "Goto Considered Harmful".