Comment by lostcolony
4 years ago
...I fully plan to use "O(whatever)". Not sure for what.
But, yes. (Naive) Quicksort's amortized complexity being O(n log n), while it's O(n^2) on already-sorted data, is all I ever needed to learn that lesson. When sorting already-sorted data is worse than sorting randomized data, you quickly realize that "amortized cost" = "read the fine print".
Quicksort's O(n log n) is not amortized complexity, but average-case runtime over random data.
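To make the worst case concrete, here's a minimal runnable sketch (mine, not from the thread) of a naive quicksort that always takes the first element as pivot. On input sorted in either direction, every partition is maximally lopsided, so comparisons grow as n^2, while random input stays near n log n:

```python
import random
import sys

def naive_quicksort(xs):
    """First-element-pivot quicksort; returns (sorted list, comparison count)."""
    if len(xs) <= 1:
        return list(xs), 0
    pivot, rest = xs[0], xs[1:]
    left = [x for x in rest if x < pivot]    # everything below the pivot
    right = [x for x in rest if x >= pivot]  # everything else
    left_sorted, left_comps = naive_quicksort(left)
    right_sorted, right_comps = naive_quicksort(right)
    # count one pivot comparison per partitioned element
    return left_sorted + [pivot] + right_sorted, left_comps + right_comps + len(rest)

if __name__ == "__main__":
    sys.setrecursionlimit(10_000)  # sorted input recurses ~n deep
    n = 500
    for name, data in [("random", random.sample(range(n), n)),
                       ("sorted", list(range(n))),
                       ("reversed", list(range(n - 1, -1, -1)))]:
        _, comps = naive_quicksort(data)
        print(f"{name:>8}: {comps:6d} comparisons")
```

For n = 500, both sorted and reversed input do n(n-1)/2 = 124,750 comparisons, versus a few thousand for random input.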
Interestingly, it is quite easy to double the speed of Quicksort, even at this late date, with just a couple-line change.
<http://cantrip.org/sortfast.html>
Anyway I thought it was interesting.
Something that is amortized complexity: appending to a dynamic array.
Most of the time, it's O(1). Sometimes it's O(n): when the backing array runs out of space, every existing element has to be copied into a larger one. If you double the size of the backing array each time that happens, appending is amortized O(1).
Or triple, or quadruple. Even "increase by 50%" works: any constant growth factor greater than 1 makes the total copying cost a geometric series, so appends stay amortized O(1). But doubling a number is cheap (a single shift) and more conservative than quadrupling (the next "cheap" multiplier).
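Here's the "actual math" as a runnable sketch (the function name and setup are mine, not from the thread): count the element copies a grow-by-factor array performs over n appends. Any factor r > 1 bounds total copies by roughly n/(r-1), so the amortized cost per append is O(1); the factor only changes the constant.

```python
import math

def total_copies(n_appends, growth_factor):
    """Total elements copied during reallocations over n_appends appends."""
    capacity, size, copies = 1, 0, 0
    for _ in range(n_appends):
        if size == capacity:  # backing array full: reallocate
            copies += size    # copy every existing element over
            capacity = max(capacity + 1, math.floor(capacity * growth_factor))
        size += 1
    return copies

if __name__ == "__main__":
    n = 1_000_000
    for factor in (1.5, 2, 3, 4):
        print(f"factor {factor}: {total_copies(n, factor) / n:.2f} copies per append")
```

1.5x growth costs about 2 copies per append versus about 1 for doubling; the trade-off is that smaller factors copy more but waste less capacity, which is why real std::vector implementations variously pick 1.5x or 2x.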
Fair point; I was confusing my terminology. The analogy and the realization still hold.
Also, "already sorted" cuts both ways: with a first-element pivot, input sorted in either direction hits the quadratic worst case, while quicksort implementations that detect presorted runs can handle forward-sorted input in linear time. This is an important difference in practice: the data you use might indeed often be appropriately sorted, but will seldom be sorted in reverse order.
I (sadly) have to resort to "O(scary)" more often than I'd like.
From: https://stackoverflow.com/a/185576/368409