Comment by kabdib
15 years ago
One of my less well received interview questions is: "Okay, we both know theoretical lower limits on sorting. Can you come up with a terminating algorithm that is most /pessimal/?"
Usually I get a blank stare. Sometimes I get a great answer.
If we're talking about actual sorting algorithms that aren't pathologically inefficient (i.e. if we're not allowing algorithms that deliberately make negative progress or do random stuff instead of trying to find the solution), I'd think the "pessimal" deterministic solution (though, TBH, I'm not 100% clear on the definition: worst-case runtime? worst best-case runtime? average?) would essentially have to be to enumerate the permutation group of N elements, checking each time whether the permutation produces a sorted list. There are factorial(N) members in this group, which is pretty bad worst-case running time for a sort.
There are many ways to enumerate permutations, and we could probably pick a terrible one there, too; with a really naive implementation, without any memoization or anything like that, we could waste all sorts of time.
Bonus points for regenerating the whole list of permutations from scratch for each isSorted(applyPermutation(i, list)) test.
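For concreteness, here's a minimal Python sketch of that scheme, bonus points included (isSorted and applyPermutation are named per the above; everything else is my own framing):

    import itertools
    import math

    def is_sorted(lst):
        return all(lst[i] <= lst[i + 1] for i in range(len(lst) - 1))

    def apply_permutation(perm, lst):
        return [lst[j] for j in perm]

    def permutation_sort(lst):
        n = len(lst)
        for i in range(math.factorial(n)):
            # Bonus-points variant: rebuild the entire permutation
            # list from scratch for every single is_sorted test.
            perms = list(itertools.permutations(range(n)))
            candidate = apply_permutation(perms[i], lst)
            if is_sorted(candidate):
                return candidate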
I think to do much worse you'd probably have to start cheating, throwing in garbage calculations that are merely used to waste time, not steps that anyone would actually think of using to progress towards the solution (i.e. I could certainly dream up a nasty brute-force problem to solve for the index integers used in the inner loop, but that's in the category of deliberate time wasting) - the nice thing about this solution is that it's 100% conceivable that someone that wasn't thinking straight could come up with it and implement it just like this. Hell, a poorly written Prolog would probably come close to implementing this algorithm if the sorting problem was specified in the right (wrong?) way...
There's also always SlowSort: http://c2.com/cgi/wiki?SlowSort, which works by finding the maximum element and then recursively sorting the rest of the list. That wouldn't be so terrible if it weren't for the twist: SlowSort finds each maximum element by splitting the list in half, sorting both halves (using SlowSort, of course) to read off their maxima, and then picking the bigger one. I think this still runs faster than N!, but its runtime is guaranteed, whereas when generating permutations you might accidentally hit the right one in the first step (you could always work around that by generating all the lists first and then checking them, but again, that seems like cheating).
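In code, SlowSort is surprisingly compact; a minimal Python sketch of that recursion, assuming the usual in-place formulation:

    def slowsort(a, i=0, j=None):
        """Sort a[i..j] in place, as slowly as honestly possible."""
        if j is None:
            j = len(a) - 1
        if i >= j:
            return
        m = (i + j) // 2
        # Find the maximum the hard way: recursively sort both halves,
        # then compare the two maxima they expose at a[m] and a[j].
        slowsort(a, i, m)
        slowsort(a, m + 1, j)
        if a[m] > a[j]:
            a[m], a[j] = a[j], a[m]  # a[j] now holds the maximum
        # Recursively sort everything but that maximum.
        slowsort(a, i, j - 1)

The recurrence T(n) = 2T(n/2) + T(n-1) + O(1) works out to roughly n^(Θ(log n)): quasi-polynomial, so indeed slower than any polynomial sort but still well short of N!.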
I think you get the blank stares because half the interviewees don't know what "pessimal" means.
Cmd+Ctrl+d: no entries found
I guess it's a mix of optimal and pessimistic. Pessimistically optimal ("This may be optimal, but I doubt it") or Optimally pessimistic ("This is absolutely the worst possibly outcome ever") :)
The latter:
https://secure.wikimedia.org/wiktionary/en/wiki/pessimal
The other time I've seen a similar derivation used is in "The Story of Mel"
'Mel called the maximum time-delay locations the "most pessimum". '
We're clearly not talking about comparison sorts here, which blows the options wide open. O(keyspace) sorts (especially a naive counting-sort) are truly terrible for short lists with large keyspaces.
Absolute worst non-probabilistic for large inputs? Enumerate all possible lists given the keyspace, check if each contains the same values as the input, check if it's sorted. This is O((k^n)n!) worst-case (and I think average as well), where k is the size of the keyspace and n is the size of the input. Even if you take the obvious optimizations (enumerate only sorted lists, enumerate only lists of the correct length, etc.), it's still terrible. The former improves it substantially, but not enough to finish before the heat death of the universe on a substantial input size. The latter has no effect on the complexity.
edit: Corrected the complexity. It's even worse.
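A sketch of the unoptimized idea, assuming a finite keyspace passed in explicitly (note that itertools.product already fixes the length, so this quietly takes the "correct length" non-optimization, and the multiset check uses a Counter, which is kinder than the naive same-values check behind the n! factor above):

    from collections import Counter
    from itertools import product

    def enumeration_sort(lst, keyspace):
        target = Counter(lst)
        for candidate in product(keyspace, repeat=len(lst)):
            # Same multiset of values as the input?
            if Counter(candidate) != target:
                continue
            # And actually sorted?
            if all(candidate[i] <= candidate[i + 1]
                   for i in range(len(candidate) - 1)):
                return list(candidate)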
Bogobogosort (http://www.dangermouse.net/esoteric/bogobogosort.html) would be a good candidate except that it uses random permutation and therefore might not terminate. Maybe if you replaced the random step with a deterministic one that always yielded a not-yet-seen permutation...
I always like the "shaking box" algorithm:
1. Check if the list is ordered. If it is, we're done!
2. Randomize the order of the list. Go back to step 1.
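Which translates almost line for line into Python (a sketch; random.shuffle stands in for the box-shaking):

    import random

    def shaking_box_sort(lst):
        # 1. Check if the list is ordered. If it is, we're done!
        while any(lst[i] > lst[i + 1] for i in range(len(lst) - 1)):
            # 2. Randomize the order of the list. Go back to step 1.
            random.shuffle(lst)
        return lst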
N!
Worst case O(∞); it's known as 'random sort' or 'bogosort'.
Funny thing, it can theoretically be O(1) on quantum computers, assuming the many-worlds interpretation of quantum mechanics: http://en.wikipedia.org/wiki/Bogosort#Quantum_bogosort ;).
My favorite presentation (http://c2.com/cgi/wiki?QuantumBogoSort) goes approximately like this:
1. Randomize the list.
2. If the list is not sorted, destroy the universe.
Randomizing a list of n entities even once is an O(n) operation; I'm not sure how it could be done in O(1) on a quantum computer.
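For reference, the classic Fisher–Yates shuffle makes one swap per position, which is where the O(n) for a single fair shuffle comes from (a minimal sketch, not tied to any particular quantum proposal):

    import random

    def fisher_yates(lst):
        # One pass, one swap per position: O(n) overall.
        for i in range(len(lst) - 1, 0, -1):
            j = random.randint(0, i)  # inclusive, so "stay put" is possible
            lst[i], lst[j] = lst[j], lst[i]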
This is usually known as bogosort. http://en.wikipedia.org/wiki/Bogosort
This is more commonly known as bogosort (http://en.wikipedia.org/wiki/Bogosort), and it's a good start for running time but isn't guaranteed to terminate. See my other comment for a link to something that goes one better than bogosort on running time.
If the shuffle is truly random, you will eventually find that the list is sorted, and the bogosort will terminate (with probability 1, at least). If your shuffle is not truly random, it may never terminate.
Start by encoding each item to sort into an expression consisting of prime factors, so that it's reversible. Conway chained-arrow notation for the factors would make a nice pessimistic next step, though that might violate your termination requirement, as there won't be enough Planck volumes in the known universe to represent anything beyond the first few terms. :)
The word to google is "simplexity", though that gets used for other less amusing things too, so adding the word "sort" helps. Two examples:
http://web.cs.wpi.edu/~dd/2223/broder_pessimal_algorithms.pd...
http://www.hermann-gruber.com/data/fun07-final.pdf
I don't like this question. Comparison- or value-based? Worst in the average or the best case? What's to stop me from mapping the inputs into, say, busy beaver space?
Just the fact that you object and have something interesting to say puts you ahead of ninety percent of the candidates I see.
[One guy, no foolin', told a cow-orker that a byte, on an Intel platform, contained four bits. Wow.]
In an actual interview situation, I'd be likely to just choke, because it's a question I can't get a good handle on. I hope you prod & probe when asking that question.
Ask a silly question, get a silly answer: