Comment by xorglorb
15 years ago
It's actually an interesting solution. If you reduced the wait time to a microsecond, it would take about a second to sort 1M elements. Definitely not an optimal solution, and it has some potential for race conditions (if one thread/process got hung up on I/O), but interesting nonetheless.
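For anyone who hasn't seen it, a minimal sketch of the idea in Python (the function name, the dt constant, and the lock are my own choices for illustration, not part of the original; treat it as a toy, not an implementation):

    import threading
    import time

    def sleep_sort(values, dt=0.01):
        # Each value v sleeps for v * dt, then appends itself; if every
        # timer fires on schedule, the appends come out sorted.
        # dt = 0.01 is an arbitrary delay meant to dwarf scheduling jitter.
        result = []
        lock = threading.Lock()

        def worker(v):
            time.sleep(v * dt)
            with lock:  # unsynchronized appends would be one more race
                result.append(v)

        threads = [threading.Thread(target=worker, args=(v,)) for v in values]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return result

    print(sleep_sort([3, 1, 2, 0]))  # usually [0, 1, 2, 3] -- no guarantee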
At that resolution a modern OS is likely to round up the delay to some energy-saving number, and the run queue is going to be so contended that even if the timers fired at the right time, there would be little guarantee the processes awoke in the desired order.
I think on Linux the granularity issue is real for e.g. nanosleep(), but there are also the POSIX clock_*() functions, which I think guarantee higher resolution.
Higher resolution, but I don't think higher precision (especially in the face of I/O).
Don't forget the setup time, and the need to spawn one million threads and then have them all wait until you've spawned the last one before they start their timers...
_lots_ of potential for race conditions.
Easy to solve, just start them in order.
But then it's already sorted...
As long as all the threads are running before the first one finishes... You would have to measure how fast your runtime spawns new threads and adjust the linear timestep dt * x (where x is the element) accordingly. E.g. for 2 1 1 1 1 1 1 1 1 1 0 you would have to make the waiting time for 2 be 2 * dt + epsilon, where epsilon is a function of the element's position in the list to be sorted (sketched below).
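Concretely, here is one sketch of that compensation (assuming the Python shape above; instead of computing a per-position epsilon, each thread sleeps toward a shared absolute deadline, which cancels out the spawn delay the same way):

    import threading
    import time

    def sleep_sort_compensated(values, dt=0.01):
        result = []
        lock = threading.Lock()
        t0 = time.monotonic() + 0.05  # common start; the 0.05s pad past the spawn loop is a guess

        def worker(v):
            # Sleep until t0 + v * dt: whatever time was already burned
            # spawning this thread is subtracted out automatically.
            time.sleep(max(0.0, t0 + v * dt - time.monotonic()))
            with lock:
                result.append(v)

        threads = [threading.Thread(target=worker, args=(v,)) for v in values]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return result

    print(sleep_sort_compensated([2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0]))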
Even if you did that perfectly, it is still only a probabilistic sorting algorithm, with potential for error.
The problem, I think, is that the running time of the algorithm is measured not by the number of inputs, but by the size of the largest input—so, as Estragon (http://news.ycombinator.com/item?id=2657959) (transitively) points out, there's not an appreciable difference between sorting `(1..10^3, 10^6)` and `(1, 10^6)`.
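To put rough numbers on that (dt and spawn_cost here are made-up constants of one microsecond each): the wall-clock time is roughly n * spawn_cost + max(values) * dt, so the max term dominates both lists and each sorts in about a second.

    def sleep_sort_eta(values, dt=1e-6, spawn_cost=1e-6):
        # Back-of-the-envelope estimate: spawning is linear in n, but the
        # sort itself runs until the largest value's timer fires.
        return len(values) * spawn_cost + max(values) * dt

    print(sleep_sort_eta(list(range(1, 1001)) + [10**6]))  # ~1.001 s
    print(sleep_sort_eta([1, 10**6]))                      # ~1.000 s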