> I'm sure it could be of use in educational situations.
It is a good example of how algorithm design can be done naively and algorithm analysis can be done badly, so definitely has at least some educational value.
Excluding implementation-specific edge cases that would break the underlying assumptions about thread wakeup times, it is an O(N) time-complexity sort in all cases (best, worst, average). A miracle! Get thee to the patent office! Eat my shorts, Shell!
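For anyone who hasn't seen it, the algorithm under discussion (sleep sort) can be sketched in a few lines. This is an illustrative Python version, not anyone's canonical implementation; the `scale` parameter is my own knob for the "base sleep time" mentioned later in the thread:

```python
import threading
import time


def sleep_sort(values, scale=0.05):
    """Spawn one thread per value; each thread sleeps for a duration
    proportional to its value, then appends it to the shared result.
    Correctness relies entirely on the scheduler waking threads in
    (roughly) sleep-duration order."""
    result = []
    lock = threading.Lock()

    def worker(v):
        time.sleep(v * scale)
        with lock:
            result.append(v)

    threads = [threading.Thread(target=worker, args=(v,)) for v in values]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result


print(sleep_sort([3, 1, 4, 1, 5, 9, 2, 6]))
```

Note that with a small `scale`, values that are close together can come back out of order when wakeups jitter, which is exactly the "implementation specific edge case" being waved away above.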
Of course the constant factor involved is so massive that it would not do better than an O(n log n) or even an O(n^2) sort in any practical situation, and that is before considering space requirements and thread-management overhead.
Unless your thread scheduler can fairly choose from linearly many threads in constant time with zero-cost context switching, and your threads take zero time to call sleep(), then at large input sizes you eventually have to make the base sleep time longer, so it's not really linear time complexity.
Mainly for the purposes of making my earlier statement correct, I consider that an implementation-specific problem which we need not consider in theoretical discussions!
The OS needs to pick which thread to run from a pool of N threads - absolute best case an O(log N) operation, either when the thread is queued or when it's being selected. This happens N times, so the runtime seems like it would actually scale as O(N log N).
On the other hand, if you've got a scheduler that can pick the thread with the smallest sleep time from a list in O(1) time, you could just use whatever algorithm it's using to make an O(N) insertion sort...
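The argument above can be made concrete: repeatedly waking the thread with the smallest remaining sleep time is exactly a min-heap extraction, so the scheduler is effectively doing a heapsort on your behalf. A sketch of that hidden sort, using Python's `heapq` purely for illustration (the function name is mine):

```python
import heapq


def scheduler_sort(values):
    """What the OS scheduler is implicitly doing for sleep sort:
    maintain a priority queue of wake times and repeatedly extract
    the minimum. N extractions at O(log N) each is O(N log N) --
    i.e. heapsort in disguise."""
    heap = list(values)
    heapq.heapify(heap)  # O(N) build
    return [heapq.heappop(heap) for _ in range(len(heap))]  # N x O(log N)
```

So a scheduler that could do this selection in O(1) per pick would, by the same token, hand you a general comparison sort faster than the known lower bound, which is a decent hint that it can't exist.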
But if the time multiplier used is high enough, the time spent choosing a thread at each step is comparatively small.
So while that overhead could increase processing time, the wall-clock time need not be affected.
Again this is why I think it makes a good example for illustrating critical thinking in the area of algorithm design and analysis. It is an obviously silly example which could actually work, within certain practical boundaries, and the many obvious problems with it are hidden in more complex processes/structures.