From what I know, its origin is supposedly unknown because the author is not known. Is that right?
Matheus, if I’m not mistaken, this algorithm’s first known mention is in a report about a Google job interview with a candidate whose name was not revealed (http://gafter.blogspot.com/2006/11/linear-time-sort-puzzler.html). This story may be false, or earlier mentions may exist that we simply haven’t found. It doesn’t matter much either way.
Incidentally, this sorting algorithm is rarely cited compared to the others (Quicksort, Bubble Sort, etc.). Is that a matter of popularity or of provenance?
It is a matter of quality. Perhaps also a matter of didactics: other algorithms are better suited for studying and learning, while this one, in my view, is nothing more than a curiosity in the field of computing.
The idea is creative, but who actually uses it? I’ve never seen it in real use.
Generally, better algorithms are used, such as Insertion Sort (good for sorting small sequences), Quicksort (controversial due to its worst case), Merge Sort (controversial due to its average and best cases, as well as the additional data storage it needs), and Tree Sort (mostly used for building balanced search trees, not as a sorting algorithm).
Usually, those who want high-performance sorting make use of hybrids of known algorithms.
For example, if the sequence to be sorted is small, Insertion Sort is used. If it is not small enough for that to pay off, Quicksort iterations are used to break it into smaller partitions until Insertion Sort becomes worthwhile on them. If the sequence is large enough that worst-case behavior is a real risk, Heapsort is used until it is safe to apply Quicksort with Insertion Sort.
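This hybrid strategy is essentially the idea behind Introsort (the basis of C++’s std::sort). A compact Python sketch of it; the thresholds, names, and pivot choice here are illustrative, not from any source in this thread:

```python
import heapq


def _insertion_sort(a, lo, hi):
    # Sort the small slice a[lo..hi] in place by insertion.
    for i in range(lo + 1, hi + 1):
        key = a[i]
        j = i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key


def _heapsort(a, lo, hi):
    # Heapsort the slice a[lo..hi] via a temporary heap (keeps the sketch short).
    heap = a[lo:hi + 1]
    heapq.heapify(heap)
    for i in range(lo, hi + 1):
        a[i] = heapq.heappop(heap)


def _partition(a, lo, hi):
    # Median-of-three pivot selection, then a Lomuto partition.
    mid = (lo + hi) // 2
    if a[mid] < a[lo]: a[lo], a[mid] = a[mid], a[lo]
    if a[hi] < a[lo]:  a[lo], a[hi] = a[hi], a[lo]
    if a[hi] < a[mid]: a[mid], a[hi] = a[hi], a[mid]
    a[mid], a[hi] = a[hi], a[mid]  # move pivot to the end
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i


def introsort(a, small=16):
    """Quicksort partitions; Heapsort when recursion gets too deep;
    Insertion Sort for small partitions."""
    def sort(lo, hi, depth):
        while hi - lo + 1 > small:
            if depth == 0:
                _heapsort(a, lo, hi)  # depth limit hit: avoid quicksort's worst case
                return
            depth -= 1
            p = _partition(a, lo, hi)
            # Recurse on the smaller half, loop on the larger (bounds stack depth).
            if p - lo < hi - p:
                sort(lo, p - 1, depth)
                lo = p + 1
            else:
                sort(p + 1, hi, depth)
                hi = p - 1
        _insertion_sort(a, lo, hi)

    if a:
        sort(0, len(a) - 1, 2 * max(1, len(a)).bit_length())
    return a
```

The depth limit of about 2·log₂(n) is the usual Introsort choice: enough headroom for good pivots, but a hard cap that keeps the overall worst case at O(n·log(n)).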
And, mainly, as asked in the title: what is its algorithmic complexity?
It depends on what you mean by complexity. The term generically refers to the difficulty of an algorithm (https://en.wikipedia.org/wiki/Computational_complexity), or rather, the costs of running it, which can be measured in number of operations, runtime, number of loop cycles, and even demand for resources such as stored data (i.e., required memory).
For instance, when analyzing comparison-based sorting algorithms, one normally takes into account the number of comparisons and the amount of additional data. Merge Sort, for example, has a comparison complexity of O(n*log(n)) in the best and worst cases (and therefore also on average); its additional-data complexity depends on how it is implemented and on the type of structure, and may be O(1) (in linked lists, for example) or O(n) (in arrays, when the algorithm does not "shift" the elements); and its time complexity may still be equal to or worse than its comparison complexity, depending on the implementation.
In this case, Sleep Sort is not a comparison-based sorting algorithm, so the criteria are different. The thread-count complexity is O(n), because the number of threads equals the length of the sequence. The instruction complexity is also linear, because each element gets one loop iteration with O(1) instructions spawning a thread, and each thread later runs O(1) instructions of its own. The memory complexity is presumably linear as well, since it is proportional to the number of threads. Many linear criteria can be thought of, but I’ll assume that what you really want to know is the runtime complexity.
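The thread mentions code examples but doesn’t show any; here is a minimal Python sketch of the traditional version, where the function name and the `tick` scale factor are my own choices (collecting into a list rather than printing, to make the result checkable):

```python
import threading
import time


def sleep_sort(values, tick=0.01):
    """Traditional Sleep Sort: one thread per element; each thread sleeps
    for a time proportional to its value, then records it. Total wait is
    roughly max(values) * tick."""
    result = []
    lock = threading.Lock()

    def worker(v):
        time.sleep(v * tick)  # the "comparison": bigger values wake up later
        with lock:            # serialize appends to the shared list
            result.append(v)

    # O(n) loop spawning one thread per element.
    threads = [threading.Thread(target=worker, args=(v,)) for v in values]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result
```

Note that correctness depends on the scheduler: with nearly equal values, OS timer resolution and thread wake-up jitter can reorder the output, which is part of why this is a curiosity rather than a practical algorithm.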
Disregarding the execution time of instructions and overheads, the largest element of the sequence determines, in milliseconds, the time until the complete result is available; that is, O(n) where "n" is not the number of elements but the value of the maximum. But the sleep delays are not the only thing that defines real execution time. Traversing the array, spawning the threads, executing them, the contention among them for processing capacity, the console buffer updates and everything else also take time and depend on the operating system and the architecture; all of this can affect the real complexity. Moreover, this algorithm was apparently designed for small sequences; after all, one can imagine countless problems in spawning one thread per element when elements are very numerous. In short, there is no well-defined time complexity.
Still, if you want to force the issue, you can consider the worst case: a nonincreasing sequence (each element decreases or stays equal at each adjacency) of large elements whose values differ in such a way that all thread wake-ups concentrate at the same instant, with no parallelism among the threads, but without further limitations such as memory, possible result errors, etc. What happens then?
First, a loop of complexity O(T), with "T" equal to the size of the sequence. Then, a wait of complexity O(M), with "M" equal to the last element, since the wait starts when that thread is created and ends once all threads have run along with it, the one that slept "M" milliseconds. At the end of the wait, contention handling of unknown complexity O(D), with "D" unknown, and thread execution with complexity O(T).
So what do we get? Linear time in T, plus linear time in M, plus linear time in D, plus linear time in T again. What complexity is that? If no variable is considered a function of another, it is O(T+M+D). If everything is considered proportional, it is O(T) (or, if you prefer, O(M) or O(D)). If "M" is considered quadratic in "T" and cubic in "D", it is O(T), or O(M²), or O(D³). If we disregard "D" and assume that the values of the sequence are bounded, say ranging from 0 to 2³²−1 or from 0 to 2⁶⁴−1, then O(D) and O(M) are both O(1) (however absurd it may be to wait 2³² ms), and therefore the complexity of the algorithm would be O(T). And so on...
And what is the difference between the "traditional" and the "cancellable" versions?
I couldn’t find sources addressing this, but I found discussions about aborting sleeping threads. I suppose the cancellable Sleep Sort is one implemented with the possibility of stopping the whole sort while it is in progress, even if that has to be done thread by thread. The traditional version, as we can see in code examples, is just a loop that creates a thread and puts it to sleep, plus the printing of the value when the sleep finishes, with nothing that allows the whole sorting process to be cancelled.
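Since I found no canonical implementation, here is my guess at what "cancellable" would look like, sketched in Python: each thread waits on a shared `threading.Event` instead of sleeping unconditionally, so setting the event aborts every thread mid-wait. The names and the `tick` scale are mine:

```python
import threading


def cancellable_sleep_sort(values, cancel, tick=0.01):
    """Sleep Sort where `cancel` (a threading.Event) aborts the whole
    sort: threads that are still "sleeping" return without recording
    their value once the event is set."""
    result = []
    lock = threading.Lock()

    def worker(v):
        # Event.wait returns True if the event was set before the timeout
        # elapsed, i.e., if the sort was cancelled during this thread's wait.
        if cancel.wait(timeout=v * tick):
            return
        with lock:
            result.append(v)

    threads = [threading.Thread(target=worker, args=(v,)) for v in values]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result
```

If the event is never set, each `wait` simply times out after the element’s proportional delay and the behavior matches the traditional version; calling `cancel.set()` from any other thread stops all pending waits at once.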
I didn’t know this algorithm, but it strikes me as something of a joke. The way it works doesn’t seem at all efficient
– bfavaretto
Is something missing from my answer? Maybe I can complete it if you tell me.
– RHER WOLF
Yes, the last question, concerning the "cancellable" (or interruptible) version versus the traditional one!!
– Matheus C. França
Do you have sources available that talk about it? I can’t find them anywhere. It’s almost as if the expression was coined informally and simply stuck, which wouldn’t surprise me for algorithms like this. I think my answer is adequate: it simply takes this to be an abortable algorithm, despite the difficulty of stopping each thread.
– RHER WOLF
I have already commented on your answer below, with sources!!
– Matheus C. França