Best worst case sorting algorithm

I want to know which sorting algorithm performs best on its own worst case, compared with the others each running on their own worst case, all applied to sorting arrays.

To make clearer what I want to know, here are some questions that lead to the answer.

Is there a consensus on which comparison-based sorting algorithm is best on arrays when each one is measured on its own worst case?

Among merge sort, timsort, smoothsort, introsort and heapsort, all O(n*log(n)) in comparisons and time, which has the lowest constant factor on the n*log(n) term?

Is there another traditional algorithm that beats these five in the worst case? Or a widely known variant of some algorithm (such as quicksort, where the simple pivot-selection criterion alone has many varieties and a big impact), or a combination of algorithms (introsort + insertion sort, for instance) that beats them?

Finally, which is the best worst-case sorting algorithm for arrays? Consider not only asymptotic complexity but also the constant factor on that term in the comparison-count formula, or experimental time measurements on large inputs.

1 answer

We have to be careful about expectations versus reality when dealing with arrays. Since they are not very dynamic structures, implementing sorting algorithms on them runs into complications and inconveniences that can even feel like traps.

Outside of arrays, things are much less restrictive. In arrays, fast access to data by index is a much sought-after and much exploited benefit for the performance of several operations, but it does not guarantee low complexity for every operation, and it has its price and inconveniences in certain circumstances, which makes choosing an algorithm difficult.

  • Merge sort on linked lists

For example, to move a node in a linked list we just relink it into a position, without manually shifting everything along the way as in arrays, as long as we have a handle on that position. To apply merge sort, you can also handle the pointers cleverly, saving shortcuts to key positions in the list so you do not have to walk node by node to reach them (or, if you do not save them, you traverse the list about 50% more, or even twice as much). This makes merge sort much better suited to linked lists than to arrays.

You can traverse the list a first time to sort each small run of it in isolation. Meanwhile, at a cost of O(n) memory, you can save the starting point of each run (that is, the end of the previous one), so that later you do not have to walk through an entire run just to reach the next one.

You can then traverse it O(log(n)) more times to merge the runs in pairs, doing O(n) comparisons over the whole list per pass (O(n*log(n)) in total) and, whenever an element has to move, simply unlinking it from where it is and chaining it to its destination (up to O(n) times per pass, O(n*log(n)) in total).

In the worst case this gives 1 + n*floor(log2(n)) + n - 2^(floor(log2(n))+1) list comparisons (the same as for arrays, roughly n*log2(n) - n + 1 = O(n*log(n))) and, if well implemented with the shortcuts, at least n + ceil(n/2)*floor(log2(n)) - 2^floor(log2(n)) node relinkings, plus extra ones ranging from zero to floor(2^ceil(log2(n)-1)/3) (all adding up to roughly n*log2(n)/2 + O(n) = O(n*log(n)) movements, unlike arrays).

Note that data movements during merge sort never jump an element over an equal value, so equal values always keep their relative order, which makes merge sort a stable algorithm.
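
A minimal sketch of the idea (my own illustration, not the author's code): merge sort on a singly linked list, where "moving" an element is just re-chaining a node pointer. The answer describes a bottom-up variant with saved shortcuts; this simplified top-down version only shows the relinking and the stability of the merge.

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def merge_sort_list(head):
    if head is None or head.next is None:
        return head
    # Split the list into two halves with slow/fast pointers.
    slow, fast = head, head.next
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
    middle, slow.next = slow.next, None
    return merge(merge_sort_list(head), merge_sort_list(middle))

def merge(a, b):
    # "<=" takes from the left run on ties, which keeps the sort stable.
    dummy = tail = Node(None)
    while a and b:
        if a.value <= b.value:
            tail.next, a = a, a.next
        else:
            tail.next, b = b, b.next
        tail = tail.next
    tail.next = a if a else b
    return dummy.next

# Usage: build 5 -> 1 -> 4 -> 2 and sort it.
head = Node(5, Node(1, Node(4, Node(2))))
head = merge_sort_list(head)
while head:
    print(head.value, end=" ")  # 1 2 4 5
    head = head.next
```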

  • Merge sort on arrays

In arrays these movements happen by assigning new values over the old ones in place. Whether merge sort stays cheap on arrays depends on having enough memory for an auxiliary array of size n; with it, the worst case keeps the O(n*log(n)) complexity not only in comparisons but also in movements, with costs similar to the linked-list version.

If you have no auxiliary array, no memory available... you can only shift elements (after all, there is nowhere else to put them), and that results in O(n²*log(n)) worst-case complexity (O(log(n)) merge levels, O(n) comparisons per level, and up to O(n) assignments per comparison).

The exact number of assignments hardly matters; it is absurdly expensive, and there are better options. In other words, arrays have this memory dependency in order to sort with good performance using merge sort, moving each element only once, from one array to the other, in each of the ceil(log2(n)) (or 2*ceil(log2(n)/2)) passes over the array.
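
A minimal sketch (my own illustration, under the assumptions above): bottom-up merge sort that ping-pongs between the array and one auxiliary buffer of size n, so each element is written exactly once per pass.

```python
def merge_sort_array(a):
    n = len(a)
    src, dst = a, a[:]          # one auxiliary array of the same size
    width = 1
    while width < n:
        for lo in range(0, n, 2 * width):
            mid = min(lo + width, n)
            hi = min(lo + 2 * width, n)
            i, j = lo, mid
            for k in range(lo, hi):
                # "<=" on the left run keeps the sort stable.
                if i < mid and (j >= hi or src[i] <= src[j]):
                    dst[k] = src[i]; i += 1
                else:
                    dst[k] = src[j]; j += 1
        src, dst = dst, src     # swap roles instead of copying back
        width *= 2
    if src is not a:            # make sure the result ends up in `a`
        a[:] = src
    return a

print(merge_sort_array([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```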

  • Tree sort on binary trees

Although it is complicated, in a tree one can temporarily rearrange the pointers so that the nodes work as a list, run merge sort, and then reassemble the tree in order. But apparently the best algorithm in this setting is tree sort. These conveniences rarely come together when sorting arrays.

Whenever there is new data to insert into a balanced search tree, tree sort is ready to insert it with O(log(n)) complexity, handling ties flexibly like a stable sorting algorithm, thus keeping the tree always up to date and ordered, normally without touching the whole tree and requiring as auxiliary data only what is already part of the nodes of a search tree. Even if the tree was not ordered from the start, you can simply rebuild it correctly, insertion by insertion, using the algorithm, and maintain it from then on.
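
A minimal sketch of tree sort (my own illustration, not the author's code): insert every value into a binary search tree, then an in-order walk reads them back sorted; sending equal keys to the right subtree keeps it stable. For brevity this tree is not balanced, so its worst case is O(n²); the AVL version discussed here keeps every insertion at O(log(n)) at the cost of rotations.

```python
class TreeNode:
    def __init__(self, value):
        self.value, self.left, self.right = value, None, None

def insert(root, value):
    if root is None:
        return TreeNode(value)
    if value < root.value:
        root.left = insert(root.left, value)
    else:                      # ties go right -> original order preserved
        root.right = insert(root.right, value)
    return root

def in_order(root, out):
    if root is not None:
        in_order(root.left, out)
        out.append(root.value)
        in_order(root.right, out)
    return out

def tree_sort(values):
    root = None
    for v in values:
        root = insert(root, v)
    return in_order(root, [])

print(tree_sort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```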

If the tree is complete, the number of comparisons in the worst case equals that of merge sort, but tree sort has almost the same performance in the best and worst cases, which gives merge sort an advantage here, since from worst case to best it does almost 50% fewer comparisons. In addition, each insertion may require O(n) rotations across the tree to keep it complete, which makes the algorithm O(n²) in movements in the worst case.

The height of a complete tree, which roughly determines the comparison cost per insertion, is floor(log2(n)+1), while an AVL tree's height ranges from floor(log2(n)+1) up to floor(log((n+1.5)/(0.5+sqrt(0.45)))/log(sqrt(1.25)+0.5)), which in the worst case is about 1.44*log2(n).

So if the tree is instead an AVL tree, the rebalancing rotations on insertion happen only O(log(n)) times even in the worst case, preserving the O(n*log(n)) complexity of the sort not only in comparisons but also in movements.

However, as seen in the previous paragraph, the number of comparisons is higher than merge sort's (despite the same complexity), around 44% more (x1.44) in the worst case. So even with that huge gain it still loses in performance.

  • Tree sort on arrays

Anyway, as far as I know there are no widely studied ways to apply this algorithm to an array, or to simulate a complete or AVL tree inside an array in such a way that it can be sorted with tree sort. Even if you find a way to do it, it does not seem worth it, since in theory tree sort loses to merge sort; it would take some stroke of genius that simplifies the operations, and so far I have not found or thought of anything like that.

Curiosity:

It is possible to implement insertion sort with O(n*log(n)) comparisons (a binary search for the correct position at each insertion). The number of comparisons in the worst case is the same as tree sort on a complete tree and as merge sort. In fact, the sequence of comparisons can be seen as similar to walking a complete tree; in terms of comparisons it is as if it were tree sort.

And guess what? It does not matter. The problem of having to shift elements to insert into an array remains: each insertion moves O(n) elements in the worst case, which guarantees O(n²). In practice it does not really behave like a tree.
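
A minimal sketch of this "binary insertion sort" (my own illustration): a binary search finds the insertion point in O(log(i)) comparisons, but the slice assignment still shifts up to i elements, so movements stay O(n²).

```python
import bisect

def binary_insertion_sort(a):
    for i in range(1, len(a)):
        value = a[i]
        # bisect_right keeps equal elements in order -> stable.
        pos = bisect.bisect_right(a, value, 0, i)
        a[pos + 1:i + 1] = a[pos:i]   # O(i) shift in the worst case
        a[pos] = value
    return a

print(binary_insertion_sort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```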

  • Insertion sort

Speaking of insertion sort, let us state the obvious: it sorts arrays and is stable, but it traditionally shifts elements and has O(n²) worst-case complexity in comparisons and movements, so it is a bad idea for big problems.

Okay, but what about small problems? Have you ever thought about sorting many small sequences that together add up to a large computational effort? Well, it may be worth testing insertion sort's performance in cases like these, even in the worst case. Insertion sort is considered a great algorithm for sorting short arrays. Tests with integers that I did some time ago indicate that in the average case it tends to be good up to around twenty elements, and in the worst case up to around eight.

Even with many comparisons on a small problem, they are very simple, fast instructions that take advantage of the cache. The problem is when the comparisons themselves are expensive, as with strings (even worse with similar prefixes) or records with many repeated fields and lots of tiebreakers (such as exam candidates with similar scores, similar names, in the same age brackets).

In addition, insertion sort is widely used in combination with algorithms that break arrays into pieces to be sorted separately, pieces which may be reasonably small.

One example is merge sort, which can split an array of hundreds of elements into 32 small runs to be sorted with insertion sort, then merged into 16 ordered pieces, then 8, 4, 2 and finally the whole sorted array.

Quicksort also tends to leave, on either side of the pivot, smaller and smaller subarrays that at some point become so small that it is better to apply insertion sort, as in the sketch below. You may gain only a small benefit, but it is a benefit, and it helps across many small and medium-sized arrays, even in worst cases (as long as you choose carefully up to which subarray size insertion sort takes over).
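
A minimal sketch of that combination (my own illustration, not the author's code): quicksort recurses until a partition is at most CUTOFF elements long, then hands the small piece to insertion sort. CUTOFF = 8 is a made-up threshold echoing the worst-case figure mentioned above; it should be tuned experimentally.

```python
CUTOFF = 8  # assumed threshold, purely illustrative

def insertion_sort(a, lo, hi):
    for i in range(lo + 1, hi + 1):
        value, j = a[i], i - 1
        while j >= lo and a[j] > value:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = value

def partition(a, lo, hi):
    # Median-of-three pivot, moved to the end, then Lomuto partition.
    mid = (lo + hi) // 2
    if a[mid] < a[lo]: a[lo], a[mid] = a[mid], a[lo]
    if a[hi] < a[lo]:  a[lo], a[hi] = a[hi], a[lo]
    if a[hi] < a[mid]: a[mid], a[hi] = a[hi], a[mid]
    a[mid], a[hi] = a[hi], a[mid]
    pivot, i = a[hi], lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

def hybrid_quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        if hi - lo + 1 <= CUTOFF:
            insertion_sort(a, lo, hi)   # small piece: hand over
            return
        p = partition(a, lo, hi)
        # Recurse on the smaller side first: O(log n) stack depth.
        if p - lo < hi - p:
            hybrid_quicksort(a, lo, p - 1)
            lo = p + 1
        else:
            hybrid_quicksort(a, p + 1, hi)
            hi = p - 1

data = [9, 3, 7, 1, 8, 2, 5, 4, 6, 0] * 3
hybrid_quicksort(data)
print(data == sorted(data))  # True
```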

  • Quicksort

It is an algorithm known to work on arrays and considered fast in the average case (O(n*log(n)) comparisons and swaps), but it is unstable, worse than insertion sort in the best case (O(n*log(n)) against O(n)) and even worse than heapsort (and therefore merge sort) in the worst case (O(n²) against O(n*log(n))). Also, there is no known way to implement it without stacking data (O(log(n)) memory if well implemented), whether through recursive calls or "manually" with arrays.

Still, the truth is that it has great potential for flexible variants with distinct properties. Quicksort follows the principle of selecting a pivot by some criterion (there is no standard one, but several popular ones) and splitting the array into two parts to be sorted next: one to the left of the pivot, with the values that must end up to its left, and one to the right, with those that must end up to its right. The choice of criterion alone has a major impact on performance.

To analyze this, I wrote in Maple 15 the following procedure, which builds recursive formulas with memoization of reusable results to speed up the calculation and thus make it feasible for relatively large values. It will be used to compute the number of quicksort comparisons in the worst case from recurrences.

[image: Maple procedure for building memoized recursive formulas — not available]

If the pivot-selection criterion, even a random one, is to pick an element immediately without comparing it to others, the worst case is that this value ends up at one end of the array, splitting an array of size "n" into one of size "0" and another of size "n-1". That is, you now have to sort an array of size "n-1". This results in the same number of comparisons as insertion sort's worst case, i.e. quadratic complexity, while executing more instructions.

[image: worst-case comparison counts for the immediate pivot pick (p = 1) — not available]

If the criterion is the classic median of three (sort any three elements to find the middle one), the worst-case split is into "1" and "n-2" elements, and this greatly improves performance. Using the Maple procedure, we see that the speedup apparently grows from x1 on the smallest arrays and converges to x2 as they get larger.

[image: worst-case comparison counts for the median of three — not available]

If we go for the median of five, the worst case splits into pieces of size "2" and "n-3". Speedup from x1 to x3.

[image: worst-case comparison counts for the median of five — not available]

Median of seven? It is "3" and "n-4". Speedup from x1 to x4.

[image: worst-case comparison counts for the median of seven — not available]

In other words, to guarantee a minimum chunk of "m" elements you sort "p = 2*m+1" values (an odd amount), so "m = (p-1)/2". Note that if "p = n" you would sort the whole array just to pick the pivot and then sort the (already sorted) pieces. You must have "p < n" and "m < (n-1)/2" to be able to split into parts and sort them.

As a result, at one recursion level the algorithm sorts "p" elements, one of them becomes the pivot, "m" stay to its left and "m" to its right, the remaining "n-p" still have to be compared with the pivot and, in the worst case, they all end up on one side before the two pieces are sorted (all of them going to the same side of the pivot is also the worst case for element movement). Recursively, the comparison cost for an array of length "n" is the cost of sorting "p", plus the cost of comparing and moving the "n-p" others to one side of the pivot, plus the cost of sorting "m" and of sorting "n-m-1" (i.e. "m+n-p").
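
Since the Maple procedure only appears in the missing images, here is a rough Python reconstruction of that recurrence, memoized bottom-up. It rests on my own assumptions: the sample of p values (and any piece of size at most p) is sorted with insertion sort at p*(p-1)/2 worst-case comparisons; the author's exact sub-costs may differ.

```python
def worst_case_comparisons(n_max, p):
    """Worst-case comparison count of quicksort with a median-of-p pivot."""
    m = (p - 1) // 2
    W = [0] * (n_max + 1)
    for n in range(2, n_max + 1):
        if n <= p:
            W[n] = n * (n - 1) // 2          # small piece: insertion sort
        else:
            # sort the sample, compare the other n-p values with the pivot,
            # then sort the worst-case split: sides of size m and n-m-1.
            W[n] = W[p] + (n - p) + W[m] + W[n - m - 1]
    return W[n_max]

baseline = worst_case_comparisons(9999, 1)   # plain pivot pick
for p in (1, 3, 5, 7):
    c = worst_case_comparisons(9999, p)
    print(p, c, round(baseline / c, 2))      # worst-case speedup vs p = 1
```

Under these assumptions the recurrence reproduces the splits described above (0/n-1, 1/n-2, 2/n-3, 3/n-4), and the printed ratios are consistent with the x2, x3 and x4 convergence mentioned for the medians of three, five and seven.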

Moreover, it seems that increasing the sorted portion used for pivot selection improves the worst case for large arrays. Does it? Taking the whole array for that causes infinite recursion, of course. What if you take "n-1"?

[image: worst-case comparison counts when sorting n-1 elements to pick the pivot — not available]

When you sort "n-1" elements and select the pivot, only one more element has to be compared with it to decide which side it goes to. On one side there are floor((n-2)/2) elements, on the other ceil((n-2)/2)+1 ("+1" because the larger side includes the element not used to choose the pivot) and, despite the split into almost equal pieces (it is nearly the best case), it required sorting almost everything to get there. The result is a speed multiplier of almost zero, that is, speed is lost. And if you look for the pivot among "n-2" elements?

[image: worst-case comparison counts when sorting n-2 elements to pick the pivot — not available]

The factor is much higher than the previous one, but it is still practically zero. In other words, small pivot samples improve things as you enlarge them and large samples improve things as you shrink them, which means there must be a balance point.

Note also that going from one to three elements used to find the pivot improves the worst case for arrays of all sizes with no regressions, while going from 3 to 5 worsens the sizes n=6,7,8,9,10, ties at "n=11" and improves from there on. From 5 to 7 there is also a size from which it pays off, but it is much larger. Apparently, the larger the array, the larger the right number of values to use in the pivot search.

Is it possible to find a formula for this optimal value? If you find it and apply it at each recursion of quicksort, does the worst-case complexity drop to O(n*log(n))? Would it beat at least heapsort, maybe even merge sort?

[image: recursion output listing worst-case comparisons and the optimal sample size — not available]

Although I did not find an answer to all the questions, it was still possible to implement a recursion that finds the worst-case comparison count and the optimal value, through a recursion over the pair [Comparações, ValorÓtimo] (comparisons, optimal value), and to answer some of the most important ones.
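
The two-value recursion is likewise only shown in the missing images, so the following is a hedged reconstruction under the same assumptions as the previous snippet; here the sample of p and the sub-pieces are themselves sorted with the optimal strategy, which may not match the author's procedure exactly.

```python
def best_worst_case(n_max):
    """For each size n, the minimal worst-case comparisons and the odd
    sample size p that achieves it (p = 1 means a plain pivot pick)."""
    W = [0] * (n_max + 1)
    P = [1] * (n_max + 1)
    for n in range(2, n_max + 1):
        best_cost, best_p = (n - 1) + W[n - 1], 1    # p = 1 baseline
        for p in range(3, n, 2):
            m = (p - 1) // 2
            cost = W[p] + (n - p) + W[m] + W[n - m - 1]
            if cost < best_cost:
                best_cost, best_p = cost, p
        W[n], P[n] = best_cost, best_p
    return W, P

W, P = best_worst_case(2000)
print(W[2000], P[2000])   # worst-case comparisons and optimal p at n = 2000
```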

First, note that among the sample sizes "p = 1, 3, 5, ..." the only one that appears just once is 19, while 17, right before it, appears several times. It does not look easy to find a formula that computes "p", does it? The sequence seems very unstable.

Second, note that the speedup at "n=9999" was x58. That means the improvement over the most basic, constant "p=1" algorithm is absurdly high. On the other hand, using the merge sort comparison formula we find a further x7 speedup when moving from this quicksort to merge sort at "n=9999".

[image: comparison of the optimized quicksort against merge sort and heapsort — not available]

However, we notice that the comparison counts of this quicksort and of merge sort start out matching and then drift apart; that is, up to a certain point the improved quicksort beats heapsort, and from then on the roles are reversed.

There are more quicksort variants, for example in how the stretch where the pivot is sought is positioned, adaptations to gain stability, the number of pivots, and so on. But that is a matter of creativity; the possibilities are endless and it never stops once you get attached to it. Besides, further study does not seem to pay off in this context, since after all we can see from the examples I mentioned:

  • the position of the stretch changes neither complexity nor stability,
  • stability is gained either with shifting or with memory allocation (the same pair of options as when implementing merge sort, which is better in the worst case), and
  • multiple pivots are experimentally good in near-average cases but do not seem efficient in the worst case, since more pivots multiply the comparisons with little (or no) reduction in the number of recursion levels.

In other words, even so it seems best to use merge sort when memory is available; otherwise you can sort with heapsort or with the "optimal quicksort" depending on the size of the array. Since those two algorithms are unstable, if you prioritize stability over performance even at the cost of a complexity difference, you can consider insertion sort.

  • Heapsort

Unlike tree sort, this algorithm is aimed at sorting arrays, even though it conceptually treats them as a heap. Its complexity, both in data comparisons and in movements, is O(n*log(n)). It also has a benefit merge sort does not have: it works without an auxiliary array. There is no need to allocate memory to sort at full performance, and no shifting.

But it has two weaknesses. First, it is unstable: it repeatedly swaps pairs of elements from distant parts of the array, and between them there may be a value equal to one of the two, so their relative order changes and is never restored. Second, the number of comparisons in heapsort's worst case is around twice that of merge sort's worst case.

So it apparently loses to the previous algorithms, but at least it works on arrays and needs no auxiliary array. So far, then, the way to go seems to be merge sort if you can afford it and, if you cannot allocate the auxiliary array, an unfortunate choice between stability (insertion sort, "binary" in its comparisons or not) and performance on large arrays (heapsort). Of course, with extra data (associating each element with its initial index) heapsort can be made stable, but that tiebreaking costs comparisons and data movement.
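
A minimal in-place heapsort sketch (my own illustration, not the author's code): build a binary max-heap inside the array, then repeatedly swap the root with the last unsorted slot and sift it down. No auxiliary array, but the long-distance swaps are exactly what makes it unstable.

```python
def sift_down(a, root, end):
    while 2 * root + 1 < end:
        child = 2 * root + 1
        if child + 1 < end and a[child] < a[child + 1]:
            child += 1                      # pick the larger child
        if a[root] >= a[child]:
            return
        a[root], a[child] = a[child], a[root]
        root = child

def heapsort(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):     # heapify the whole array
        sift_down(a, i, n)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]         # move current maximum into place
        sift_down(a, 0, end)
    return a

print(heapsort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```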

Curiosity:

Algorithms like this tend to give each element an operation cost, such as comparisons, proportional to the log2(n) depth levels. In heapsort's case, each level usually compares a node with its children, up to two comparisons, so about 2*log2(n). It simulates a binary heap.

What if a ternary tree were used? Or quaternary? N-ary?

If the proportion is N*logN(n), note that the minimum would be at N = e = 2.718 (impossible), N=2 would cost the same as N=4, and N=3 would presumably be the best.

From binary to ternary, one extra comparison per level combined with a 36.9% reduction in depth gives approximately 3*log3(n) = 1.893*log2(n), a 5.36% reduction in the number of comparisons. How would it fare in practice?
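
A quick numeric check of that arithmetic (my own computation, using the same simplification as above: about N comparisons per level and log_N(n) levels, i.e. N/log2(N) per unit of log2(n)):

```python
import math

for N in (2, 3, 4, 5):
    print(N, round(N / math.log2(N), 3))
# 2 -> 2.0, 3 -> 1.893, 4 -> 2.0, 5 -> 2.153: N=2 and N=4 tie, N=3 is lowest
# (about 5.4% below binary), and 1/log2(3) = 0.631 is the 36.9% depth cut.
```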

And could the N-ary variants be refined further to find something even better than 1.893*log2(n)?

  • Timsort

There’s not much to say.

  • It is aimed at "real-world" inputs, not worst cases.
  • It is based on merge sort.
  • It applies an adapted insertion sort to each of the small runs that will be merged, which does not reduce comparisons in the worst case.
  • To make matters worse, the official implementation unbalances the sizes of the chunks to be merged, worsening the merge step's worst-case performance.

In short, in worst-case terms it is a worse merge sort. Adaptations that make the algorithm a middle ground between merge sort and timsort could produce one with the same worst-case cost as merge sort and better average and best cases.

  • Concluding Remarks

The performance of a sorting algorithm differs notably not only among the various ways of implementing it on the same kind of sortable structure but also among versions for different kinds of structure. Some algorithms are specific to certain structures; others work on several but suit one better than another. There are even heavier structures on which an algorithm nonetheless beats the others because they offer facilities that lower its complexity. There is a vastness of facts to consider.

With pointers, one can structure data so that it can be inserted, accessed, moved and removed in very particular ways, beyond copying values and overwriting old ones to simulate movement (as is done in arrays). While in arrays we read and write by locating an index, other structures offer conveniences such as searching for an item and inserting another right next to it. This certainly affects the performance of a sorting algorithm; after all, an expensive shift in an array is unnecessary in a list, where relinking pointers solves it. Finding an item instantly by index beats finding it in O(log(n)), but inserting it between two others is simpler in a tree. And so on.

Because of this, we cannot talk as if algorithms will always deliver the performance we expect of them. What is good in one situation may not be good in another; each case needs attention. Not only does insertion sort naturally shift several elements to insert one, but we may also get the idea of doing the same in merge sort, quicksort and other algorithms (which were not made for it) to graft on stability, avoid memory allocation and so on, ultimately hurting performance even while keeping the comparison complexity. Care must be taken not to fall into such traps and be surprised later. The main costs are instructions in general and time, not comparisons, as is popularly assumed.

Without certain facilities, choosing an algorithm for arrays is hard. Sorting may require allocating space (as traditionally done, for example, in merge sort on arrays, creating memory demand), shifting values (as traditionally done in insertion sort, creating performance problems on long shifts), swapping values (as traditionally done in quicksort, destroying the stability of the ordering), giving up stability, or giving up the desired complexity, all for lack of facilities that would suit the algorithm, so there is always some loss to weigh.

  • Verdict

How do we pick the best algorithm for worst cases?

Let us start with the simplest case. Small array? Insertion sort. Cheap comparisons on a simple data type? Even better.

And for large volumes of data? It may be worth mixing algorithms: for instance, one that sorts small pieces with insertion sort, medium ones with the "optimal quicksort" and large ones with heapsort or merge sort, each with possible adaptations.

The main algorithm may be the one that creates the divisions, or the one that fully sorts a large array but not the medium and small ones. The intermediate algorithm can further break up the array or the piece it has to sort, the small-scale algorithm sorts the smallest pieces and, if the intermediate or main algorithm still has work to do afterwards (like merge sort, which does the largest merge at the end), it picks up those sorted pieces and finishes the job with the strategy that most reduces complexity.

But to choose the algorithms we need to look at their properties when sorting arrays and weigh the conditions. As noted earlier, the only defect of merge sort is its memory demand. If you have the space, it is a good choice. If massive data structures already occupy much of the memory, you may have to give up its benefits.

If you prioritize performance without gambling on the worst case, heapsort does the job without the memory demand. If you accept a little risk, quicksort is considered a good idea. Perhaps the best approach, thinking of worst cases, is: for a large array, apply heapsort (in full, or to most of it and leave the rest to insertion sort); for a not-so-large array, apply the "optimal quicksort", perhaps refining it down to small pieces sorted with insertion sort.
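
Purely as an illustration of this verdict (not the author's code): a dispatcher that reuses the functions from the earlier sketches, with made-up thresholds, and uses the median-of-three hybrid quicksort as a stand-in for the "optimal quicksort".

```python
SMALL, LARGE = 8, 100_000   # assumed thresholds, to be tuned experimentally

def sort_dispatch(a, can_allocate=True):
    n = len(a)
    if n <= SMALL:
        insertion_sort(a, 0, n - 1)   # tiny array: insertion sort
    elif can_allocate:
        merge_sort_array(a)           # memory available: merge sort
    elif n <= LARGE:
        hybrid_quicksort(a)           # medium, no extra memory: quicksort mix
    else:
        heapsort(a)                   # large, no extra memory: heapsort
    return a
```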

I do not know of a stable algorithm with reasonable performance that also saves memory.

I still have to look into smoothsort.

TO BE CONTINUED
