Sort a linked list with an O(n log n) method

I need to sort a linked list (my own implementation, without using the Java API) with a sorting method of O(n log n) complexity. Researching methods that satisfy this condition, I found quicksort; however, it presupposes random access to the elements being sorted, which is unviable in the case of a linked list.
So I had the idea of copying all the elements of my list to an array, sorting them with quicksort, and copying them back to the list afterwards. Is this solution viable? Does it maintain the desired complexity? Also, are there other methods with this same complexity that are better suited to a linked list?

3 answers

3

Based on the table linked by @Maniero, there are at least 4 sorting algorithms which, on average, have O(n log n) complexity, namely:

  • Quicksort
  • Mergesort
  • Timsort
  • Heapsort

Copying the data to an array may be an option.
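As a hedged sketch of the copy-out/copy-back idea (class and method names are mine, not from the question): copy the node values into an array, quicksort the array, and write the values back in one pass. Both copies are O(n), so the average bound stays O(n log n).

```java
// Sketch: sort a hand-rolled singly linked list of ints by copying to an
// array, quicksorting it, and copying the values back. Node and
// LinkedListSort are illustrative names, not from the question.
class Node {
    int value;
    Node next;
    Node(int value) { this.value = value; }
}

public class LinkedListSort {
    // Copy values out, sort the array, copy back: O(n) + O(n log n) + O(n).
    static void sortViaArray(Node head, int count) {
        int[] a = new int[count];
        int i = 0;
        for (Node n = head; n != null; n = n.next) a[i++] = n.value;
        quicksort(a, 0, count - 1);
        i = 0;
        for (Node n = head; n != null; n = n.next) n.value = a[i++];
    }

    // Plain Lomuto-partition quicksort: O(n log n) on average, O(n^2) worst case.
    static void quicksort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int pivot = a[hi], p = lo;
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) { int t = a[j]; a[j] = a[p]; a[p] = t; p++; }
        }
        int t = a[hi]; a[hi] = a[p]; a[p] = t;
        quicksort(a, lo, p - 1);
        quicksort(a, p + 1, hi);
    }

    public static void main(String[] args) {
        Node head = new Node(3);
        head.next = new Node(1);
        head.next.next = new Node(2);
        sortViaArray(head, 3);
        StringBuilder sb = new StringBuilder();
        for (Node n = head; n != null; n = n.next) sb.append(n.value);
        System.out.println(sb);  // prints 123
    }
}
```

Note that plain quicksort only gives O(n log n) on average; its worst case is quadratic regardless of the copy.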

The Heapsort page on Wikipedia says:

Merge sort can be adapted to operate on singly linked lists with O(1) extra space. Heapsort can be adapted to operate on doubly linked lists with only O(1) extra space overhead.


Unfortunately there is no example and no reference. How can this be done? Basically, you replace random access to the array with references to the elements of the linked list.

In the case of heapsort, the Wikipedia algorithm has an excerpt like this:

end ← count - 1
while end > 0 do
    (a[0] is the root and largest value. The swap moves it in front of the sorted elements.)
    swap(a[end], a[0])
    (the heap size is reduced by one)
    end ← end - 1
    (the swap ruined the heap property, so restore it)
    siftDown(a, 0, end)

So, for example, wherever there is an increment or decrement of an index, replace it with an advance or a step back on the list; wherever the first element is referenced by index, use a reference to the first node, and likewise for the last.

Example:

procedure heapsort(first, last, count) is
    input: first and last elements of a doubly linked list with length of count

    heapify(first, last, count)
    end ← last
    while end.previous != null do
        swap(end, first)
        end ← end.previous
        count ← count - 1
        siftDown(first, end, count)


procedure heapify(first, last, count) is
    start ← iParent(first, count)
    while start != null do
        siftDown(start, last, count - 1)
        start ← start.previous
        count ← count - 1


procedure siftDown(start, end, count) is
    root ← start
    while iLeftChild(start, root, count) comes before or is end do 
        child ← iLeftChild(start, root, count)
        swap ← root     
        if swap.value < child.value
            swap ← child
        if child.next comes before or is end and swap.value < child.next.value
            swap ← child.next
        if swap = root
            return
        else
            exchange the values of root and swap
            root ← swap

Note that in my example above, each element (first, last, start, end, root, swap) is now actually a node reference with three attributes: value, previous and next. Also note that the node comparisons against end are by position in the list, not by value, and that the final test compares references (swap = root), since distinct nodes may hold equal values.

It still remains to implement the routines iParent(start, count) and iLeftChild(start, end, count).

  • iParent basically needs to find the parent of the last element. This is not trivial in linked lists, but it can easily be implemented since we have a reference to the first element and the count: just advance floor((count - 2) / 2) times from the first element.
  • iLeftChild basically advances to position 2*i + 1, i being the position of the element root relative to start. So it can easily be implemented by advancing a reference i + 1 times from root.

Does it work? It should, but I haven't tested it. Implement at your own risk, bearing in mind that the search routines described above will slow the algorithm down a bit, although I believe other optimizations can be found.
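The walking idea above can be sketched in Java roughly as follows. DNode, advance and all the method names are mine, not from any reference, and because every child lookup walks the list from the front, each sift costs O(n) and the whole sort O(n²); treat it purely as an illustration of replacing index arithmetic with node walking, not as an O(n log n) implementation.

```java
// Hedged sketch: heapsort over a doubly linked list by swapping node
// values and walking to "indices" instead of using array access.
class DNode {
    int value;
    DNode previous, next;
    DNode(int value) { this.value = value; }
}

public class ListHeapSort {
    // Advance k times from a node (k >= 0); returns null past the end.
    static DNode advance(DNode n, int k) {
        while (n != null && k-- > 0) n = n.next;
        return n;
    }

    static void swapValues(DNode a, DNode b) {
        int t = a.value; a.value = b.value; b.value = t;
    }

    // Restore the max-heap property below index rootIdx within heapSize nodes.
    static void siftDown(DNode first, int rootIdx, int heapSize) {
        while (true) {
            int childIdx = 2 * rootIdx + 1;                  // left child index
            if (childIdx >= heapSize) return;
            DNode root = advance(first, rootIdx);
            DNode child = advance(root, childIdx - rootIdx); // walk forward
            // Pick the larger of the two children, if the right one exists.
            if (childIdx + 1 < heapSize && child.next.value > child.value) {
                child = child.next;
                childIdx++;
            }
            if (child.value <= root.value) return;
            swapValues(root, child);
            rootIdx = childIdx;                              // continue from the child
        }
    }

    static void heapsort(DNode first, int count) {
        // Build the heap: sift down from the parent of the last node.
        for (int i = (count - 2) / 2; i >= 0; i--) siftDown(first, i, count);
        // Repeatedly move the max to the end of the shrinking heap.
        DNode end = advance(first, count - 1);
        for (int size = count; size > 1; size--) {
            swapValues(first, end);
            end = end.previous;
            siftDown(first, 0, size - 1);
        }
    }
}
```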

3

The question asks for something that can guarantee the complexity; it says nothing about speed. These are different concepts.

A linked-list sort with this complexity is hard to find; quicksort alone does not guarantee it (its worst case is O(n²)). Copying first only makes the situation worse.

Copying to another structure can give more speed, but it does not give this complexity.

It seems that someone has managed to do it with mergesort, without copying anything first :) The accepted answer to a question on SO describes precisely this solution.

Mergesort, like heapsort, can give the complexity required in the question. These are two well-known algorithms, though of course it depends somewhat on the implementation. The usual problem with these two algorithms is that they consume a lot of extra space: O(n). The question says nothing about space, but precisely because this is a linked list, these algorithms can get by with O(1).
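As a hedged sketch of that idea (Node and the method names are mine): merge sort on a singly linked list can relink nodes instead of copying them, splitting with the slow/fast-pointer trick. Apart from the recursion stack it uses O(1) extra space.

```java
// Hedged sketch: top-down merge sort on a singly linked list, relinking
// nodes rather than copying values. Node and ListMergeSort are my names.
class Node {
    int value;
    Node next;
    Node(int value) { this.value = value; }
}

public class ListMergeSort {
    static Node mergeSort(Node head) {
        if (head == null || head.next == null) return head;
        // Find the midpoint with the slow/fast pointer technique.
        Node slow = head, fast = head.next;
        while (fast != null && fast.next != null) {
            slow = slow.next;
            fast = fast.next.next;
        }
        Node second = slow.next;
        slow.next = null;                 // cut the list in two
        return merge(mergeSort(head), mergeSort(second));
    }

    // Merge two sorted lists by relinking nodes: O(1) extra space.
    static Node merge(Node a, Node b) {
        Node dummy = new Node(0), tail = dummy;
        while (a != null && b != null) {
            if (a.value <= b.value) { tail.next = a; a = a.next; }
            else                    { tail.next = b; b = b.next; }
            tail = tail.next;
        }
        tail.next = (a != null) ? a : b;
        return dummy.next;
    }
}
```

A bottom-up variant removes even the O(log n) recursion stack, which is what the O(1)-space claim for mergesort on lists usually refers to.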

Take a look at radix sort; it usually works miracles, but it is hard to implement.
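For illustration, a hedged sketch of LSD radix sort on an array of non-negative ints (class and method names are mine). It is not comparison-based: with fixed-width keys it does a constant number of stable counting-sort passes, so it runs in O(n) for bounded key width.

```java
// Hedged sketch: LSD radix sort for non-negative ints, one byte per pass.
// Four passes over 8-bit digits cover the full 32-bit key.
public class RadixSketch {
    static void radixSort(int[] a) {
        int[] out = new int[a.length];
        for (int shift = 0; shift < 32; shift += 8) {
            int[] count = new int[257];
            // Histogram of the current 8-bit digit, offset by one slot.
            for (int v : a) count[((v >>> shift) & 0xFF) + 1]++;
            // Prefix sums turn counts into starting positions.
            for (int i = 0; i < 256; i++) count[i + 1] += count[i];
            // Stable scatter into the output array.
            for (int v : a) out[count[(v >>> shift) & 0xFF]++] = v;
            System.arraycopy(out, 0, a, 0, a.length);
        }
    }
}
```

Negative numbers would need an extra sign-handling pass; this sketch deliberately leaves that out.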

This table can help; even though it is not specific to linked lists, a properly implemented algorithm can ensure the required complexity.

0

Copying might not be a bad idea.

O(n*log(n) + n), for sufficiently large n, can be considered the same as O(n*log(n)).

Note, however, that Big-O notation is a very important guide, but not definitive in the practical analysis of algorithms. There are several hardware optimization details that are not taken into account in a purely Big-O analysis.

  • Could the downvoter please point out the problem they saw in the answer?
