Based on the table linked by @Maniero, there are at least 4 sorting algorithms which, on average, have O(n*log(n)) complexity, namely:
- Quicksort
- Mergesort
- Timsort
- Heapsort
Copying the data to an array may be an option.
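For instance, a minimal sketch of that option (the `Node`/`from_list` scaffolding is my own, not from the linked table): copy the values into a Python list, sort it with `sorted` (Timsort, O(n*log(n)) on average), and write them back in place:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.previous = None
        self.next = None

def from_list(values):
    # build a doubly linked list and return its first node
    first = prev = None
    for v in values:
        node = Node(v)
        node.previous = prev
        if prev is None:
            first = node
        else:
            prev.next = node
        prev = node
    return first

def sort_via_array(first):
    # copy the values into an array, sort them, and write them back
    values = []
    node = first
    while node is not None:
        values.append(node.value)
        node = node.next
    node = first
    for v in sorted(values):
        node.value = v
        node = node.next
```

The trade-off is O(n) extra space for the array, which is exactly what the in-place adaptations below try to avoid.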
The Heapsort page on Wikipedia says:

> Merge sort can be adapted to operate on singly linked lists with O(1) extra space. Heapsort can be adapted to operate on doubly linked lists with only O(1) extra space overhead.
Unfortunately there is no example and no reference there. How to do it? Basically, you replace the random array accesses with references to the elements of the linked list.
In the case of Heap Sort, the Wikipedia algorithm has an excerpt like this:
```
end ← count - 1
while end > 0 do
    (a[0] is the root and largest value. The swap moves it in front of the sorted elements.)
    swap(a[end], a[0])
    (the heap size is reduced by one)
    end ← end - 1
    (the swap ruined the heap property, so restore it)
    siftDown(a, 0, end)
```
So, for example, wherever there is an increment or decrement, replace it with an advance or a rewind on the list. Where the algorithm references the first element by index, use a reference to the first node; the same goes for the last.
Example:
```
procedure heapsort(first, last, count) is
    input: first and last elements of a doubly linked list with length of count

    heapify(first, last, count)

    end ← last
    while end.previous ≠ null do
        swap(end, first)
        end ← end.previous
        count ← count - 1
        siftDown(first, end, count)

procedure heapify(first, last, count) is
    start ← iParent(first, count)

    while start ≠ null do
        siftDown(start, last, count - 1)
        start ← start.previous
        count ← count - 1

procedure siftDown(start, end, count) is
    root ← start

    while iLeftChild(start, root, count) ≤ end do
        child ← iLeftChild(start, root, count)
        swap ← root

        if swap.value < child.value
            swap ← child
        if child ≠ end and swap.value < child.next.value
            swap ← child.next
        if swap = root
            return
        else
            swap(root, swap)
            root ← swap
```
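To sanity-check the idea, here is a runnable Python sketch of the same approach (names like `node_at`, `from_list` and `to_list` are my own scaffolding, not from Wikipedia). It swaps node values and simulates index access by walking the list, which is exactly where the extra cost of the linked-list adaptation comes from:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.previous = None
        self.next = None

def from_list(values):
    # build a doubly linked list; return its first node and its length
    first = prev = None
    for v in values:
        node = Node(v)
        node.previous = prev
        if prev is None:
            first = node
        else:
            prev.next = node
        prev = node
    return first, len(values)

def node_at(first, i):
    # random access simulated by walking i links (the costly part)
    for _ in range(i):
        first = first.next
    return first

def sift_down(first, root, end):
    # classic sift-down on heap indices, reading and writing through the list
    while 2 * root + 1 <= end:
        child = 2 * root + 1
        swap = root
        if node_at(first, swap).value < node_at(first, child).value:
            swap = child
        if child + 1 <= end and node_at(first, swap).value < node_at(first, child + 1).value:
            swap = child + 1
        if swap == root:
            return
        a, b = node_at(first, root), node_at(first, swap)
        a.value, b.value = b.value, a.value
        root = swap

def heapsort(first, count):
    # heapify: sift down every parent, starting from the last one
    for start in range((count - 2) // 2, -1, -1):
        sift_down(first, start, count - 1)
    # extraction: move the max to the end, shrink the heap, restore it
    for end in range(count - 1, 0, -1):
        a, b = node_at(first, 0), node_at(first, end)
        a.value, b.value = b.value, a.value
        sift_down(first, 0, end - 1)

def to_list(first):
    out = []
    while first is not None:
        out.append(first.value)
        first = first.next
    return out
```

Because `node_at` is O(n), this sketch is far slower than array heapsort; it only demonstrates that the index-to-pointer translation preserves the algorithm's behavior.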
Note that in my example above, each element (`first`, `last`, `start`, `end`, `root`, `swap`) is now actually a reference containing 3 attributes: `value`, `previous` and `next`.
It still remains to implement the routines `iParent(start, count)` and `iLeftChild(start, end, count)`.
iParent

Basically you need to find the central element. This is not trivial in linked lists, but it can be easily implemented since we have a reference to the first element and the count. All that needs to be done is advance floor((count - 1) / 2) times from the first element.
iLeftChild

Basically advances to position 2*i + 1, `i` being the position of the element `root` relative to `start`. So it can be easily implemented by advancing a reference i + 1 times from `root`.
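Under those assumptions, the two helpers could be sketched in Python like this (`Node` and `from_list` are my own scaffolding; `iLeftChild` takes the position `i` of `root` as a parameter, since the caller can track it):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.previous = None
        self.next = None

def from_list(values):
    # build a doubly linked list and return its first node
    first = prev = None
    for v in values:
        node = Node(v)
        node.previous = prev
        if prev is None:
            first = node
        else:
            prev.next = node
        prev = node
    return first

def iParent(first, count):
    # advance floor((count - 1) / 2) links from the first element
    node = first
    for _ in range((count - 1) // 2):
        node = node.next
    return node

def iLeftChild(root, i):
    # the node at heap position i has its left child at 2*i + 1,
    # i.e. i + 1 links ahead of root
    node = root
    for _ in range(i + 1):
        node = node.next
    return node
```

Both helpers are O(n) walks, which is the slowdown acknowledged below.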
Does it work? It should, but I didn't test it. Implement at your own risk, keeping in mind that the lookup routines described above will slow the algorithm down a bit, although I believe other optimizations can be found.
Could the downvoter please point out the problem seen in the answer? – cantoni