void bubblesort(int *A, int n) {
    int i, aux;
    for (i = 0; i < n - 1; i++) {    // one pass: pushes the largest of A[0..n-1] to position n-1
        if (A[i] > A[i + 1]) {
            aux = A[i];
            A[i] = A[i + 1];
            A[i + 1] = aux;
        }
    }
    if (n - 1 > 1)                   // recurse on the first n-1 elements
        bubblesort(A, n - 1);
}
void bubblesortI(int *A, int n) {
    int i, aux, j;
    int troca = 0;                        // cost: 1
    for (i = 0; i < n - 1; i++) {         // worst case: 2n          best case: 1
        for (j = 0; j < n - i - 1; j++) { // worst case: n² + n - 2  best case: 2n
            if (A[j] > A[j + 1]) {        // worst case: (n² + n)/2  best case: n - 1
                aux = A[j];               // worst case: (n² + n)/2  best case: 0
                A[j] = A[j + 1];          // worst case: (n² + n)/2  best case: 0
                A[j + 1] = aux;           // worst case: (n² + n)/2  best case: 0
                troca = 1;                // worst case: (n² + n)/2  best case: 0
            }
        }
        if (!troca)                       // worst case: n - 1       best case: 1
            break;
    }
    // Worst case: O(n²)   Best case: O(n)
}
I’m studying sorting and complexity, and I ended up writing the recursive and iterative bubblesort code above and wanted to determine their order of complexity; the problem is that I can’t understand how to calculate the complexity of recursive code.
The cost of each line is annotated next to the iterative code.
PS: If anyone has a tip on calculating the cost and complexity of recursive algorithms, I’d be glad to hear it.
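One way to approach the recursive version is to write a recurrence for its cost and expand it. A rough sketch (my own working, constant factors ignored), assuming each call does about n - 1 comparisons in its loop and then recurses on n - 1 elements:

T(n) = T(n - 1) + (n - 1),  with T(2) = 1
T(n) = (n - 1) + (n - 2) + ... + 1
     = (n² - n)/2
     => O(n²)

Under those assumptions the recursive bubblesort is quadratic in every case, since it has no early exit; only the iterative version gets the O(n) best case from the break.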
The recursive version carries a memory cost that is not accounted for in your calculations. Whereas the iterative version uses O(1) memory, your recursive version spends O(n) memory. It is also worth noting that the cost of a function call is not of the same order of magnitude as the cost of an arithmetic operation; it is usually (much) heavier.
– Jefferson Quesado
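To make the O(n) memory point concrete, here is a minimal sketch (my illustration, not code from the comment) that instruments the recursive version with a depth counter; the names bubblesort_depth and max_depth are made up for the example:

#include <stdio.h>

static int max_depth = 0;   /* tracks the deepest level of recursion reached */

void bubblesort_depth(int *A, int n, int depth) {
    int i, aux;
    if (depth > max_depth)
        max_depth = depth;
    for (i = 0; i < n - 1; i++) {          /* same pass as the original code */
        if (A[i] > A[i + 1]) {
            aux = A[i];
            A[i] = A[i + 1];
            A[i + 1] = aux;
        }
    }
    if (n - 1 > 1)
        bubblesort_depth(A, n - 1, depth + 1);
}

int main(void) {
    int A[8] = {7, 3, 5, 1, 8, 2, 6, 4};
    bubblesort_depth(A, 8, 1);
    printf("max recursion depth for n=8: %d\n", max_depth);  /* prints 7, i.e. n - 1 */
    return 0;
}

For n = 8 it reports a depth of 7, that is, n - 1 stack frames alive at once, which is where the O(n) memory comes from.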
Moreover, I cannot imagine how a simple assignment operation can have quadratic cost in the worst case and 0 in the best; looking at it simply, it should be 1 in either case, and the same goes for a comparison. Anyway, measuring complexity is not measuring speed. Speed involves a huge number of factors, including the conditions at the moment of execution: if you want to know the speed, run both and check, and if you change the conditions under which they run, the speed can change. As for the complexity, unless I have missed something, both have the same complexity.
– Maniero
@Maniero I think he considers that in the best case that code is not executed (which is right, btw)
– Jéf Bueno
@Maniero as LINQ said, in the best case that part of the code is not executed and therefore has zero cost; in the worst case it is executed every time, so it costs (n²+n)/2, since 1 is assigned to it on every iteration, the same way as the if.
– Hatus Daniel
@Jeffersonquesado I don’t know if I’m right, but isn’t the cost of the recursive algorithm its cost per call times the number of times it is called?
– Hatus Daniel
@Hatusdaniel that would be true if the cost were homogeneous. I may have a sorting algorithm that does o(n log n) reads and o(n log n) swaps but is less efficient than an algorithm that does o(n²) reads and only o(n) swaps. If the cost of a swap is greater than the cost of a read, much greater, the second algorithm is asymptotically linear, while the first is linearithmic. So yes, in complexity analysis you need to take the heterogeneity of the cost of operations into account.
– Jefferson Quesado
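As one concrete case of the trade-off described in that comment (this sketch is my own addition, with illustrative names): selection sort does O(n²) comparisons but at most n - 1 swaps, so counting the two operations separately shows why their individual costs matter.

#include <stdio.h>

/* Selection sort instrumented to count comparisons and swaps separately. */
void selectionsort_counted(int *A, int n, long *cmp, long *swp) {
    int i, j, min, aux;
    for (i = 0; i < n - 1; i++) {
        min = i;
        for (j = i + 1; j < n; j++) {   /* find the smallest remaining element */
            (*cmp)++;
            if (A[j] < A[min])
                min = j;
        }
        if (min != i) {                 /* at most one swap per outer iteration */
            aux = A[i];
            A[i] = A[min];
            A[min] = aux;
            (*swp)++;
        }
    }
}

int main(void) {
    int A[6] = {6, 5, 4, 3, 2, 1};      /* reversed input: worst case */
    long cmp = 0, swp = 0;
    selectionsort_counted(A, 6, &cmp, &swp);
    printf("comparisons: %ld, swaps: %ld\n", cmp, swp);  /* 15 comparisons, 3 swaps */
    return 0;
}

On this same input, bubblesort would do 15 comparisons and 15 swaps, so if swaps dominate the cost, counting only how many times the loops run misses the difference.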
@LINQ but he is analyzing line by line and not the algorithm as a whole. That’s not how you measure complexity; you measure it atomically. If you are going to do the analysis per line, it has to cover only what is done on that line. If you are going to analyze the loop, it has to be done as a whole and not over its individual lines.
– Maniero
@Maniero I don’t think you understood: what I calculate line by line is the total cost, that is, the cost of the line times how many times it is executed. It helps when you build the cost function, because then I just need to add up the lines I need; for example, the swap cost function would be CT(n) = (cost of line x) + (cost of line y) and so on. Then, having the swap cost function, I can calculate the order of the swaps.
– Hatus Daniel
@Jeffersonquesado I understand what you are saying, that there is a complexity for the swaps and a complexity for the reads. But the algorithm as a whole still has a single complexity, doesn’t it?
– Hatus Daniel
From the comment directed at me I think you did not understand how to calculate complexity; from the comment directed at Jeffersonquesado it seems you did, but you are mixing the two approaches.
– Maniero
@Maniero I’m sorry, man, but I don’t think you understood. The way I calculate the order of complexity of an algorithm is this: I build the cost function and then find the function that asymptotically dominates it, following the approach taught in Ziviani’s book, by several other authors, and in some videos I found on the internet; but I have seen that there are some different approaches. Thanks all the same.
– Hatus Daniel
I prefer to use the one everyone uses, not the one you decided to use. But you can do as you wish. I helped where I could.
– Maniero
@Maniero ok, sorry if I stressed you out.
– Hatus Daniel