Which code has the highest cost? (Bubblesort in C)


void bubblesort(int *A, int n) {
    int i, aux;
    for(i=0; i<n-1; i++){
        if(A[i]>A[i+1]){
            aux = A[i];
            A[i] = A[i+1];
            A[i+1] = aux;
        }
    }
    if(n-1>1)
        bubblesort(A, n-1);
}
void bubblesortI(int *A, int n) {
    int i, aux, j; 
    int troca=0;                //1
    for(i=0; i<n-1; i++){       //Worst case = 2n        Best case = 1
        for(j=0;j<n-i-1; j++){  //Worst case = n²+n-2    Best case = 2n
            if(A[j]>A[j+1]){    //Worst case = (n²+n)/2  Best case = n-1
                aux = A[j];     //Worst case = (n²+n)/2  Best case = 0
                A[j] = A[j+1];  //Worst case = (n²+n)/2  Best case = 0
                A[j+1] = aux;   //Worst case = (n²+n)/2  Best case = 0
                troca = 1;      //Worst case = (n²+n)/2  Best case = 0
            }
        }
        if(!troca)              //Worst case = n-1       Best case = 1
            break;
    }
    //Worst case: O(n²)  Best case: O(n)
}

I’m studying sorting and complexity. I wrote both the recursive and the iterative bubble sort and wanted to determine their order of growth, but I can’t figure out how to calculate the complexity of recursive code.

Next to each line of the iterative code is its cost.

PS: If anyone has a tip for calculating the cost and complexity of recursive algorithms, I’d appreciate it.

  • The recursive version hides a memory cost that your calculations don’t account for: while the iterative version uses O(1) memory, the recursive one uses O(n) stack space. It’s also worth noting that a function call is not of the same order of magnitude as a mathematical operation; it is usually much heavier.

  • Moreover, I can’t see how a simple assignment can have quadratic cost in the worst case and 0 in the best; looked at in isolation, it should be 1 in either case. The same goes for a comparison. Anyway, measuring complexity is not measuring speed. Speed involves a huge number of factors, including the conditions at execution time. If you want to know the speed, run both and check; change the conditions under which you run them and the speed can change. If you want to know the complexity, unless I’ve missed something, both have the same complexity.

  • @Maniero I think he’s considering that in the best case that code is never executed (which is right, btw)

  • @Maniero as LINQ said, in the best case that part of the code is not executed and therefore has zero cost; in the worst case it is executed every time, so it costs (n²+n)/2, since it performs the assignment every time, and the same goes for the if.

  • @Jeffersonquesado I don’t know if I’m right, but isn’t the cost of the recursive algorithm its cost per call times the number of times it’s called?

  • @Hatusdaniel that would be true if the cost were homogeneous. I might have a sorting algorithm that does O(n log n) reads and O(n log n) swaps, yet it could be less efficient than an algorithm that does O(n²) reads but only O(n) swaps: if the cost of a swap is much greater than the cost of a read, the second algorithm’s dominant cost is linear while the first’s is linearithmic. So yes, in complexity analysis you need to take the heterogeneity of operation costs into account.

  • @LINQ but he’s analyzing line by line and not the algorithm as a whole. That’s not how you measure complexity; you measure it atomically. If you’re going to do the analysis per line, it has to cover only what is done on that line. If you’re going to analyze the loop, it has to be analyzed as a whole and not line by line.

  • @Maniero I don’t think you understood: what I calculate line by line is the total cost, i.e. the cost of the line times how many times it is executed. That helps when building the cost function, because then I just add up the lines I need. For example, the swap cost function would be CT(n) = (cost of line x) + (cost of line y) and so on. Then, with the swap cost function in hand, I can compute the order of the swaps.

  • @Jeffersonquesado I see what you’re saying: there is a complexity for the swaps and a complexity for the reads, separately. But the algorithm as a whole would still have a single complexity, no?

  • From the comment directed at me I think you haven’t understood how to calculate complexity; from the comment directed at Jeffersonquesado you have, but you’re mixing the two approaches.

  • @Maniero, I’m sorry, man, but I don’t think you understood. The way I calculate the order of complexity of an algorithm is this: I take the cost function and then find the function that asymptotically dominates it, following the method taught in Ziviani’s book and by several other authors and some videos I found on the internet. I know there are other approaches, but thanks all the same.

  • I prefer to use the method everyone uses, not the one you decided to use. But you can do as you wish. I helped where I could.

  • @Maniero ok, sorry if I stressed you out.


1 answer


In the iterative case, you calculated correctly.

From what I understand from the comments, you want to evaluate the asymptotic complexity of your algorithms. Asymptotic complexity analysis deals only with the algorithm itself, not with the details of the language in which it is implemented. In fact, if we were to consider everything, we would also have to take into account the operating system, the hardware, and countless other factors that weigh on the cost of your code.

To avoid all that trouble, asymptotic analysis considers only how many times each line of code will be executed, and at the end we keep only the factor of highest degree (in the case of bubble sort, the n² term). If you want to know more about this, search for Big-O notation on Google.

It is easy to calculate the asymptotic complexity of an iterative algorithm, but for recursive algorithms you will have to deduce the complexity by analyzing the recursion. Considering your algorithm:

void bubblesort(int *A, int n) {
 1      int i, aux;
 2      for(i=0; i<n-1; i++){
 3          if(A[i]>A[i+1]){
 4              aux = A[i];
 5              A[i] = A[i+1];
 6              A[i+1] = aux;
 7          }
 8      }
 9      if(n-1>1)
10          bubblesort(A, n-1);
}
  • At first glance, we can already see that the for loop on line 2 always runs n-1 times. Consequently, any code inside that loop is also executed up to n-1 times. Since there is no nested loop and no other function call or control flow worth taking into account, the complexity of this snippet (lines 2 to 6) is O(n).
  • For the recursive call, you need to analyze the stopping condition together with the recursive step. We decrease n by one at each recursive call, and the recursion continues while n-1 > 1 (that is, while n > 2), so the function makes n - 2 recursive calls. Again, we take only the n into consideration.
  • In total, the complexity of your code is O(n²): each call does O(n) work in its loop, and there are O(n) calls.

Now let’s analyze another piece of code:

int funcaoRecursiva(int n)
{
    if (n <= 0)
        return 1;
    else
        return 1 + funcaoRecursiva(n/2);
}

The complexity of this code is O(log n). Why? Because the function stops when n <= 0, and n is halved at each recursive call. The number of calls therefore follows the recurrence T(n) = T(n/2) + 1, which resolves to roughly log₂ n steps.

I hope I’ve helped clear up your doubt.

  • Very good explanation, man; I hadn’t noticed this difference between the analyses. In fact I was asked to do an asymptotic analysis of some code on an exam and I was lost, but luckily I got it right haha. Thank you very much.
