Can two differently sized parameters of a function classify it as having O(n²) complexity?


8

Let’s say I define a function as follows, which accepts an argument a, a two-dimensional array of numbers:

function totalSum(a) {
  let total = 0;
  for (const elem of a) {
    for (const subElem of elem) {
      total += subElem;
    }
  }
  return total;
}

// Example input:
totalSum([[1, 2], [3, 4]]); // 10

It can be classified as having complexity O(n²), right?


Now consider another function, which takes two arguments, a and b, both one-dimensional arrays:

function allIncludes(a, b) {
  let allExists = null;

  for (const aElem of a) {
    // Note that `includes` below will iterate over each element of `b`
    // until the end, or until some element equals `aElem`.
    // A complexity close to `O(n)`.
    if (includes(b, aElem)) {
      // If `allExists` is already `true`, it stays `true`.
      // If it is `false`, it is not reassigned.
      if (allExists === null) {
        allExists = true;
      }
    } else {
      allExists = false;
    }
  }

  return allExists;
}

// Ignore:
function includes(arr, elem) {
  for (const el of arr) {
    if (el === elem) {
      return true;
    }
  }

  return false;
}

// Example input:
allIncludes([1, 2, 3], [3, 2, 1]); // true
allIncludes([1, 2, 3], [1, 2, 4]); // false

Can this last function be classified as having complexity O(n²)? I ask because, unlike the first function in this question, allIncludes() accepts two arguments, which may have different sizes.

I personally consider this a case of "almost O(n²)" or "a little more than O(n²)", but surely this informal label of mine is not the correct one.

  • I don’t know if the title of the question is the most ideal for this case. If anyone wants to suggest a better title, I would be grateful. :)

  • Does this answer your question? How to improve my code performance with "for"?

  • @Jeffersonquesado, I don’t think so. My question is not about improving the algorithm; I wondered whether the arguments of a function could alter its complexity in the examples above, not how to improve them... The answers to the other question do add a little to this one, but I don’t believe they are duplicates...

  • In Inkei’s question, the problem was the author’s perception of cubic complexity; here it is the appearance of quadratic complexity. How do the questions differ? In my view, the answers to Inkei’s question answer yours; Maniero’s answer there even gave me déjà vu when I saw the one above.

3 answers

11


It’s enough to say it is not O(n²) for a basic reason: if you have two dimensions and each can have a different size (yes, that is possible), using a single n doesn’t make much sense.

In fact, the n there is not as obvious as the statement makes it seem. The real n is m times n, where m is the number of elements in one dimension and n the number of elements in the other. Or we can simply define n as the total number of elements in the matrix. So, measured by the total number of elements, the complexity is O(n). The multiplication I mentioned is just one way to find the value of n; you could obtain that total some other way without doing any arithmetic.

Complexity should be measured by the total number of elements, not by how that data is organized in memory. This data structure has only n elements.

The algorithm runs over this total, and each element is evaluated exactly once, so the complexity is linear.

When the two dimensions have the same size, the multiplication gives n times n, i.e. n², but the count n² is different from the complexity class O(n²). In this specific case of equal-sized dimensions, and only for this square matrix, O(n²) in the dimension size is the same as O(n) in the total number of elements.

That is why the correct thing is to state the complexity as O(n). O(n²) would only work for a square matrix, that is, it would depend on an input shape you cannot guarantee.

Note that adding a counter of evaluations yields exactly the total number of elements in the matrix:

function totalSum(a) {
  let total = 0;
  let contador = 0;
  for (const elem of a) {
    for (const subElem of elem) {
      total += subElem;
      contador++;
    }
  }
  console.log(contador);
  return total;
}

// Example input:
totalSum([[1, 2], [3, 4]]); // 10

I put it on GitHub for future reference.

The second is an algorithm fundamentally different from the first, no matter the input (the input matters in the sense that the algorithm deals with different things, but the input itself does not directly change the complexity).

It works a little differently and is a little more complicated; in most cases it ends up doing less work than its worst case.

Now, here the number of evaluations depends on the values encountered. You can still calculate the complexity, but the count does not fit the patterns we use in most cases; it is given by a more complex formula.

Recalling that big-O complexity measures the worst case, we can say that the second algorithm is O(n²), since there are inputs that can make it quadratic.

let contador = 0;

function allIncludes(a, b) {
    let allExists = null;
    for (const aElem of a) {
        // Note that `includes` below will iterate over each element of `b`
        // until the end, or until some element equals `aElem`.
        // A complexity close to `O(n)`.
        if (includes(b, aElem)) {
            // If `allExists` is already `true`, it stays `true`.
            // If it is `false`, it is not reassigned.
            if (allExists === null) {
                allExists = true;
            }
        } else {
            allExists = false;
        }
    }
    console.log(contador);
    contador = 0;
    return allExists;
}

// Ignore:
function includes(arr, elem) {
    for (const el of arr) {
        if (el === elem) {
            return true;
        }
        contador++;
    }
    return false;
}

// Example input:
allIncludes([1, 2, 3], [3, 2, 1]); // true
allIncludes([1, 2, 3], [1, 2, 4]); // false

I put it on GitHub for future reference.

10

The asymptotic complexity of an algorithm refers to the number of iterations (at minimum or at maximum) performed as a function of the size of the algorithm's inputs. Moreover, this analysis is concerned with large input sizes (hence the term "asymptotic"). For example, if your algorithm receives n elements and performs some operation for each element, then the algorithm is linear, i.e. bounded above by O(n). Your first function has linear complexity, since each element is iterated over only once.

In your second case, your algorithm traverses the array b once for each element of the array a. It looks like this:

for each element x in a:
    for each element y in b:
        do something with x and y

To count how many iterations this algorithm makes, assume that n is the number of elements in the array a, and m is the number of elements in the array b. Since we traverse the array b for each element of a iterated, we get n times m iterations. Therefore, the asymptotic complexity of this algorithm is O(n · m). This is a polynomial complexity and, bounding both sizes by the larger one, it can be simplified to the upper bound O(n²). The important thing is to note that the elements of the array b are iterated over n times.

Your second case, then, is considered polynomial, because if you count the total number of iterations performed, you arrive at the same complexity O(n · m). Look at it this way: allIncludes calls includes for each element of the array a, and includes traverses the whole array b. The function includes is linear in the size of the array it receives, but since it is called for each element of a inside allIncludes, the final complexity turns out to be polynomial.
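As a rough check of this count (a sketch, not part of the original answer; the function name and inputs are illustrative), consider the worst case where no element of a occurs in b: the early exit never fires, and the total number of inner steps is exactly n · m:

```javascript
// Count the inner iterations of the nested-loop pattern described above.
function countIterations(a, b) {
  let iterations = 0;
  for (const x of a) {
    for (const y of b) {
      iterations++;            // one "do something with x and y" step
      if (y === x) break;      // early exit, like `includes` finding a match
    }
  }
  return iterations;
}

// Worst case: nothing matches, so every pass scans all of `b`.
const a = [10, 20, 30];        // n = 3 elements
const b = [1, 2, 3, 4];        // m = 4 elements, none occurring in `a`
console.log(countIterations(a, b)); // 12, i.e. n * m
```

With matching inputs the count drops below n · m, but big-O describes the worst case, which is what the n · m analysis captures.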

5

First algorithm

It can be classified as having complexity O(n²), right?

The first thing you should do is say what your n refers to. Is it the number of rows in the matrix? The number of columns? The total number of elements?

It is O(n), with n being the total number of elements in the matrix.

To demonstrate, I’ll modify your code:

function totalSum(a) {
  let total = 0;
  for (const elem of a) {
    for (const subElem of elem) {
      total += 1;
    }
  }
  return total;
}

Note that inside the loops I made it always add 1, so the return value of the code is the total number of iterations.

Run this code a few times with several different matrices. You should notice that the returned value is always identical to the number of elements in the matrix. That is, the number of iterations performed is exactly the number of elements in the matrix. This makes the algorithm in question O(n), where n is the total number of elements in the matrix.
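For example (a self-contained sketch repeating the modified function above), even a ragged, non-square matrix yields exactly one iteration per element:

```javascript
// Modified totalSum from above: returns the number of iterations performed.
function totalSum(a) {
  let total = 0;
  for (const elem of a) {
    for (const subElem of elem) {
      total += 1; // count one iteration per element visited
    }
  }
  return total;
}

// A ragged 3-row matrix with 6 elements in total:
console.log(totalSum([[1, 2, 3], [4], [5, 6]])); // 6
// A 2x2 square matrix with 4 elements:
console.log(totalSum([[1, 2], [3, 4]]));         // 4
```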

Note that it is not wrong to say that this algorithm is O(n²): every function f(x) that belongs to O(n) also belongs to O(n²), since O(n) is contained in O(n²). However, despite not being technically wrong, asymptotic analysis usually states the tightest possible bound; saying the algorithm above is O(n²) suggests that it is not O(n), which it is.


Second algorithm

The analysis may become easier if you say the algorithm's behavior out loud:

For each element in the first array, iterate over the elements of the second array.

Note that the fact that the algorithm was split into two functions is irrelevant to the analysis, and so is the conditional inside includes, because big-O analysis considers the worst case.

Thus, I hope it is trivial to conclude that the algorithm belongs to O(n · m), where n and m are the numbers of elements in each of the arrays.

If you want to simplify the conclusion of your analysis even further, you can simply consider both arrays to be the same size, making the algorithm O(n²).


I’ve answered other questions about asymptotic analysis, so you might be interested:

  • The matrix grows polynomially, as there is no case where a matrix row has fewer values than its dimension. So much so that the spatial complexity of a matrix is polynomial.

  • @Márioferoldi A square matrix grows polynomially relative to the number of rows or columns (if it has 3 columns it has 9 elements; with 4 columns, 16 elements; etc.). My analysis of the algorithm uses the number of elements in the matrix, not the number of rows or columns. The algorithm is clearly linear in the number of elements in the matrix.

  • @Márioferoldi Complementing: perhaps the fact that there is one loop inside the other is confusing, but make no mistake, the fact that the algorithm receives a matrix is irrelevant. It could be a one-dimensional array; it does not matter how the data is organized in memory. What matters is that each element is visited only once, and that is what makes the algorithm linear.

  • Right, that makes sense.

  • @Gabriel, asymptotic complexity analysis is done over the input parameters. For example, take the sum of "numbers" represented as strings of digits in an arbitrary base. Should I consider it a linear or a logarithmic algorithm? The answer is: it depends on the reference. If it is relative to the number N passed in, then it is logarithmic, O(log N); but if it is relative to the number of digits of that number, len, then it is linear, O(len).

  • These are codependent variables, so changing the value of one implies changing the value of the other. To know which analysis to take into account, one must understand what is being measured. There are many cases where knowing that the complexity is logarithmic is more relevant than linear, and other cases where the opposite is true.
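The relationship between the two measures in that comment can be sketched as follows (illustrative code, not from the comment): in base 10, the digit count len of a positive integer N is floor(log10(N)) + 1, so an algorithm linear in len is logarithmic in N, and vice versa:

```javascript
// Digit count of a positive integer in base 10.
// `len` grows logarithmically with the value `n`.
function digitCount(n) {
  return String(n).length;
}

console.log(digitCount(9));                    // 1 digit
console.log(digitCount(1000));                 // 4 digits
console.log(Math.floor(Math.log10(1000)) + 1); // also 4: the codependency
```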

