Before simply giving the complexity of your example, let's first understand how it works and how to calculate it. So let's go!
To calculate the complexity of a method or algorithm we use Big O notation, or asymptotic complexity, which by definition is:
"Let f(n) and g(n) functions map integers (size of the
input) in real (execution time). We say f(n) is O(g(n)) if
there is a real constant c > 0 and an integer constant N0 1 such
that f(n) cg(n) for every integer n N0."
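As a quick worked example of the definition: f(n) = 5n + 3 is O(n), because choosing c = 8 and n0 = 1 gives 5n + 3 ≤ 8n for every integer n ≥ 1.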
Within asymptotic complexity we have several types of functions. Below they are ordered from best (constant) to worst (exponential):
- Constant: 1
- Logarithmic: log n
- Linear: n
- n-log-n: n log n
- Quadratic: n²
- Cubic: n³
- Polynomial: nᵏ
- Exponential: aⁿ
(The original answer includes an image showing the growth rates of these functions.)
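As a concrete illustration of the logarithmic class, which none of the examples below covers, here is a minimal sketch of binary search over a sorted array (the method name binarySearch is my own illustration, not from the original answer); the search range is halved at each step, so it runs in O(log n):

public static int binarySearch(int[] sorted, int target) {
    int low = 0;
    int high = sorted.length - 1;
    while (low <= high) {
        int mid = (low + high) / 2;      // constant-time work per iteration
        if (sorted[mid] == target)
            return mid;                  // found the target
        else if (sorted[mid] < target)
            low = mid + 1;               // discard the lower half
        else
            high = mid - 1;              // discard the upper half
    }
    return -1;                           // target not present
}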
Remarks:
- It is very important to understand the main types of complexity functions, because it is based on them that we will define the complexity of our algorithms. But enough preamble, let's get down to business.
- The O notation states that a function f(n) is "less than or equal to" another function g(n), up to a constant factor, as n grows to infinity.
- An algorithm A with complexity O(n²) will never have a running time above c·n², for a given input size n, even in the worst case.
Rules for defining asymptotic complexity:
Polynomial function: always consider the highest degree.
- 5n⁴ + 3n³ + 2n² + 4n + 1 is O(n⁴).
Constants and multiplicative factors are eliminated.
- 4n³ is O(n³) -> the constant does not matter; in the long run it is irrelevant.
Mixed function: always consider the term of greatest complexity.
- 5n² + 3n log n + 2n + 5 is O(n²). In this case the n² term grows much faster than the n log n term, so we drop the lower-order terms.
Always consider the simplest representation.
- 4n² + 2 log n is O(n²), which is definitely better than O(n² + log n).
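To connect these rules to code, here is a minimal sketch (the method mixedWork and its body are my own illustration, not from the original answer) whose operation count mixes several terms; by the rules above the quadratic term dominates, so the method is O(n²):

public static int mixedWork(int n) {
    int count = 0;
    // Nested loops: n * n iterations -> the quadratic term.
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            count++;
    // Single loop: n iterations -> the linear term.
    for (int i = 0; i < n; i++)
        count++;
    // A constant amount of extra work -> the constant term.
    count += 5;
    // Roughly n² + n + 5 operations in total, which is O(n²).
    return count;
}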
Now that we understand the rules, let's look at some examples (including yours).
Example 1:
public static int sumNumbers(int n1, int n2) {
int result = n1 + n2;
return result;
}
For Example 1 we have only constant-time operations throughout the method, so it has constant complexity, O(1).
Example 2:
// Returns true if there is no duplicate element in the array.
public static boolean unique1(int[] data) {
int n = data.length;
for (int j = 0; j < n - 1; j++)
for (int k = j + 1; k < n; k++)
if (data[j] == data[k])
return false;
return true;
}
For Example 2 we have a complexity of O(n²).
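To see where the n² comes from, count the comparisons in the worst case (no duplicates): the inner loop makes n - 1 comparisons when j = 0, n - 2 when j = 1, and so on, giving (n - 1) + (n - 2) + ... + 1 = n(n - 1)/2 comparisons, which is O(n²).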
Note: as a tip, whenever you have a for loop in your method, it will run n times, so you already have O(n) complexity. If you have one for loop nested inside another, you have O(n²), because you are running n * n iterations. However, always pay attention to the details: this is not guaranteed, and each method is its own case.
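For instance, here is a minimal sketch (the method nestedButConstantInner is my own illustration, not from the original answer) of nested for loops that are still only O(n), because the inner loop runs a fixed number of times regardless of n:

public static int nestedButConstantInner(int[] data) {
    int n = data.length;
    int sum = 0;
    for (int i = 0; i < n; i++) {       // outer loop: n iterations
        for (int k = 0; k < 10; k++) {  // inner loop: 10 iterations, independent of n
            sum += data[i];
        }
    }
    return sum;                         // about 10 * n operations, so O(n), not O(n²)
}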
For your exercise we have O(n): a single for loop that runs n times, with each iteration performing only constant-time operations.
Note: if you are in doubt about the operation Math.pow, know that it has complexity O(1).
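Since your code is not reproduced here, the following is only a hypothetical sketch of a method with that shape (one for loop whose body does constant-time work, including a call to Math.pow); the name sumOfSquares and its body are my own assumption, not your exercise:

public static double sumOfSquares(int n) {
    double total = 0;
    for (int i = 1; i <= n; i++) {      // the loop runs n times
        total += Math.pow(i, 2);        // Math.pow is O(1), so each iteration is constant time
    }
    return total;                       // n iterations * O(1) each = O(n)
}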
To put this answer together I based myself on the book Data Structures & Algorithms in Java, by Goodrich and Tamassia. I recommend reading it.