Overall, the answers so far address measuring efficiency in practical terms.
I will answer in theoretical terms, since you really ask two questions: how to determine the complexity equations, and whether there is another method.
To answer the first question, I recommend starting with an introductory algorithms book that teaches Big-O notation.
The books I recommend are:
- Algorithms: Theory and Practice (Introduction to Algorithms, by Cormen et al., in its Brazilian edition) - internationally known; I personally only had access to the English version.
- Algorithms Unlocked - I have not read it, but it comes well recommended as a shorter take on the previous book, written by the same renowned author.
- Schaum’s Outline of Discrete Mathematics - I like this book because it is very direct, perhaps better suited for those who are not complete beginners. It is not focused on algorithms specifically, but covers several important areas of computer science (I personally find it even more accessible than "Algorithms: Theory and Practice" at certain points when it comes to algorithms).
I will keep the analysis of your two cases simple, not rigorous, without involving loop invariants, etc.
First the easiest to analyze, case 2.
This case consists of a single loop that repeats an O(1) operation. The number of repetitions is directly proportional to the function's input value: if the argument x is N, then the loop repeats N times. Apart from the loop, the body of the function contains only other O(1) operations:
int Y(int x) {                   // O(1)
    int soma = 0;                // O(1)
    //   O(1)   O(1)  O(1)
    for (int i = 0; i <= x; ++i)
        soma += i;               // O(1)
    // O() of the loop = O(x) == O(1) + x*O(1) + x*O(1) + x*O(1)
    return soma;                 // O(1)
}
// O() of the function = O(x) == O(1) + O(1) + O(x) + O(1)
So this function has O(N) time complexity. Note that it performs its calculation with only a fixed number of local variables, so it is O(1) in the space it uses (i.e., it does not need more memory as the input grows).
The first case is a recursive function, and the complexity analysis becomes recursive as well. There is a solid theory for this (recurrence relations), but I will stick to a simplistic explanation.
Note that all operations in the body of the function, except the recursive call itself, are O(1). Recursive calls occur until the termination condition is reached: for an input x = N, there are N calls. Since each call performs only O(1) work besides the recursive call, the time complexity of this function is also O(N).
The space complexity of this function differs from case 2. Because the result must still be aggregated (added) after each recursive call returns, the function is not tail-recursive, and therefore the compiler cannot optimize the recursion away with tail-call elimination.
Since tail-call optimization is not possible, each recursive call pushes a fixed amount of data onto the call stack (function arguments, return address, etc.). That is, for N calls, N stack frames of a fixed size are created, so the function is also O(N) in the space it uses.
Answering the second question: is there another method?
You now have two solutions to the problem: one is O(N) in time and O(N) in space, the other is O(N) in time and O(1) in space.
An alternative would be to look for a solution whose complexity does not depend on the input value at all. Is a solution that is O(1) in both time and space possible?
Note that:
- y = 1 + 2 + 3 ... + (x - 2) + (x - 1) + x
- y - x * x = (1 - x) + (2 - x) + (3 - x) ... + [(x - 2) - x] + [(x - 1) - x] + (x - x)
- -(y - x * x) = (x - 1) + (x - 2) + (x - 3) ... + 2 + 1 + 0
- x * x - y = 1 + 2... + (x - 3) + (x - 2) + (x - 1)
- x * x - y = y - x
- x * x + x = 2y
- y = x * (x + 1) / 2
From this formula (a basic derivation of the sum of the terms of an arithmetic progression), it is possible to obtain the sum directly from the input value with O(1) complexity, that is, without any loop.
Then the most efficient method would be:
int Z(int x) {                // O(1)
    return (x * (x + 1)) / 2; // O(1)
}
(If you’ve ever wondered where in life you would use the arithmetic and geometric progressions taught in school, there you go ;-) )
If you wanted to be even more efficient, with C++11 you could declare the function as constexpr.
constexpr int Z(int x) {
    return (x * (x + 1)) / 2;
}
Thus, most compilers that support C++11 will compute the sum at compile time, incurring no runtime calculation, when the argument passed is a constant. If the argument is not a constant (i.e., a runtime value), the function is as efficient as the inline version.
The second is nicer.
– Victor Martins
Could you explain better what you mean by "complexity equations"? Do you want to know which is the most efficient way?
– lsalamon