You’re trying to correlate theory and practice, and, as the joke goes, in theory they are the same; in practice, they are not.
Theory: using O() notation you study the behavior of the algorithm, i.e. how its cost* grows as a function of the input size. Note that O() notation will not give you a function with well-defined parameters (there is no way to derive the constant "K" from it); it only tells you whether the growth is polynomial, exponential, etc.
Note that the cost* is measured not in time but in "operations", and even what counts as an operation depends on how your problem is defined. For sorting algorithms, the operations counted are generally item swaps.
Remember that these algorithms commonly have distinct best and worst cases (e.g., the list is already sorted vs. sorted in descending order), and even this can affect your tests; see the sketch below.
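To make the "operations, not time" point concrete, here is a minimal sketch (my own illustration, not part of the original answer): a bubble sort instrumented to count swaps, run on its best case (already sorted) and worst case (sorted in descending order).

```java
import java.util.stream.IntStream;

public class SwapCounter {

    // Sorts a copy of the input with bubble sort and returns the swap count.
    static long bubbleSortSwaps(int[] input) {
        int[] a = input.clone();
        long swaps = 0;
        for (int i = 0; i < a.length - 1; i++) {
            for (int j = 0; j < a.length - 1 - i; j++) {
                if (a[j] > a[j + 1]) {
                    int tmp = a[j];
                    a[j] = a[j + 1];
                    a[j + 1] = tmp;
                    swaps++;
                }
            }
        }
        return swaps;
    }

    public static void main(String[] args) {
        int n = 1000;
        int[] best = IntStream.range(0, n).toArray();                  // already sorted
        int[] worst = IntStream.range(0, n).map(i -> n - i).toArray(); // descending order
        System.out.println("best case swaps:  " + bubbleSortSwaps(best));  // 0
        System.out.println("worst case swaps: " + bubbleSortSwaps(worst)); // n*(n-1)/2 = 499500
    }
}
```

The swap count is a property of the algorithm and the input alone; it is the same on any machine, which is exactly why theory counts operations instead of milliseconds.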
Practice: how long a program takes to run depends on several things.
Hardware factors: CPU, memory, etc.
System factors: which operating system, etc.
Technology factors: which language/compiler is used, etc.
Other factors: the load from other software running at the same time, such as anti-virus, firewall, etc.
To verify this, you run your program and measure how long it takes to finish processing a specific data set. If you plot these results for data sets of different sizes, the graph should roughly match the curve predicted by the theory (and you can even risk estimating the constants, the "K"s). Of course, several factors will keep this from being that simple, such as the program using a lot of memory and the OS starting to page.
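As a sketch of that measurement (my construction, not from the original answer; a serious benchmark would also need JIT warm-up and multiple repetitions): time the same quadratic sort for growing input sizes and check whether time / n² stays roughly constant. That ratio is an estimate of the hidden "K".

```java
import java.util.Random;

public class TimingSketch {

    static void bubbleSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++)
            for (int j = 0; j < a.length - 1 - i; j++)
                if (a[j] > a[j + 1]) { int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t; }
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        for (int n = 1_000; n <= 16_000; n *= 2) {
            int[] data = rnd.ints(n).toArray();
            long start = System.nanoTime();
            bubbleSort(data);
            long elapsed = System.nanoTime() - start;
            // For an O(n^2) algorithm, elapsed / n^2 should hover around a constant:
            // that constant is the "K" the notation hides.
            System.out.printf("n=%6d  time=%8d us  K ~ %.2f ns/op%n",
                    n, elapsed / 1_000, (double) elapsed / ((double) n * n));
        }
    }
}
```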
In addition, your program performs all sorts of operations, including OS calls and other details that the model does not capture.
Summary: there is no direct correlation between the theoretical cost of an algorithm calculated with big O and the time in ms of each operation. You may try to find a correlation for a particular scenario, but only under "laboratory conditions" and for a very specific environment.
Pedro, I don’t think things are that direct. Asymptotic complexity assumes that all operations cost the same (which is not true) and disregards pipelining, branch prediction, etc. It is not very difficult to find quadratic algorithms that are faster than their n log(n) counterparts for a whole range of input sizes. In practice, I think trying to estimate the actual running time of an algorithm as asymptotic complexity × average time per operation will not work very well. – Anthony Accioly
The only way is to write the code, build a data set, and test; even that test will only be valid under the specific conditions in which it runs. If the intention is to ask something else, edit the question to express it better. – Maniero
What I wanted to know, in the case I quoted, is: the expected cost would be O(1000000), so what does that represent in ms? And given that, how do I find an approximate running time for this algorithm in Java? – Pedro Rangel
And the answer is: you can’t. You can try to find a (bad) estimate for your program, in your language, on your computer, on your operating system, on your VM, under the current load of your system, with the current GC policies of the JVM, etc., etc. But there is no direct translation from O(n) forms into milliseconds. You will need to measure (running the algorithm thousands of times for various input sizes, trying to get an average value per operation). – Anthony Accioly
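A minimal sketch of the measurement Anthony describes (the code is mine, not his, and the numbers only hold for one specific machine, JVM, and load): repeat the run many times per input size, average the wall-clock time, and divide by the operation count the model predicts (n² for bubble sort) to get a rough time per operation.

```java
import java.util.Random;

public class AveragePerOperation {

    static void bubbleSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++)
            for (int j = 0; j < a.length - 1 - i; j++)
                if (a[j] > a[j + 1]) { int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t; }
    }

    public static void main(String[] args) {
        final int runs = 50;        // repeat to average out noise (GC, scheduler, JIT)
        Random rnd = new Random(1);
        for (int n : new int[]{2_000, 4_000, 8_000}) {
            int[] template = rnd.ints(n).toArray();
            long total = 0;
            for (int r = 0; r < runs; r++) {
                int[] copy = template.clone();   // fresh unsorted copy each run
                long start = System.nanoTime();
                bubbleSort(copy);
                total += System.nanoTime() - start;
            }
            double avgNs = (double) total / runs;
            double nsPerOp = avgNs / ((double) n * n);  // operations predicted by the n^2 model
            System.out.printf("n=%5d  avg=%.1f ms  ~%.3f ns/op%n", n, avgNs / 1e6, nsPerOp);
        }
    }
}
```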