I was reading an article about the performance of streams versus loops, and was startled by the difference in performance between the two when processing large quantities of data.
I decided to perform a quick test using the following code:
import java.util.stream.IntStream;

public static void main(String[] args) {
    final int limite = 10_000;

    // measure the classic for loop
    long inicioFor = System.currentTimeMillis();
    for (int i = 0; i < limite; i++) {
        System.out.println(i);
    }
    long terminoFor = System.currentTimeMillis();

    // measure the equivalent IntStream pipeline
    long inicioStream = System.currentTimeMillis();
    IntStream.range(0, limite).forEach(System.out::println);
    long terminoStream = System.currentTimeMillis();

    System.out.println();
    System.out.println("Using for: " + (terminoFor - inicioFor));
    System.out.println("Using stream: " + (terminoStream - inicioStream));
}
It is relatively simple code, but I noticed something interesting:
- When limite is defined with the value 10_000, the resulting times are as follows:
  Using for: 54
  Using stream: 72
- When limite is defined with the value 1_000_000, the stream is relatively faster:
  Using for: 4314
  Using stream: 4202
I am aware that this is a simple test, without a proper benchmark or other metrics.
I would like to know why there is this difference in processing time between streams and loops. I may be wrong, but it seems that streams perform better the larger the amount of data (which points toward the topic of infinite streams).
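For reference, a more controlled measurement could look like the minimal sketch below. I am assuming JMH as the benchmark harness here; the class, method, and field names are only illustrative, and the println is replaced by a returned sum so that IO does not dominate the measurement:

import java.util.concurrent.TimeUnit;
import java.util.stream.IntStream;

import org.openjdk.jmh.annotations.*;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@State(Scope.Thread)
public class ForVsStreamBenchmark { // class and method names are illustrative

    @Param({"10000", "1000000"})
    int limite;

    // Returning the sum keeps the JIT from eliminating the loop as dead code.
    @Benchmark
    public long classicFor() {
        long soma = 0;
        for (int i = 0; i < limite; i++) {
            soma += i;
        }
        return soma;
    }

    @Benchmark
    public long intStream() {
        return IntStream.range(0, limite).asLongStream().sum();
    }
}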
Complementing with some information posted by @Victor, I got very different results after removing the IO operations (a sketch of such an IO-free variant follows the results below):
- When limite is defined with the value 10_000:
  Using for: 0
  Using stream: 36
- When limite is defined with the value 1_000_000:
  Using for: 2
  Using stream: 43
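The exact modification is not shown above, but an IO-free variant of the original test could look like this sketch; the accumulated sums are just cheap replacement work so the loops are not optimized away, and the variable names follow the original code:

import java.util.stream.IntStream;

public static void main(String[] args) {
    final int limite = 1_000_000;

    long inicioFor = System.currentTimeMillis();
    long somaFor = 0;
    for (int i = 0; i < limite; i++) {
        somaFor += i; // cheap work instead of println
    }
    long terminoFor = System.currentTimeMillis();

    long inicioStream = System.currentTimeMillis();
    long somaStream = IntStream.range(0, limite).asLongStream().sum();
    long terminoStream = System.currentTimeMillis();

    // printing only at the end, so IO no longer dominates the measured time
    System.out.println("Using for: " + (terminoFor - inicioFor) + " (sum " + somaFor + ")");
    System.out.println("Using stream: " + (terminoStream - inicioStream) + " (sum " + somaStream + ")");
}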
As also pointed out by @Maniero, the reason should be the high cost of getting the stream infrastructure up and running.
In fact the IO was affecting the result; testing the same example with the IO operation removed, it became clear even in my example that the stream is actually slower :)
– nullptr
Was it only a 20% difference in performance? I expected something more glaring
– Jefferson Quesado