Compiler Optimizations
One advantage of Java over ahead-of-time-compiled languages such as C++ is that the JIT (Just-In-Time) compiler can perform optimizations on the bytecode at the moment the code is executed. In addition, the Java compiler itself already performs various optimizations in the build phase.
These techniques allow, for example, transforming a method call into inline code within a loop, thus avoiding the overhead of repeatedly looking up the method in polymorphic calls.
Making a method call run inline means that the method's code is executed as if it had been written directly at the place where the method is called. Thus, there is no overhead from method lookup, stack frame allocation, or setting up a new variable context.
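As a rough illustration (the names below are invented for this example, not taken from any real API), inlining a small `static` method is conceptually equivalent to pasting its body at the call site:

```java
// Sketch of what JIT inlining conceptually does.
public class InlineDemo {
    // Small, static, no polymorphism: a good inlining candidate.
    static int square(int x) {
        return x * x;
    }

    static int sumOfSquares(int n) {
        int total = 0;
        for (int i = 0; i < n; i++) {
            // After inlining, this behaves as if written: total += i * i;
            // with no call overhead inside the hot loop.
            total += square(i);
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(1_000)); // prints 332833500
    }
}
```

The JIT applies this transparently at runtime; the sketch only shows why removing the call overhead matters inside a hot loop.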
Code that performs best
There’s an Oracle article entitled Performance Techniques that explicitly discusses "What code shapes does the JVM optimize best?".
Let’s see some points:
- Methods declared `static`, `private`, or `final` are easy to inline.
- Calls to virtual or interface methods are often inlined if the class hierarchy allows it. A dependency is registered in case a new class in the hierarchy is loaded later.
- If there are multiple calls to a virtual or interface method, the code will be optimized with an "optimistic check". If the concrete type changes from one call to another, the code is then "deoptimized" and falls back to a lookup in the virtual method table, as happens, for example, in a loop over a list of objects of different types. See here an article demonstrating the difference in performance.
- Calls to "monomorphic" types (i.e., not polymorphic) are easier to place inline.
- The first call to a polymorphic type can generate a inline cache, that will make consecutive calls to the same object faster.
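The monomorphic vs. polymorphic distinction above can be sketched in code. The `Shape` hierarchy here is invented purely for illustration: in the first list every receiver has the same concrete class, so the JIT can devirtualize and inline `area()`; in the second, the concrete type alternates, which can defeat the optimistic check:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical hierarchy to illustrate call-site shapes.
interface Shape {
    double area();
}

class Circle implements Shape {
    public double area() { return Math.PI; } // unit circle, for simplicity
}

class Square implements Shape {
    public double area() { return 1.0; }     // unit square
}

public class CallSiteDemo {
    static double totalArea(List<Shape> shapes) {
        double total = 0;
        for (Shape s : shapes) {
            // Monomorphic if every element is the same concrete class;
            // polymorphic (and harder to optimize) if types are mixed.
            total += s.area();
        }
        return total;
    }

    public static void main(String[] args) {
        List<Shape> mono = new ArrayList<>();
        for (int i = 0; i < 4; i++) mono.add(new Circle()); // one concrete type

        List<Shape> poly = new ArrayList<>();
        for (int i = 0; i < 2; i++) {
            poly.add(new Circle());                          // mixed types
            poly.add(new Square());
        }

        System.out.println(totalArea(mono)); // JIT can devirtualize/inline area()
        System.out.println(totalArea(poly)); // mixed types may trigger deoptimization
    }
}
```

Both loops compute the same kind of result; the difference is only in how well the JIT can optimize the `s.area()` call site.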
Premature optimization
Despite the "curiosities" above, I fully agree with @carlosfiqueira: premature optimization is bad; it is preferable to write readable code.
However, every rule has its exception.
It is important to remember that software development is always based on assumptions, and the final quality of the software depends on how correct those assumptions were. So, if the problem domain is studied correctly and it is known beforehand that a certain routine will need to process a large volume of data, it is possible to make certain architectural and implementation decisions to ensure performance without compromising good modeling.
However, this is only feasible when there are experienced, specialized developers able to write robust, performant code in their areas of expertise. It works more or less like this: first you write a system thinking about the model, business rules, and maintainability. Then you face several challenges, one of them being performance, and have to refactor the system. In the next project, the lessons learned help you make design decisions that will probably save you a lot of heartache.
It is also possible to optimize early when following the principle of "solve the problem first, code later", because static analysis of an algorithm gives an idea of the likely bottlenecks.
Of course, these early optimizations do not go down to such a low level as thinking about bytecode, polymorphic calls, and `final` methods. I do not believe it is possible to develop a fully optimized system from the start, because experience shows that, in every case, many of a software project's initial assumptions change throughout the project.
What I could say to synthesize my thinking is that experienced programmers, rather than trying to optimize too much, simply avoid writing bad code and making the silly mistakes that beginners usually make, and they test frequently instead of blindly trusting that their decisions are always right.
Conclusion
Writing good, readable, and performant code is not something you can achieve with a set of strict rules.
However, it is possible to optimize the code for specific situations.
In addition, experience (combining study and experimentation) makes the developer naturally produce code that is both higher quality and faster.
In most everyday systems, a profiler will show coarse processing bottlenecks at a much higher level than method calls, usually associated with data reads and writes (I/O); fixing those solves 80% of performance problems.
Very good, @utluiz. To top it off, I just suggest that you explain what inline means, because your link, besides pointing to the site's homepage, seems to lead to English-language content.
– Pablo Almeida
@Pablo Thanks! I made some adjustments. After more than a year, there is much that could be improved. :D
– utluiz