Java: performance of classes full of methods but without fields

13

Suppose I create a class whose goal is to group a number of related methods: say a class Boi {} that has several methods but no fields (attributes).

For example:

public class Ruminante {
      public String mugir(Animal animal) {
            return "Muuu";
      }
      public void comer(Mato grama) {
            // process the grass
      }
      public void darCabecada(Animal animal) {

      }
      public void darCoice(Animal animal) {

      }
}

And when I want to call it, I do:

public class Principal {
      public static void main(String[] args) {
            Boi boi = new Boi();
            Ruminante r = new Ruminante();
            r.mugir(boi);
      }
}

What is the performance cost? Are there other disadvantages in doing it this way? Is it much worse than if everything in the Ruminante class were static?
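For reference, the static version I have in mind would look something like this (a rough sketch using the same Animal and Mato types as above; the RuminanteEstatico name is just illustrative):

public final class RuminanteEstatico {
      private RuminanteEstatico() {
            // utility class: no instances needed
      }
      public static String mugir(Animal animal) {
            return "Muuu";
      }
      public static void comer(Mato grama) {
            // process the grass
      }
}

// The call would then simply be:
// String som = RuminanteEstatico.mugir(boi);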

2 answers

9


Java instance methods are virtual by default. This means that (except for static methods or methods marked final) whenever one of them is called there is a lookup in the object's virtual method table to determine whether the object is actually of class Ruminante (and not of one of its subclasses), and, depending on the object's class, the pointer to the function to be called is obtained. If the method is marked static or final, the binding is done statically, i.e., the virtual-table lookup is not required and the code is "more efficient".
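To make that concrete, here is a minimal, hypothetical example (the method names are illustrative, not from the question): som() is virtual because a subclass such as Boi can override it, while nome() and reino() can be bound statically.

class Animal {
    // Virtual by default: the JVM resolves the target through the object's
    // method table, because a subclass may override it.
    String som() {
        return "...";
    }

    // final: cannot be overridden, so the call target is known in advance
    // and no virtual-table lookup is needed.
    final String nome() {
        return "Animal";
    }

    // static: belongs to the class and is resolved without any receiver object.
    static String reino() {
        return "Animalia";
    }
}

class Boi extends Animal {
    @Override
    String som() {
        return "Muuu"; // this override is why som() needs dynamic dispatch
    }
}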

Now, having said all of the above, for the vast majority of applications this "performance loss" will not make any difference. Unless you are running the method inside a very large loop, with no other calls in it, the loss of performance will be insignificant. In the vast majority of cases it is better to write code that makes sense (that someone can understand and maintain easily) than to think about early optimizations. Or, as Donald Knuth put it: "Premature optimization is the root of all evil."

9

Compiler optimizations

One advantage of Java over ahead-of-time compiled languages such as C++ is that the JIT (Just-In-Time) compiler can perform optimizations on the bytecode at the moment the code is executed. In addition, the Java compiler itself is prepared to perform various optimizations during the build phase.

These techniques allow, for example, transforming a method call into inline code within a loop, thus avoiding the repeated overhead of method lookup in polymorphic calls.

Inlining a method call means that the method's code is executed as if it had been written directly at the place where the method is called. Thus there is no overhead of looking up the method to be executed, of memory allocation, or of a new variable context.
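A rough sketch of the idea, purely illustrative (the JIT performs this on compiled code at runtime, not on your source, and the Quadrado class here is hypothetical):

class Quadrado {
    // A small, frequently called method: a good inlining candidate.
    static int quadrado(int x) {
        return x * x;
    }

    // What the programmer writes: a call inside a hot loop.
    static long somaComChamada(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            total += quadrado(i);
        }
        return total;
    }

    // Roughly how the JIT-compiled code behaves after inlining:
    // the method body replaces the call site, so there is no call overhead.
    static long somaInline(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            total += i * i;
        }
        return total;
    }
}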

Code that performs best

There is an Oracle article entitled Performance Techniques that explicitly addresses the question "What code shapes does the JVM optimize best?".

Let’s see some points:

  • Static, private, and final methods are easy to inline.
  • Calls to virtual or interface methods are often inlined if the class hierarchy allows it. A dependency is registered in case a new class is later loaded into the hierarchy.
  • If there are multiple calls to a virtual or interface method, the code is optimized with an "optimistic check". If the concrete type changes from one call to another, the code is "deoptimized" and falls back to the lookup in the virtual method table; this happens, for example, in a loop over a list of objects of different types (see the sketch after this list). See here an article demonstrating the difference in performance.
  • Calls on "monomorphic" (i.e., not polymorphic) types are easier to inline.
  • The first call on a polymorphic type can generate an inline cache, which makes subsequent calls on the same object faster.
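The sketch below (a self-contained, hypothetical Animal/Boi/Cabra hierarchy, not the exact classes from the question) illustrates the two situations above: the first loop only ever sees one concrete type (a monomorphic call site), while the second alternates types, which can force the JIT to undo the optimistic inlining:

import java.util.ArrayList;
import java.util.List;

public class ChamadasPolimorficas {

    // Hypothetical hierarchy used only in this sketch.
    static class Animal { String som() { return "..."; } }
    static class Boi extends Animal { @Override String som() { return "Muuu"; } }
    static class Cabra extends Animal { @Override String som() { return "Mee"; } }

    public static void main(String[] args) {
        // Monomorphic call site: every element is a Boi, so after an
        // "optimistic check" the JIT can inline Boi.som() directly.
        List<Animal> sohBois = new ArrayList<>();
        for (int i = 0; i < 100_000; i++) {
            sohBois.add(new Boi());
        }
        for (Animal a : sohBois) {
            a.som();
        }

        // Polymorphic call site: the concrete type alternates between calls,
        // which may trigger deoptimization and fall back to the
        // virtual-table lookup.
        List<Animal> misturados = new ArrayList<>();
        for (int i = 0; i < 100_000; i++) {
            misturados.add(i % 2 == 0 ? new Boi() : new Cabra());
        }
        for (Animal a : misturados) {
            a.som();
        }
    }
}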

Premature optimization

Despite the "curiosities" above, I fully agree with @carlosfiqueira: premature optimization is bad; it is preferable to write readable code.

However, every rule has its exception.

It is important to remember that software development is always based on assumptions, and the final quality of the software depends on how correct those assumptions were. So, if the problem domain is studied correctly and it is known beforehand that a certain routine will need to process a large volume of data, it is possible to make certain architectural and implementation decisions that ensure performance without inhibiting good modeling.

However, this is only feasible when there are experienced and specialized developers, able to write robust and performant code in their areas of expertise. It works more or less like this: first you write a system thinking about the model, the business rules and maintainability. Then you face several challenges, one of them being performance, and end up having to refactor the system. In the next project, the lessons learned will help you make design decisions that will probably save you a lot of heartache.

It is also possible to optimize early when the principle of "solve the problem first, code later" is followed, because static analysis of an algorithm can give an idea of the likely bottlenecks.

Of course, early optimizations do not go down to such a low level as thinking about bytecode, polymorphic calls and final methods. I do not believe it is possible to develop a fully optimized system on the first attempt, because experience shows that in every case many of a project's initial assumptions change along the way.

What I could say to synthesize my thinking is that experienced programmers, rather than trying to optimize too much, simply avoid writing bad code and making the silly mistakes that beginners usually make, and they test frequently rather than blindly trusting that their decisions are always right.
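Just to reinforce the "measure instead of guessing" point, here is a minimal and admittedly naive timing sketch (assuming Boi extends Animal as in the question's example; for serious measurements prefer a profiler or a harness such as JMH, since JIT warmup and dead-code elimination easily distort numbers like these):

public class MedicaoSimples {
    public static void main(String[] args) {
        Ruminante r = new Ruminante();
        Boi boi = new Boi();

        long inicio = System.nanoTime();
        for (int i = 0; i < 10_000_000; i++) {
            r.mugir(boi); // instance (virtual) call in a hot loop
        }
        long duracao = System.nanoTime() - inicio;

        System.out.println("10M calls: " + (duracao / 1_000_000) + " ms");
    }
}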

Conclusion

Writing good, readable and performant code is not something you can achieve with a set of strict rules.

However, it is possible to optimize the code for specific situations.

In addition, experience (combining study and experimentation) makes the developer naturally produce higher-quality, faster code.

In most everyday systems, a profiler will certainly show coarse processing bottlenecks at a much higher level than method calls, usually associated with reading and writing data (I/O); solving those typically addresses 80% of the performance problems.

  • 1

    Very good, @utluiz. To top it off, I just suggest that you explain what inline means, because your link, besides pointing to the site's homepage, looks like it would lead to English-language content.

  • 2

    @Pablo Thanks! I made some adjustments. After more than a year there is a lot that could be improved. :D
