An analogy
When a teacher assigns his students a certain task, what is one of the most important questions to ask?
"What is the deadline?"
The deadline can be immediate, for the next day, or only at the end of the semester, right?
We can use this analogy to understand what level of optimization we should apply to a given feature, or even to the software as a whole.
The non-functional requirements of a system should specify the performance expectations for its critical features.
For example:
The system must support simultaneous access by up to 1,000 users.
Or:
Response time should not exceed 5 seconds for up to 10,000 users.
Based on these characteristics (which are the subject of another study), together with our experience and perhaps a prototype, we can then decide how much to invest in optimization from an early stage, although in that case I would not exactly call it premature.
The problems
Micro-optimizations are a mistake
The main problem with premature optimizations is that the developer assumes he is making the code faster when, in fact, he does not even have evidence that this will actually happen in the end.
Much has been discussed here on SOpt about micro-benchmarks and small differences between statements, just to cite one example. In the linked discussion, it is stated that multiplication is faster than division in JavaScript. Whoever always takes this as truth will eventually discover that, in some browsers, the opposite may be true, depending on the circumstances.
And we have not even started talking about interpreters, JITs, caches and other dynamic optimization mechanisms that interfere with all of this.
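To illustrate why such comparisons are fragile, here is a minimal micro-benchmark sketch (the code and the numbers are mine, not taken from the linked discussion); it compares multiplication and division, but its results vary between engines and runs, and the JIT may well compile both expressions to equivalent machine code:

    // Naive micro-benchmark sketch: comparing x * 0.5 with x / 2.
    // The warm-up loop and the "sink" accumulator are there so the JIT
    // does not skew or eliminate the very work we are trying to measure.
    function measure(label, fn) {
      for (let i = 0; i < 1e5; i++) fn(i); // warm-up: let the JIT compile fn

      let sink = 0;                        // keep the results so the loop is not dead code
      const start = Date.now();
      for (let i = 0; i < 1e7; i++) sink += fn(i);
      console.log(label + ": " + (Date.now() - start) + " ms (sink: " + sink + ")");
    }

    measure("multiplication", x => x * 0.5);
    measure("division", x => x / 2);

Run it a few times in different browsers and the "winner" will often change, which is exactly the point.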
Therefore, virtually every micro-optimization is doomed to fail sooner or later: if something can be done automatically, it may well be automated in the next version of the compiler or interpreter.
On the other hand, still on the JavaScript side, there are projects that demand high performance. One example is tracking.js, which implements real-time computer vision, where every processing cycle counts.
In cases like this, such micro-optimizations are welcome, but they are rarely achieved in a first implementation; on the contrary, practical tests will show which optimizations actually improve performance.
Requirements change
Also, as we all know, requirements are extremely changeable, as is the opinion of the system's end user.
So another big problem is that premature optimizations throw away the time invested in them and, consequently, the money.
Real effect
Another point is that many premature optimizations are practically useless. For example, using a byte field instead of an int in the database may seem like an interesting "optimization" during early modeling, but in the end, if the system does a select *, the gain will be practically nil.
If we apply the Pareto principle here, we could say that 80% of the performance problems lie in at most 20% of the code. This means that, most likely, we can achieve adequate performance for an ordinary system by focusing only on the priority features. Beyond that, we reach a point where the effort to improve becomes so great that it is simply not worth it.
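In practice, finding that 20% starts with measuring instead of guessing. A minimal sketch of that idea follows (the two functions are hypothetical stand-ins, not from any particular system):

    // Hypothetical stand-ins for two parts of a feature; in a real system
    // these would be the actual functions under suspicion.
    function loadReportData() {
      const rows = [];
      for (let i = 0; i < 100000; i++) rows.push({ id: i, value: Math.sqrt(i) });
      return rows;
    }

    function summarizeReport(rows) {
      return rows.reduce((total, row) => total + row.value, 0);
    }

    // Measure where the time actually goes before optimizing anything.
    console.time("load");
    const rows = loadReportData();
    console.timeEnd("load");

    console.time("summarize");
    const total = summarizeReport(rows);
    console.timeEnd("summarize");
    console.log("total:", total);

Whichever step dominates is the candidate for optimization; the rest can stay simple.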
Conclusion
We must avoid labels.
Optimization is always welcome when we know what we’re doing and we have a reason for it.
I could say that a premature optimization is a thoughtless optimization, or even an unnecessary one, that brings more harm than good.
So, finally answering the main question: an optimization turns out to be bad when it gets in the way more than it helps.
Relevant: http://www.xkcd.com/1319/
– Oralista de Sistemas