TL;DR
There is no magic number to estimate time spent testing, just as there is no magic solution to the software estimation problem.
Some time ago I saw a presentation by a testing specialist and, in short, the guidance on this subject was something like:
Use a magic number at first and then adjust the ratio according to your productivity and project quality level.
About software estimation
In my graduate studies, I wrote a monograph on software estimation. When I chose this topic, I believed I would find a magical method to determine how long a project’s activities would take. As soon as I actually started researching, I realized I had fallen for a great illusion.
One of the most interesting books I read on the subject was Software Estimation: Demystifying the Black Art, by Steve McConnell, whose title already says a lot about the essence of estimates: they are attempts to predict the future. Estimates are guesses.
The consequence is that there is not, and never will be, a definitive rule for estimating software development activities. In fact, "mathematical" estimation methods (COCOMO, Function Points) end up misleading their users, who come to believe that, because the method is mathematical and statistical, the result carries guaranteed accuracy.
This is why, for example, agile methodologies do not estimate in absolute values such as hours and days. Story points are relative quantities that can vary by team, project, and even individual developer.
So beware of the magical solutions people may try to sell you. Although some estimation techniques look better than others, no one can say definitively which method is best. Doing so would be like claiming to have a better method for playing the lottery than everyone else, when the result is random.
Is there a solution?
The technique most recommended by authors and estimation experts is to measure productivity.
At the individual level, this is one of the main goals of the PSP (Personal Software Process). For teams, one of the pillars of Scrum, inspection, should make it possible to monitor the team’s progress and productivity.
Although estimation, whether of tests or any other software development activity, is more a matter of intuition than a scientific process, estimates generally come closer to the future reality when they are based on historical data.
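To make this concrete, here is a minimal sketch in Python of what basing a test estimate on historical data could look like. The project numbers and the testing-to-development ratio are hypothetical illustrations, not data from the article; the point is only that the number comes from your own measurements and is reported as a range rather than a single magic value.

```python
# Minimal sketch (hypothetical numbers): deriving a test-time estimate
# from your own historical data instead of a "magic" ratio.

past_projects = [
    # (hours spent on development, hours spent on testing)
    (120, 45),
    (80, 38),
    (200, 95),
]

# Observed ratio of testing time to development time, per project
ratios = [test / dev for dev, test in past_projects]
avg_ratio = sum(ratios) / len(ratios)

# Report a range, not a single number: estimates are guesses, so expose the spread
low, high = min(ratios), max(ratios)

new_dev_estimate = 150  # hours of development estimated for the new project
print(f"Testing estimate: {new_dev_estimate * avg_ratio:.0f}h "
      f"(range {new_dev_estimate * low:.0f}h - {new_dev_estimate * high:.0f}h)")
```

The spread between the low and high ratios is itself useful information: a wide range is a sign that the historical data is still too thin (or too heterogeneous) to trust a single figure.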
Estimates vs. commitments
A common everyday mistake is to confuse an estimate with a commitment on the part of developers.
For example, imagine a company decides to estimate testing with a magic rule of 50% of development time, but the developers realize they are spending much more time than that writing tests. A common reaction is to speed up the pace, or to write fewer tests than planned, so as not to "delay the schedule", as if the initial estimate were an obligation to be met. The ideal would be to revise the estimate rather than try to force reality to fit it, but in practice...
Software managers often lack the firmness to make the staff wait for a good product (The Mythical Man-Month, 1975)
Quality is a determining factor
The quote above was taken from an article on the Iron Triangle that I wrote some time ago. The triangle is an important concept because it shows that quality is proportional to time.
This means that more quality requires more time. Therefore, the decision to invest in more or fewer tests at the start of the project will directly influence the final quality of the product.
Decreasing time spent testing without harming quality
The title may seem to contradict what I just said. But if we adopt the separation between essential and accidental difficulties of development, as Brooks does in No Silver Bullet, we can say that, although there is no way to skip tests without reducing quality, we can reduce the accidental difficulties of creating them.
This can be achieved in some ways:
- Training the team to improve productivity
- Using more suitable tools that facilitate the creation and execution of tests
- Investing in automation (see the sketch after this list)
- Using technologies (frameworks, platforms) that facilitate testing
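To illustrate the last two items, here is a minimal sketch using pytest (one possible tool; the article does not prescribe a specific framework, and the `apply_discount` function is a hypothetical example): a single parametrized test covers several cases, so extending coverage costs one extra line instead of another manual testing session.

```python
# Minimal sketch with pytest (an assumed choice of framework, not prescribed
# by the article): one parametrized test covers several cases, so extending
# coverage costs a single extra line rather than a new manual test.
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: applies a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


@pytest.mark.parametrize(
    "price, percent, expected",
    [
        (100.0, 0, 100.0),
        (100.0, 10, 90.0),
        (59.90, 50, 29.95),
    ],
)
def test_apply_discount(price, percent, expected):
    assert apply_discount(price, percent) == expected


def test_apply_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```

Running `pytest` on this file executes all four checks; adding a fifth scenario is one more tuple in the list, which is exactly the kind of accidental cost reduction the list above is about.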
Wouldn’t it also be interesting to ask how much time developers invested in testing in their past projects? The answers (so far) have been good, but they don’t clarify much for those without experience. It would be nice to encourage people to present a real number, not just the "ideal fraction of time". What do you think?
– José Filipe Lyra