Time spent developing tests


I’m afraid I’m worrying too much about testing: I’m currently spending about 40% of my project’s time just writing tests.

I know that the more time I spend on tests, the less I will spend fixing bugs.

But just as the ABC curve exists to identify the items with the greatest impact in economic research and business strategy, perhaps there is an analogous study in software development that calculates the ideal amount of time to spend on tests (since, as the ABC curve points out, beyond a certain point a test costs more than the code it tests).

Question:

What is the ideal fraction of a project’s time to reserve for developing tests?

  • 2

    Wouldn’t it be interesting to also ask how much time answerers invested in testing their past projects? The answers (so far) are good, but they don’t clarify much for those without experience. It would be nice to encourage answerers to present a real number, not just an "ideal fraction of time". What do you think?

4 answers

28


TL;DR

There is no magic number to estimate time spent testing, just as there is no magic solution to the software estimation problem.

Some time ago I saw a presentation by a testing specialist and, in short, the guidance on this subject was roughly:

Use a magic number at first and then adjust the ratio according to your productivity and project quality level.

About software estimation

In graduate school, I wrote a monograph on software estimation. When I chose this topic, I believed I would find a magical method to determine the duration of a project’s activities. As soon as I began to research in earnest, I obviously realized I had fallen for a great illusion.

One of the most interesting books I read on the subject was Software Estimation: Demystifying the Black Art, by Steve McConnell, whose title already says much about the essence of estimates: they are attempts to predict the future. Estimates are guesses.

The consequence is that there is not, and never will be, a definitive rule for estimating software development activities. In fact, "mathematical" estimation methods (COCOMO, Function Points) end up misleading their users into believing that, because the method is mathematical and statistical, the result carries guaranteed accuracy in itself.

This is why, for example, agile methodologies avoid absolute values such as hours and days when estimating. Story points are relative quantities that can vary by team, by project, and by individual developer.
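To make the relativity of story points concrete, here is a small sketch (the backlog and velocities are invented for illustration): the same backlog only turns into calendar time through a team’s measured velocity, so it maps to different durations for different teams.

```python
# Hypothetical illustration: story points are relative units, so the
# same backlog translates into different calendar times depending on
# each team's measured velocity (points delivered per sprint).

backlog_points = [3, 5, 8, 2, 13]  # relative effort estimates

def sprints_needed(points, velocity):
    """Naive projection: total points divided by measured velocity."""
    return sum(points) / velocity

# Two teams with different historical velocities:
for team, velocity in [("Team A", 18), ("Team B", 9)]:
    print(f"{team}: ~{sprints_needed(backlog_points, velocity):.1f} sprints")
# Team A: ~1.7 sprints
# Team B: ~3.4 sprints
```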

So beware of the magical solutions people may try to sell you. Although some techniques look better than others, in fact no one can say in absolute terms which method is best. Doing so would be like claiming to have a better method than everyone else for playing the lottery, when the result is random.

Is there a solution?

The technique most recommended by authors and experts in estimation is to measure productivity. At the individual level, this is one of the main goals of the PSP (Personal Software Process). For teams, one of the pillars of Scrum, inspection, exists precisely to let you monitor the team’s progress and productivity.

Although estimation, whether of tests or of any other software development activity, is more a matter of intuition than a scientific process, it is generally observed that estimates come closer to the future reality when they are based on historical data.
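As a toy illustration of estimating from history rather than from a magic number (all figures below are invented), you can derive the test/development ratio from measured past projects and project it onto new work, refining it as new measurements arrive:

```python
# Toy sketch: derive a test-time ratio from historical measurements
# instead of adopting a fixed magic number. All figures are invented.

past_projects = [
    # (hours spent on development, hours spent on tests)
    (120, 60),
    (80, 50),
    (200, 90),
]

total_dev = sum(dev for dev, _ in past_projects)
total_test = sum(test for _, test in past_projects)
ratio = total_test / total_dev  # observed test/dev ratio

estimated_dev_hours = 150  # estimate for the new project
print(f"Observed test/dev ratio: {ratio:.2f}")
print(f"Projected test time: {estimated_dev_hours * ratio:.0f} hours")
```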

Estimates vs. commitments

A common everyday mistake is to confuse estimates with commitments on the part of the developers.

For example, let’s imagine that a company decides to estimate tests with a magic rule of 50% of development time, but the developers realize they are spending much more time than that writing tests. A common reaction is to try to speed up the pace, or to write fewer tests than planned so as not to "delay the schedule", as if the initial estimate were an obligation to be met. The ideal would be to revise the initial estimate rather than try to conform to it, but in practice...

Software managers often lack the firmness to make the customer wait for a good product (The Mythical Man-Month, 1975)

Quality is a determining factor

The previous quote comes from an article on the Iron Triangle that I wrote some time ago. The triangle concept is important because it shows that quality is proportional to time.

This implies that more quality requires more time. Therefore, the decision to invest in more or less tests at the beginning of the project will directly influence the final quality of the product.

Decreasing time spent testing without harming quality

The title seems to contradict what I just said. But if we adopt Brooks’s separation, in No Silver Bullet, between the essential and accidental difficulties of development, we can say that although there is no way to avoid tests without lowering quality, we can reduce the accidental difficulties of creating them.

This can be achieved in a few ways (a small code illustration follows the list):

  • Training the team to improve productivity
  • Using more suitable tools that facilitate the creation and execution of tests
  • Investing in automation
  • Using technologies (frameworks, platforms) that facilitate testing
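To make the last two bullets concrete, here is a small example (assuming Python and the pytest framework) of tooling reducing the accidental cost of tests: one parametrized function replaces several nearly identical hand-written test cases. The discount function is invented for the example.

```python
# pytest's parametrization turns N near-identical test functions
# into a single declarative table of cases.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Business rule under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

@pytest.mark.parametrize("price, percent, expected", [
    (100.0, 0, 100.0),   # boundary: no discount
    (100.0, 100, 0.0),   # boundary: full discount
    (200.0, 25, 150.0),  # typical case
])
def test_apply_discount(price, percent, expected):
    assert apply_discount(price, percent) == expected

def test_apply_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 101)
```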
  • 3

    Interesting answer (upvote). I just missed (as I commented on the question) a number based on experience from past projects, i.e., "in my last projects we invested X% of the time in tests".

  • 4

    @Joséfilipelyra Unfortunately, my latest professional projects have been correction and maintenance work, so it’s been a while since I took part in a complete development cycle. What I can say about the individual activities I have timed, based on personal metrics, is that I average 1x to 1.5x the development time to create effective unit tests before coding (that is, covering scenarios with boundary values and some error cases), plus 1x to 1.5x the development time for integration tests after coding.

  • 4

    Notes: [1] The time invested in unit tests before coding effectively decreases the coding time. [2] The proportion of test time varies with the complexity of the routine/method. [3] In summary, I can say that the total test time (when done properly) varies between 2x and 3x the coding time. [4] In earlier projects without unit or automated tests (some in which I made corrections), the time spent on corrections tends to infinity, because the error rate never stabilizes, since we cannot correctly measure the impact of the changes.
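A back-of-the-envelope sketch of the ratios quoted in these comments (the 10-hour figure is invented for illustration):

```python
# Applying the quoted ratios: 1x-1.5x for unit tests before coding,
# plus 1x-1.5x for integration tests after coding.
coding_hours = 10  # invented example

low = (1.0 + 1.0) * coding_hours   # 20 h
high = (1.5 + 1.5) * coding_hours  # 30 h
print(f"Total test time: {low:.0f}-{high:.0f} h "
      f"({low / coding_hours:.0f}x-{high / coding_hours:.0f}x the coding time)")
```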

13

I believe this will depend on how important your application is and what the consequences of incorrect behavior are. A more or less innocuous application (such as a video game) will suffer smaller consequences from a failure than one that does your company’s accounting, which in turn will suffer smaller consequences than a system controlling a medical device.

I’ve heard of "dedicating 3x of time to testing for each 1x of development" or "hiring a full-time tester for every 2 developers", but those are just guidelines; they don’t replace a case-by-case analysis of your application domain.

beyond a certain point a test costs more than the code it tests

As I said, depending on the application this may not be the most appropriate threshold. In the case of the medical device, for example, it doesn’t matter if the tests cost several times more than the development; what matters is whether the cost of a failure (lawsuits, indemnities, etc.) is much higher than the cost of development + testing (and that is from a purely capitalist perspective, without even considering the possible loss of human lives).
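One rough way to make that purely capitalist reasoning concrete (every number below is hypothetical) is a simple expected-value comparison: the extra testing pays for itself whenever it costs less than the expected cost of the failures it prevents.

```python
# Hedged expected-value sketch of the trade-off described above.
# Every number is invented for illustration.

failure_cost = 2_000_000     # lawsuits, indemnities, recalls...
p_fail_baseline = 0.05       # failure probability with baseline testing
p_fail_heavy = 0.005         # failure probability with heavy testing
extra_testing_cost = 50_000  # cost of the additional test effort

expected_savings = (p_fail_baseline - p_fail_heavy) * failure_cost
print(f"Expected savings: {expected_savings:,.0f}")
print("Worth it!" if expected_savings > extra_testing_cost else "Not worth it.")
```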

I know that the more time I spend on tests, the less I will spend fixing bugs.

Testing is not the only (or the best) way to prevent bugs. Formal code reviews, for example, tend to contribute more to the quality of software than a large number of test cases (ideally the reviewer is a different person from the developer of a given snippet). Adopting good programming practices also helps.

In short, preventing bugs from appearing is more efficient than detecting them and hunting for their cause. Don’t forget that even if your numerous unit and integration tests detect a bug, you will still have to isolate and fix it. The main advantage of these tests is detecting bugs early - and continuing to watch over the code as the system evolves - but they do little to reduce the time you spend on the debugging task.

Finally, it’s worth remembering that a good system architecture can help a lot in preventing bugs, especially if your project has a large team of developers. The greater the coupling among the various components of your application, the greater the chance that a change in one causes bugs in others. If each component is being developed or modified in parallel by different people, the problem gets worse. The ideal "preventive effort" is therefore greater in a system developed by a large team (where the classic problem of the multiplicity of communication channels arises) than in a small team or for a single developer.
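The "multiplicity of communication channels" is Brooks’s observation that n people need n(n-1)/2 pairwise channels, so coordination cost grows quadratically with team size; a quick sketch shows why the ideal preventive effort grows with the team:

```python
# Brooks's communication-channel count: n people need n*(n-1)/2
# pairwise channels, so coordination grows quadratically.
def channels(n: int) -> int:
    return n * (n - 1) // 2

for n in (2, 5, 10, 20):
    print(f"{n:>2} developers -> {channels(n):>3} communication channels")
# 2 -> 1, 5 -> 10, 10 -> 45, 20 -> 190
```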

  • 2

    "As I said, depending on the application this may not be the most appropriate threshold (Threshold).". Excellent! The quality of the software (and consequently the form and quantity of tests) are requirements that vary from application to application. The test eventually costs more than the code it tests is not necessarily bad: if the requirement is high quality software, nothing more natural than investing time in money on things that increase the quality of the software.

7

In fact, the comparison should not be between time spent writing code and time spent writing tests for that code. It should be between time spent writing automated tests and time spent testing the application manually.

The point is: every piece of code that is written must be tested before it is delivered, and it is virtually impossible to perform effective manual testing after every code change (apart from the fact that a manual test takes much longer than an automated one).

So even though there is no magic recipe that tells you exactly what and "how much" to test, I would say a good tactic is to have excellent test coverage of the code that deals with business rules, good coverage of integration tests to know whether the components are communicating correctly with each other, reasonable coverage of system tests (including user interface tests), and a small share of manual tests. In the beginning, what counts as "good" coverage and what counts as "excellent" coverage is largely guesswork, but as you gain experience and real data, these values adjust themselves. Real data can be, for example, the number of bugs found by the QA team and/or by users.
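One way to turn those coverage targets into something executable once they firm up (a sketch assuming Python with the pytest-cov plugin; the package name and the 80% threshold are made-up starting points to be adjusted with real data) is to fail the build whenever coverage drops below the current target:

```python
# Sketch: enforce a coverage floor so the "good coverage" target
# becomes an executable check rather than a guess. Equivalent
# coverage.py config (.coveragerc):
#   [report]
#   fail_under = 80
import subprocess

result = subprocess.run(
    ["pytest", "--cov=myapp", "--cov-fail-under=80"]  # myapp is hypothetical
)
raise SystemExit(result.returncode)  # non-zero when tests fail or coverage < 80%
```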

2

Actually, since you test before you start programming, I think it’s a matter of practice. As you will be writing better-quality code, you shouldn’t worry about the time you "spend" on testing: as you said yourself, you won’t spend it on bugs, and beyond that you are thinking about the development as a whole. Not writing the system code itself doesn’t mean you aren’t producing; thinking otherwise is a common misconception when starting to work with TDD.
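For readers who have never seen the rhythm this answer describes, here is a minimal TDD-style illustration (a hypothetical slugify function, pytest conventions): the tests are written first and fail, then just enough code is written to make them pass, then you refactor.

```python
# Minimal TDD rhythm in one file, for illustration only.
# Step 1: write the tests first -- they fail while slugify doesn't exist.
# Step 2: write just enough code to make them pass, then refactor.

def slugify(title: str) -> str:
    """Just enough implementation to satisfy the tests below."""
    return "-".join(title.lower().split())

def test_slugify_lowercases_and_joins_words():
    assert slugify("Time Spent Developing Tests") == "time-spent-developing-tests"

def test_slugify_collapses_extra_spaces():
    assert slugify("  hello   world ") == "hello-world"
```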
