Software Testing
In Wikipedia's terms:
Software testing is the investigation of software carried out to provide information about its quality in relation to the context in which it is intended to operate. This includes the process of using the product to find its defects.
Testing is the most general term for checking whether a given piece of software works. The subject may be an entire product, part of it, a single method, and so on.
Tests can be performed by the developer himself, by a specialist tester, or by a system user, and can occur at any stage of the project, depending on the adopted model (waterfall, iterative, evolutionary).
Most projects have a testing phase: once the features of a release are frozen, the team's main focus becomes discovering and fixing defects that are still hidden.
Tests can be divided into several types and classifications.
Knowledge about the software
- White-box testing: evaluates the internal workings of the software; for example, whether certain methods perform correctly.
- Black-box testing: evaluates the behavior of the software through its interfaces; for example, when a user operates the system to see whether it returns the expected values after a calculation.
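As a sketch of this distinction (the `Discount` class and its 10-item rule are hypothetical, invented for illustration), a white-box test reaches into a specific internal method, while a black-box test only exercises the public interface and checks the returned value:

```java
// Hypothetical example: white-box vs. black-box view of the same class.
public class DiscountTest {

    static class Discount {
        // Internal rule that a white-box test would target directly.
        static boolean isEligible(int items) {
            return items >= 10;
        }

        // Public interface that a black-box test exercises.
        static double finalPrice(double price, int items) {
            return isEligible(items) ? price * 0.9 : price;
        }
    }

    public static void main(String[] args) {
        // White-box: checks the internal method's logic directly.
        if (!Discount.isEligible(10)) throw new AssertionError("10 items should be eligible");

        // Black-box: checks only input -> output through the public interface.
        if (Discount.finalPrice(100.0, 10) != 90.0) throw new AssertionError("expected 10% off");

        System.out.println("ok");
    }
}
```

The same behavior is being verified in both cases; what changes is whether the test is allowed to know how the software is built internally.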
The nature of the test
Tests can be performed at various levels and for various purposes:
- Unit test: tests specific parts of the system, such as classes and methods.
- Integration test: tests several components of the system running together.
- System test, also called homologation or SIT (System Integration Testing): execution of the system from the user's point of view, although not performed by the end user.
- Acceptance test or UAT (User Acceptance Testing): performed by the user to verify that the software complies with what was contracted.
- Regression test: previously executed tests are run again after the software is modified, to ensure that no unexpected side effects were introduced.
Non-functional testing
In addition to verifying that the system's implementation is correct, certain types of tests verify its non-functional aspects. For example:
- Performance testing: checks the system's performance under a normal user load; for example, an average response time of 2 seconds with up to a thousand users.
- Load (volume) testing: checks the maximum capacity of the system, i.e., the point at which it hangs or fails to respond within an acceptable time.
- Stress (resilience) testing: checks the system's behavior and its ability to recover from unexpected failures such as power outages, database failures, and access spikes.
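A performance check can be sketched as timing an operation against a response-time budget (the operation, and the 2-second budget echoing the example above, are illustrative assumptions, not part of any real suite):

```java
// Minimal sketch of a performance check: time an operation against a budget.
public class PerfCheck {

    // Stand-in for the operation being measured (hypothetical).
    static long sumUpTo(long n) {
        long total = 0;
        for (long i = 1; i <= n; i++) total += i;
        return total;
    }

    public static void main(String[] args) {
        long budgetMillis = 2000; // e.g. "average response time of 2 seconds"

        long start = System.nanoTime();
        long result = sumUpTo(1_000_000);
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

        System.out.println("result=" + result + ", elapsed=" + elapsedMillis + "ms");
        if (elapsedMillis > budgetMillis) {
            throw new AssertionError("exceeded " + budgetMillis + "ms budget");
        }
    }
}
```

Real load and stress tests would run many such operations concurrently (typically with a dedicated tool) rather than a single call, but the pass/fail criterion is the same idea: measured behavior against a stated target.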
Test automation
It is possible to perform all types of tests without automation. On the other hand, there is a great advantage in automating some of them so they can be repeated effortlessly.
A unit test can be performed by creating an independent class or script that exercises the target methods and classes; in Java, for example, this could simply be a main method.
But when executed through an automation framework, the same unit test can be run as often as needed, which turns it into a regression test at no additional cost. In Java this can be done with JUnit.
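The ad-hoc approach described above, a plain main method acting as the test, could look like this (the StringUtils.reverse method under test is a hypothetical example):

```java
// Ad-hoc unit test without any framework: a plain main method.
public class StringUtilsTest {

    static class StringUtils {
        // Method under test (hypothetical example).
        static String reverse(String s) {
            return new StringBuilder(s).reverse().toString();
        }
    }

    public static void main(String[] args) {
        if (!StringUtils.reverse("abc").equals("cba")) throw new AssertionError("reverse failed");
        if (!StringUtils.reverse("").equals("")) throw new AssertionError("empty case failed");
        System.out.println("all tests passed");
    }
}
```

Under JUnit, each check would instead become a method annotated with @Test, and the framework, rather than main, takes care of running and reporting every test; that is what makes cheap, repeated regression runs possible.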
TDD (Test-Driven Development) is a development methodology whose main idea is to invert the "traditional" sequence of development by putting the test first, before the implementation.
Each test is written according to its respective requirement, so you can track progress as each test goes from failing to passing.
Many people write unit tests and think they are doing TDD. It is not the same thing. What happens is that teams adopting TDD usually use automated unit tests to speed up the process.
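One TDD micro-cycle can be sketched like this (the "classify even/odd" requirement is invented for illustration): the test is written first and fails, the minimal implementation makes it pass, and then you refactor and move to the next tiny requirement:

```java
// One TDD micro-cycle, sketched in plain Java.
public class TddCycle {

    // Step 2 (green): minimal implementation, written only after the test below existed.
    static String classify(int n) {
        return n % 2 == 0 ? "even" : "odd";
    }

    public static void main(String[] args) {
        // Step 1 (red): this check was written first, from the requirement
        // "classify(2) must return \"even\"", and initially failed.
        if (!classify(2).equals("even")) throw new AssertionError("requirement: 2 is even");

        // Step 3: refactor, then write the next small test for the next requirement.
        if (!classify(3).equals("odd")) throw new AssertionError("requirement: 3 is odd");

        System.out.println("green");
    }
}
```

The difference from merely "writing unit tests" is the order and granularity: one small requirement, one failing test, the minimal code to pass it, then refactoring, repeated.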
There is a little more information about testing and TDD in this other answer of mine.
QA is not directly related to software: it is an area that seeks to ensure quality, in all aspects of a project or service, through auditing processes.
It has its own techniques, certifications and processes. All this is independent of the software development cycle.
There is a very important conceptual confusion about TDD here and in the other linked answer. They suggest that you need to have the requirements, write the tests, and then make the tests pass, all in the plural. TDD is exactly the opposite: you have one tiny requirement, write one test, make that test pass, and then refactor and write the next test. TDD is meant to allow the requirements to evolve throughout the project. Each test ensures one small requirement, and the test suite ensures that nothing has been broken. That is the feeling TDD gives!
– Caffé
@Caffé I agree in part. I changed the two answers to remove the plural, because the original idea is indeed to write each test individually, or at least to have the ability to do so. Since TDD came out of XP, the idea is to evolve the system's architecture gradually. In practice this rarely works, because in a team with developers working in parallel the result would be parallel architectures, duplicated code, and a huge merge job. It is best to have a lean initial architecture and tests for at least the most critical requirements from the start.
– utluiz
@Caffé I also agree that TDD allows building the system incrementally, but saying that TDD "allows the evolution of requirements throughout the project" is exactly the opposite of what the creator of TDD himself says (see the video at the end of the other answer). If the requirements "evolve" in the sense that they change over time (are unstable), then at each change the previous tests no longer represent the expected behavior of the system; therefore TDD may not be suitable, unless it is worthwhile, or even possible, to rewrite the tests countless times.
– utluiz
I have just watched the video again (I watched this hangout and the others live at the time). I really cannot see where Kent Beck suggests that evolving requirements are not supported by TDD. The whole philosophy (and its tools, including XP and TDD), developed over several years and later formalized in the Agile Manifesto, is grounded in the fact that requirements change throughout the project. CHANGE is the foundation of Agile and therefore the reason for its tools, such as TDD. Another Agile tool is Continuous Integration, which is precisely where the problem of laborious merges is addressed.
– Caffé
Philosophy aside, what I have seen in practice is TDD, together with other practices, actually working. Of course you have to learn how to do it first, and Wikipedia and other misguided articles on the Internet unfortunately do not help much. And of course TDD is not the only thing that works: I learn a lot every day from other programmers who do fantastic work and make a lot of money for their companies without even knowing what TDD is (although some think they know).
– Caffé
By the way, @utluiz, I am not trying to tell you anything new. It is clear to me that, although we express these concepts in different ways, you know them well. My intention is just to present another point of view for those who pass through here: the view that TDD works and is useful precisely in projects where requirements change.
– Caffé
@Caffé Perfect. I probably did not give a complete and realistic picture of what I think about TDD. I defend the methodology as a lifestyle, not only as a specific practice within development projects. However, my answer stressed the criticism I have of how the process ends up being executed in practice. The truth is that most of the projects I see start out well but end up abandoning TDD when many changes occur, or in later stages, because it takes a lot of work to refactor the tests as the requirements change.
– utluiz
Of course, one could argue that this is the result of developer laziness, inexperienced professionals, insufficient time, bad architecture, and so on. But since the vast majority of companies suffer from these problems, TDD in its pure form can rarely be applied well. As for the video, I probably mixed things up: I think it was David who said something about TDD not being viable for certain types of projects. And although he is right in one way or another, he still sounds like an immature teenager complaining about the previous generation.
– utluiz