When I test the API, must I re-test the cases that have already been tested in the service?
No, you don’t have to.
The problem is not redundancy in the sense that the same code ends up being exercised by tests at different levels. The problem is this:
Throwing an exception when the Validacao cannot be saved is a rule of the "service", so it is correct to test that rule when testing the service. But you should not test that rule in the layer above (the "API"), because it is not a rule of that layer.
Even if we simplify and replace the term "layers" with "objects": in your test you are checking, on one object, a rule that is the responsibility of another object, and that is why you got that strange feeling of testing the same thing twice.
Your API isn’t doing much, so it is hard to decide what to test there. In cases like this, I usually wouldn’t test anything: if I don’t know what to test, why would I write a test?
If, on the other hand, the project requires 100% test coverage, you can test whether the API returns a Validacao object with its properties filled as expected. That way you cover the lines of this API without testing anything beyond its own responsibility, and you still maintain 100% coverage.
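A minimal sketch of that kind of test, using hypothetical names (`Validacao`, `ValidacaoApi` and the `criar` method are assumptions, not taken from your code): the test checks only that the API fills the returned object's properties, which is the API's own responsibility.

```java
public class ValidacaoApiTest {
    // Hypothetical stand-in for the domain object returned by the API.
    static class Validacao {
        final String status;
        Validacao(String status) { this.status = status; }
    }

    // Hypothetical stand-in for the API layer: it only delegates and wraps.
    static class ValidacaoApi {
        Validacao criar() {
            // In the real code this would call the service layer.
            return new Validacao("PENDENTE");
        }
    }

    public static void main(String[] args) {
        // Check only the API's own responsibility: the returned object
        // has its properties filled as expected.
        Validacao v = new ValidacaoApi().criar();
        if (!"PENDENTE".equals(v.status)) {
            throw new AssertionError("status should be PENDENTE, was " + v.status);
        }
        System.out.println("ok");
    }
}
```

Note that the test never checks the service's validation rules; it would still pass if those rules changed, which is exactly the decoupling we want.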
Other notes on your code - Exceptions
In ValidacaoDao.save() you are returning null as an error code. Null is a bad error code.
If a method fails to do its job (fails to accomplish what its name suggests), it should either throw an exception or return an error code (if the design decision is to work with error codes instead of exceptions).
The semantics of null is "unknown value", "absent", or "nonexistent", and that by itself does not indicate an error (the consumer may eventually decide it is an error, given the context).
It may be useful to return null from a find method, for example, to indicate that what was sought was not found; the consumer code then decides what to do, for example doing nothing, or throwing an exception if, in the given context, what was sought should have been there.
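To illustrate, a small sketch (the `find` method and the map-backed "database" are invented for the example): the same null return means "not found" to one consumer and becomes an error only where the context demands it.

```java
import java.util.HashMap;
import java.util.Map;

public class FindNullDemo {
    // A toy in-memory "database" for the example.
    static final Map<Integer, String> DB = new HashMap<>();
    static { DB.put(1, "Validacao#1"); }

    // Here null means "not found", which is not an error by itself.
    static String find(int id) {
        return DB.get(id);
    }

    public static void main(String[] args) {
        // Consumer A: absence is acceptable, so it just moves on.
        String maybe = find(99);
        System.out.println(maybe == null ? "not found, ignoring" : maybe);

        // Consumer B: in this context the record must exist, so the
        // CONSUMER is the one who turns the null into an exception.
        String required = find(1);
        if (required == null) {
            throw new IllegalStateException("Validacao 1 should exist");
        }
        System.out.println(required);
    }
}
```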
Anyway, it doesn’t seem that you are actually interested in error codes, since you end up throwing an exception when you detect a null. In that case, instead of returning null, the method ValidacaoDao.save() should itself throw the exception when it cannot do its job. Or, better yet, it should not explicitly throw anything, but simply let propagate whatever exception prevented it from doing its job.
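A sketch of that idea (the `save` method, the `PersistenceException` class, and the simulated failure flag are all hypothetical): the method that fails is the one that throws, so no consumer ever has to interpret a null.

```java
public class SaveDemo {
    // Hypothetical exception type for persistence failures.
    static class PersistenceException extends RuntimeException {
        PersistenceException(String msg) { super(msg); }
    }

    // Instead of returning null on failure, save() throws: a method
    // that cannot do what its name promises should say so explicitly.
    static void save(Object validacao) {
        boolean connectionDown = true; // simulated failure for the example
        if (connectionDown) {
            throw new PersistenceException("could not save Validacao");
        }
    }

    public static void main(String[] args) {
        try {
            save(new Object());
            System.out.println("saved");
        } catch (PersistenceException e) {
            System.out.println("failed: " + e.getMessage());
        }
    }
}
```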
Conclusion
Inevitably, when testing a higher layer, the rules of the lower layers will come into play. But a test should explicitly check only the rules of the layer or object it is testing; it should not check the specific rules of the layers below, which in theory it does not know. Example:
API:
void facaAlgo() {
    if (condicaoRuim_X) {
        throw new ExceptionA("Conditions were unfavorable in the API");
    }
    Service.facaAlgo();
}
Service:
void facaAlgo() {
    if (condicaoRuim_Y) {
        throw new ExceptionB("Conditions were unfavorable in the Service");
    }
}
Now, when testing the service I check for ExceptionB, and when testing the API I check for ExceptionA.
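Those two tests can be sketched as below. This is a self-contained approximation of the example above (the static `condicaoRuim_X`/`condicaoRuim_Y` flags and the method names are adapted so everything compiles in one class): each test triggers and checks only the exception of the layer it targets.

```java
public class LayerExceptionTest {
    static class ExceptionA extends RuntimeException { ExceptionA(String m) { super(m); } }
    static class ExceptionB extends RuntimeException { ExceptionB(String m) { super(m); } }

    static boolean condicaoRuim_X = false;
    static boolean condicaoRuim_Y = false;

    // Service layer: its own rule is condicaoRuim_Y -> ExceptionB.
    static void serviceFacaAlgo() {
        if (condicaoRuim_Y) throw new ExceptionB("Conditions were unfavorable in the Service");
    }

    // API layer: its own rule is condicaoRuim_X -> ExceptionA; then it delegates.
    static void apiFacaAlgo() {
        if (condicaoRuim_X) throw new ExceptionA("Conditions were unfavorable in the API");
        serviceFacaAlgo();
    }

    public static void main(String[] args) {
        // Service test: checks ExceptionB only.
        condicaoRuim_Y = true;
        try {
            serviceFacaAlgo();
            throw new AssertionError("expected ExceptionB");
        } catch (ExceptionB expected) {
            System.out.println("service test ok");
        }
        condicaoRuim_Y = false;

        // API test: checks ExceptionA only, never the service's rule.
        condicaoRuim_X = true;
        try {
            apiFacaAlgo();
            throw new AssertionError("expected ExceptionA");
        } catch (ExceptionA expected) {
            System.out.println("api test ok");
        }
    }
}
```

In a real project these would be two separate test classes, one per layer; they share a file here only to stay runnable as a single example.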
If there were no logic in the API, then I could either:
- not test this API method at all; or
- test only the happy case, validating the results when everything works out, ignoring the specific rules of the layers below.
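The second option might look like this sketch (the delegation method and its return value are invented for illustration): the API has no rules of its own, so the only thing worth asserting is that a successful call propagates the service's result.

```java
public class HappyPathTest {
    // Stand-in service call that succeeds.
    static String serviceFacaAlgo() { return "ok"; }

    // API with no logic of its own: pure delegation.
    static String apiFacaAlgo() { return serviceFacaAlgo(); }

    public static void main(String[] args) {
        // Happy case only: the call succeeds and the result comes through.
        if (!"ok".equals(apiFacaAlgo())) {
            throw new AssertionError("API should propagate the service result");
        }
        System.out.println("happy path ok");
    }
}
```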
As for mocks, use them as little as possible. They take a lot of work and can make our lives miserable, which is the opposite of the goal of automated testing. See these answers for a little more about mocks and other "test doubles":
The purpose of integration testing is, as the name implies, to test how the various components/layers work (integrate) together. Therefore, it should not address the particular concerns of each component/layer, which should be covered by unit tests.
– ramaral