Develop integration tests correctly


We are writing several tests for the application; there are unit tests, and we have started on integration tests.
All communication with the DAO is mocked, but when I test the API, should I re-test the cases that have already been tested in the service?

For example:
API

@RequestMapping(value = "", method = RequestMethod.POST)
public Validacao create() {
    Validacao validacao = new Validacao()
            .setDataAtualizacao(DateTime.nowISODate())
            .setDataCriacao(DateTime.nowISODate())
            .setEmail(activeUser.getUser().getEmail())
            .setInstancia(activeUser.getInstancia().getInstancia());

    validacaoService.create(validacao);

    return validacao;
}

Service

public Validacao create(Validacao validacao) {
    validacao = validacaoDAO.save(validacao);
    if (validacao == null) {
        throw new InternalServerErrorException("An error occurred while creating the validation.");
    }
    return validacao;
}
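To make the question concrete, here is a minimal, self-contained sketch of the service unit test for the null case. The class and method names (`Validacao`, `ValidacaoDAO`, `InternalServerErrorException`) are taken from the snippets above; the DAO is stubbed by hand with a lambda so the example runs without a mocking library (in a real project you would likely use Mockito).

```java
// Self-contained sketch: the DAO is stubbed by hand instead of mocked.
// Names are assumed from the question's snippets.
class Validacao {}

class InternalServerErrorException extends RuntimeException {
    InternalServerErrorException(String msg) { super(msg); }
}

interface ValidacaoDAO {
    Validacao save(Validacao v);
}

class ValidacaoService {
    private final ValidacaoDAO dao;

    ValidacaoService(ValidacaoDAO dao) { this.dao = dao; }

    Validacao create(Validacao v) {
        Validacao saved = dao.save(v);
        if (saved == null) {
            throw new InternalServerErrorException("An error occurred while creating the validation.");
        }
        return saved;
    }
}

public class ValidacaoServiceTest {
    public static void main(String[] args) {
        // Stub DAO that simulates a save failure by returning null
        ValidacaoService service = new ValidacaoService(v -> null);
        boolean thrown = false;
        try {
            service.create(new Validacao());
        } catch (InternalServerErrorException e) {
            thrown = true;
        }
        if (!thrown) {
            throw new AssertionError("expected InternalServerErrorException when DAO returns null");
        }
        System.out.println("null-save case is covered by the service unit test");
    }
}
```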

In the Service unit test, there is already a test case for when the validation is null. Should this test be repeated in the integration test? Isn't that redundant?
There are also cases where my Service uses another Service: should that be mocked, or should a new instance be created?

  • The purpose of integration testing is, as the name implies, to test how the various components/layers work (integrate) together. Therefore, it should not address the particular concerns of each component/layer, which should be covered by unit testing.

2 answers

When I test the API, must I re-test the cases that have already been tested in the service?

No, you shouldn’t.

The problem is not redundancy in the sense that the same code is run by tests at different levels. The problems are:

  • You are explicitly testing the same rule twice.

  • You are testing on one layer of the system the rules of another layer.

See, throwing an exception when the Validacao cannot be saved is a "service" rule, so it is correct to check this rule when testing the service. But you should not check this rule in the layer above (the "API"), because it is not a rule of that layer.

Even if we simplify and swap the term "layers" for "objects": in your test you are checking, on one object, a rule that is the responsibility of another object, and that is why you got this strange feeling of testing the same thing twice.

Your API isn’t doing much, so it’s hard to decide what to test there. In cases like this, I usually wouldn’t test anything. If I don’t know what to test, why would I write a test?

If, on the other hand, the project requires 100% test coverage, you can test whether the API returns a Validacao object with its properties filled as expected. That way you cover the lines of this API without testing anything outside its responsibility, and you keep 100% coverage.
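A minimal sketch of that happy-path check at the API level. The controller is reduced to a plain class (no Spring annotations) so the example is self-contained, and the service is stubbed with a lambda; only the fields of the returned object are checked, nothing from the layers below. The email value and class shape are illustrative.

```java
// Happy-path sketch: check only what the API layer itself does
// (filling the Validacao fields), with the service stubbed out.
class Validacao {
    String email;

    Validacao setEmail(String email) { this.email = email; return this; }
}

interface ValidacaoService {
    Validacao create(Validacao v);
}

class ValidacaoController {
    private final ValidacaoService service;
    private final String activeUserEmail; // stand-in for activeUser.getUser().getEmail()

    ValidacaoController(ValidacaoService service, String activeUserEmail) {
        this.service = service;
        this.activeUserEmail = activeUserEmail;
    }

    Validacao create() {
        Validacao v = new Validacao().setEmail(activeUserEmail);
        service.create(v);
        return v;
    }
}

public class ApiHappyPathTest {
    public static void main(String[] args) {
        // Service stub that just returns its argument
        ValidacaoController api = new ValidacaoController(v -> v, "user@example.com");
        Validacao result = api.create();
        if (!"user@example.com".equals(result.email)) {
            throw new AssertionError("API should fill the e-mail from the active user");
        }
        System.out.println("API fills the Validacao fields as expected");
    }
}
```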

Other notes on your code - Exceptions

In validacaoDAO.save() you are returning null as an error code. Null is a bad error code.

If a method failed to do its job (failed to accomplish what its name suggests) it should either throw an exception or return an error code (if its design decision is to work with error codes instead of exceptions).

The semantics of null is "unknown value", "not set", or "nonexistent", and that by itself does not suggest an error (though the consumer may decide it is an error given the context).

It may be useful to return null from a find method, for example, indicating that what was sought was not found; the consumer code then decides what to do, for example doing nothing, or throwing an exception if, in the given context, what was sought should have been there.

Anyway, it doesn’t seem that you are really interested in error codes, since you actually throw an exception when you detect a null. In that case, instead of returning null, validacaoDAO.save() should throw the exception itself when it cannot do its job. Or better still, it should not explicitly throw anything, but simply let any exception that prevented it from doing its job propagate.

Conclusion

Inevitably, when testing a higher layer, the rules of the lower layers will come into play. But a test should explicitly check only the rules of the layer or object it is testing, and should not check the specific rules of the layers below, which in theory it does not know about. Example:

API:

void facaAlgo() {
    if (condicaoRuim_X) {
        throw new ExceptionA("Conditions were unfavorable in the API");
    }
    Service.facaAlgo();
}

Service:

void facaAlgo() {
    if (condicaoRuim_Y) {
        throw new ExceptionB("Conditions were unfavorable in the Service");
    }
}

Now, when testing the Service, I check for ExceptionB, and when testing the API, I check for ExceptionA.

If there were no logic in the API, then I could:

  • not test this API method at all; or
  • test only the happy path, validating the results when everything works, ignoring the specific rules of the layers below.
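The per-layer checks described above can be sketched as follows. The condition flags are simplified to constructor parameters, and class names are illustrative; the point is that each layer's test triggers and checks only its own exception.

```java
// Sketch: each layer's test checks only that layer's rule.
class ExceptionA extends RuntimeException { ExceptionA(String m) { super(m); } }
class ExceptionB extends RuntimeException { ExceptionB(String m) { super(m); } }

class Service {
    private final boolean condicaoRuimY;

    Service(boolean condicaoRuimY) { this.condicaoRuimY = condicaoRuimY; }

    void facaAlgo() {
        if (condicaoRuimY) {
            throw new ExceptionB("Conditions were unfavorable in the Service");
        }
    }
}

class Api {
    private final boolean condicaoRuimX;
    private final Service service;

    Api(boolean condicaoRuimX, Service service) {
        this.condicaoRuimX = condicaoRuimX;
        this.service = service;
    }

    void facaAlgo() {
        if (condicaoRuimX) {
            throw new ExceptionA("Conditions were unfavorable in the API");
        }
        service.facaAlgo();
    }
}

public class LayeredTests {
    public static void main(String[] args) {
        // Service test: trigger and check only ExceptionB
        boolean b = false;
        try { new Service(true).facaAlgo(); } catch (ExceptionB e) { b = true; }
        if (!b) throw new AssertionError("Service test should see ExceptionB");

        // API test: trigger and check only ExceptionA
        boolean a = false;
        try { new Api(true, new Service(false)).facaAlgo(); } catch (ExceptionA e) { a = true; }
        if (!a) throw new AssertionError("API test should see ExceptionA");

        System.out.println("each layer's test checks only its own rule");
    }
}
```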

As for mocks, use as few as possible. They take a lot of work and can make our lives miserable, which is the opposite of the goal of automated testing. See these answers for a little more about mocks and other "test doubles":

  • Great answer! Testing is already very tedious, and on top of that we have to use mocks all the time. One observation about the DAO: it was decided that its only job is communication with the database; any and all logic is the responsibility of the Service.

  • @Danielamarquesdemorais There was a time when I wrote tests as the icing on the cake, the final touch to deliver a complete piece of work. I still think tests are essential to delivering a complete job, but I no longer see them as the icing on the cake; on the contrary, they are its foundation. Just like you, I find it tedious to write tests after the code is done, so I do it differently and write the tests first :-) I recommend studying TDD (read Kent Beck’s book; I think you’ll like it).


Don’t worry about redundancy. Integration tests exist to test how the various components of your system work together.

Especially the failure cases. In fact, tests are the perfect place to see what happens when an exception is thrown three or more layers below the service being tested.

Persistence and access to external services (SOAP, REST, JMS, etc.) can be mocked, but I prefer to use real services as much as possible in integration tests.

Finally, beware of academicisms. Do what works for your project, without fear of breaking the rules.

PS: If this is an assignment for your college course, your teacher will probably tell you I’m wrong and give you a low grade.

  • Really, but I worry mainly about following good practices (since testing is not my area); it is dangerous to create poorly written tests that then harm maintenance and the test results themselves.

  • More dangerous is having no test.

  • And what’s this "it’s not my area" story? You develop, you write tests... Dedicating a person, or a whole area, exclusively to testing is the kind of thing that assumes software is made on an assembly line. Writing tests should be part of your routine as a developer; you shouldn’t rely on other people to test your code. You don’t have to be a TDD fanatic, but writing tests makes your life easier. QA should be the responsibility of the team as a whole.
