I have an application that uses JPA 2 with Hibernate as the provider behind it, and for unit testing I use an in-memory HSQLDB with JUnit 4.11. HSQLDB is configured in a persistence.xml file inside a META-INF folder.
My question is: how do I get a clean database at the beginning of each test without having to manually call a bunch of "DELETE FROM BLA" statements or something similar?
Currently, in my @Before methods, I have a call something like this:
EntityManagerFactory emf = Persistence.createEntityManagerFactory(persistenceUnitName);
The EntityManagers produced are stored in a ThreadLocal variable, to ensure that each thread does not manipulate EntityManagers belonging to other threads.
In the @After methods I call entityManager.clear().
However, the tests are unreliable. Sometimes saved objects disappear for no apparent reason. And often Hibernate's cache deceives me by showing objects that are not persisted but seem to be, so I end up overusing entityManager.clear() as a precaution in places where it should not be necessary.
Does anyone have a better suggestion of a strategy for writing tests using an in-memory HSQLDB with JUnit?
Not to be annoying, but here goes: if you are accessing a database (even an in-memory one), it is not a unit test; it is an integration test. That said, I've had success in the past using DbUnit for these tests, running the appropriate operations in the setUp/tearDown methods (see the sketch below).
– elias
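For reference, a minimal sketch of the DbUnit approach described above; the JDBC URL, credentials and dataset file name are assumptions, not something stated in the discussion:

import java.sql.DriverManager;
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;
import org.junit.After;
import org.junit.Before;

public class MyDaoIT {

    private IDatabaseConnection connection;

    @Before
    public void setUp() throws Exception {
        // Illustrative: connect to the same in-memory HSQLDB used by the persistence unit
        connection = new DatabaseConnection(
                DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", ""));
        // dataset.xml is a hypothetical FlatXml dataset with the rows each test expects
        IDataSet dataSet = new FlatXmlDataSetBuilder()
                .build(getClass().getResourceAsStream("/dataset.xml"));
        // CLEAN_INSERT deletes existing rows and reinserts the dataset,
        // so every test starts from a known state
        DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);
    }

    @After
    public void tearDown() throws Exception {
        connection.close();
    }
}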
@elias, whether or not a test that accesses an in-memory database is a unit test is debatable (and has been discussed for years). One of the counterarguments is that then everything should be called an integration test, since you depend on the proper functioning of the operating system, or of Java itself. In short, I would not change the technology to suit the name; I would change the name to suit the technology. I see a lot more value in testing against real components than in introducing DbUnit just so I can call my tests "unit tests" :-)
– jpkrohling
@jpkrohling the suggestion to introduce DbUnit was to solve the OP's problem; the tests would still be integration tests. = ) The counterargument that "everything should be called an integration test" is invalid, because the difference between the tests is not simply what they depend on to work, but rather the objectives of the tests. Unit tests aim to: 1) give quick feedback, 2) lead to a good design by putting yourself in the user's shoes, and 3) reduce our fear when making changes.
– elias
@jpkrohling To write unit tests in the case presented, the work would probably involve creating a new test suite, using mocks/stubs for collaborators, in order to isolate only the code written in the unit itself (not testing the code of libraries and frameworks, for example), and thus get the desired quick feedback (e.g., >1s is not fast). = ) Recommended reading: http://www.javacodegeeks.com/2012/09/test-driven-traps-part-1.html
– elias
@elias Indeed, I am well aware of the definition of unit tests, but I think the definition is often taken too literally. The main point is that only one unit of your code is tested, regardless of how many things happen behind that code. If a line of my code calls 1,000 lines of Hibernate, then I certainly want to test my integration with Hibernate, since a version change can affect me. Same thing with the JVM, or with native calls. There are cases, of course, where you want to test only your own logic, as with complex algorithms.
– jpkrohling
For the record, I believe that none of the items mentioned is decisive in classifying the OP's tests as unit or integration tests: with the DB in memory, they give quick feedback, encourage a good design (the tests are the first consumers of the code) and certainly reduce the fear at the time of making changes :-)
– jpkrohling
@jpkrohling I believe that everything depends on the purpose of the test. If we are testing a specific component to ensure its proper functioning, even if this component lives in a "bubble" (the database is "fake", the inputs are fake, etc.), then we are doing a unit test. If we assume that two or more components work as they should (for example, after unit testing each of them separately) and we want to see whether they interact properly with each other, then we have integration tests. Response time has nothing to do with it (although in practice we want quick feedback).
– mgibsonbr
@jpkrohling without going on too long (since comments are not the right place for this type of discussion), I believe that in your example the line that interfaces with Hibernate can be unit tested using fake inputs and outputs that correspond to what you expect from Hibernate. But the moment you are interested in how your component interacts with specific versions of Hibernate, then what you're testing is the integration between the two systems.
– mgibsonbr
@mgibsonbr, true, this discussion can go a long way (it is one of the most debated topics in the world of testing). The fact is that the purpose of the test is not to test the integration with Hibernate, yet the test will certainly fail when something in Hibernate changes. And that's a good thing.
– jpkrohling
I'm sorry if I'm repeating something already covered in the previous comments, but just to reinforce: when one test can influence another (whether through execution order or through concurrency), the concept of a unit test is threatened and problems like this will arise. Citing the answer from @Marcoszolnowski, there should be a new database instance for each test (a sketch of this idea follows below).
– utluiz
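A sketch of the "new database instance per test" idea (not taken from the answer referenced above); it assumes the schema is created automatically in the fresh database, for example via hibernate.hbm2ddl.auto=create-drop in persistence.xml:

import java.util.HashMap;
import java.util.Map;
import java.util.UUID;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public final class FreshDatabaseFactory {

    // Creates an EntityManagerFactory bound to a brand-new in-memory HSQLDB,
    // so each test (or test class) starts from an empty, isolated database.
    public static EntityManagerFactory createIsolatedFactory(String persistenceUnitName) {
        Map<String, String> props = new HashMap<String, String>();
        // A unique database name per call keeps tests from seeing each other's data
        props.put("javax.persistence.jdbc.url",
                "jdbc:hsqldb:mem:test-" + UUID.randomUUID());
        return Persistence.createEntityManagerFactory(persistenceUnitName, props);
    }
}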
Response time is quite significant in the medium term, because the consequence of a slow test suite is that people stop running it, and the next consequence is often that they stop writing tests. I speak from experience; been there, done that, and I am often the first one guilty of doing exactly that. = ) Maybe this really is a matter of opinion shaped by experience.
– elias