How to find a pattern to maintain the same data modeling between a JSON, POJO, and JPA object?


To make my question easier to understand, consider the following example:

A POJO:

public class Person {

    private String name;
    private String location;

}

A JSON string:

String json = "{\"name\":\"Jose\", \"location\":\"Eslovenia\"}";

Deserialization:

Gson gson = new GsonBuilder().create();
Person p = gson.fromJson(json, Person.class);

Insert into the database:

 Session session = HibernateUtil.getSessionFactory().openSession();
 session.beginTransaction();

 session.save(p);
 ...

I notice that in several examples the fields always follow the same pattern, both from the JSON object to the Java object and from the Java object to the database fields.

My question is not at the code level but conceptual:

1 - In examples with a few fields this works fine, but in a real system, is it possible to maintain such a pattern across the three levels (JSON, Java, and database) with many more fields and more complexity?

2 - If it is possible to maintain it, is working this way and following this type of pattern considered good programming practice?

3 - Is there any case in which such a pattern would be impractical?

4 - Should the database modeling be reflected in the POJO, and should the POJO serve as the basis for the JSON object?

Any other considerations beyond these questions are also very welcome.

2 answers


I see no problem maintaining this pattern in simpler cases such as CRUD, but in more complex models I have used one object that represents the query in the database and another to transfer data to the view. That way I can write a query with the best possible performance and use a DTO to represent the JSON needed for communication with the view.
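As a rough sketch of that split (the class and field names below are only illustrative, not taken from the question): the entity follows the relational model and may carry data the view never needs, while the DTO carries only what the JSON exchanged with the view requires.

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// Entity mapped to the table; its shape follows the relational model.
@Entity
public class Person {

    @Id
    @GeneratedValue
    private Long id;

    private String name;
    private String location;
    private String document; // sensitive, not always meant to leave the server

    // getters and setters omitted
}

// DTO that represents only the JSON the view needs.
public class PersonDTO {

    private String name;
    private String location;

    // getters and setters omitted
}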

Database modeling should not restrict what you do in code: since in general we use relational databases while working with object-oriented languages, we would stop taking advantage of features such as inheritance if it did.

In practice I have used Dozer to perform the transformation from the POJO or VO to the DTO; it transfers data from one class to another by convention (fields with the same name) or by configuration described in an XML file.
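A minimal sketch of that convention-based copy with Dozer, assuming the Person and PersonDTO classes above (a mapping XML would only be needed when field names differ):

import org.dozer.DozerBeanMapper;
import org.dozer.Mapper;

public class PersonAssembler {

    // Dozer copies fields with matching names by convention;
    // divergent names can be declared in a mapping XML.
    private final Mapper mapper = new DozerBeanMapper();

    public PersonDTO toDto(Person person) {
        return mapper.map(person, PersonDTO.class);
    }
}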



Imagining that this JSON will be used in an API:

1 - In examples with a few fields this works fine, but in a real system, is it possible to maintain such a pattern across the three levels (JSON, Java, and database) with many more fields and more complexity?

Everything is possible :). In small projects, I believe you can maintain this structure and it will work very well for a small monolithic system, and you will be quite productive with it. But over time you will realize that it does not hold up.

2 - If it is possible to maintain it, is working this way and following this type of pattern considered good programming practice?

I don't see a problem if this makes your life easier. As I said before, for simple cases it will work and will save you a lot of time. But if this system tends to grow a lot and gain complexity, this model will have an expiration date.

The first time you cannot, or do not want to, obey this behavior, you will start to move away from the pattern, it will stop making sense, and it will begin to bring headaches.

3 - Is there any case in which this pattern would be impractical?

Citing a few examples:

  • Imagine that you have a POJO Pessoa (I believe that in your case the POJO is a JPA/Hibernate entity, but let's call it a POJO) with 15 fields (name, date of birth, sex, age, marital status, etc.). Among them, 5 are mandatory. Does it make sense for all 15 to always travel in your request/response? If you have fields with 100 or even 1000 characters, do you always need to return them? And if you have sensitive information about Pessoa (primary key, document number, etc.), you will need to control it with Gson annotations in the POJO so that it is not sent (you may have cases where you want to send it and others where you don't); a short Gson sketch of that kind of control follows this list. And the list goes on...
  • If you need to change your POJO (split a table into two other tables, move fields to another POJO, rename a field, etc.), your JSON will also change... what impact will that change have on the applications that send/consume this JSON every time it occurs?
  • If you need to provide a microservice that saves Pessoa for other systems, it is likely that the attributes of Pessoa are not the ideal contract for the other systems that will communicate with it.
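Regarding the first item, a minimal sketch of that field-by-field control with Gson annotations (the fields shown are only illustrative):

import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import com.google.gson.annotations.Expose;

public class Pessoa {

    @Expose
    private String name;

    @Expose
    private String location;

    // No @Expose: primary key and document are kept out of the JSON.
    private Long id;
    private String document;
}

// Only fields marked with @Expose are serialized/deserialized.
Gson gson = new GsonBuilder()
        .excludeFieldsWithoutExposeAnnotation()
        .create();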

4 - Should the database modeling be reflected in the POJO, and should the POJO serve as the basis for the JSON object?

A model that is simple and works well, in my view, is the following:

  • The service that will provide this JSON is represented by a DTO, which is what gets serialized/deserialized. It is used only to carry the information that is actually necessary.
  • The DTO cannot access the POJO; it would live in a different project. The reason? If you tie your DTO to the POJO, you are held hostage by changes to the POJO, even if to a much lesser degree than in the model proposed in your question. In my view, changes to the POJO should never be reflected directly in your DTO.

As a suggestion for organization, without making things too complex, I think of a structure with three projects:

  • project-api (depends on the project-service):
    • Contains: DTOs.
    • Responsibility: fills and serializes/deserializes the DTO with the information received from the service. You can use VOs for this communication with the project-service.
  • project-service (depends on the project-domain):
    • Contains: services (for business rules) and VOs.
    • Responsibility: invokes the search/save/update/remove methods of the project-domain, passing the POJO, and uses VOs to communicate with the project-api.
  • project-domain:
    • Contains: POJOs.
    • Responsibility: saves the POJO to the database, containing the search/save/update/remove methods.

It can be improved, this is just an example.
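As a very small sketch of how these layers might hand data to each other, with hypothetical PessoaDTO, PessoaVO and Pessoa classes (each living in its own project) plus the HibernateUtil from the question; the dependencies point only downwards, api -> service -> domain:

import com.google.gson.Gson;
import org.hibernate.Session;

// project-api: receives/produces JSON and talks to the service through a VO.
public class PessoaResource {

    private final PessoaService service = new PessoaService();

    public void save(String json) {
        PessoaDTO dto = new Gson().fromJson(json, PessoaDTO.class);
        PessoaVO vo = new PessoaVO(dto.getName(), dto.getLocation());
        service.save(vo);
    }
}

// project-service: business rules; converts the VO into the entity.
public class PessoaService {

    private final PessoaRepository repository = new PessoaRepository();

    public void save(PessoaVO vo) {
        Pessoa pessoa = new Pessoa();
        pessoa.setName(vo.getName());
        pessoa.setLocation(vo.getLocation());
        repository.save(pessoa);
    }
}

// project-domain: persistence only.
public class PessoaRepository {

    public void save(Pessoa pessoa) {
        Session session = HibernateUtil.getSessionFactory().openSession();
        session.beginTransaction();
        session.save(pessoa);
        session.getTransaction().commit();
        session.close();
    }
}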

As for the nomenclature of VO, POJO, DTO, etc., don't worry; it is not strictly defined anywhere. Just establish a convention for your project and use each term consistently.

  • First of all, thank you for the very detailed answers. The idea is very close to what Fabio said earlier, practically a consensus, which is good. What I understood is that, in order not to be so rigid and to allow some modularity, a DTO would receive the data, a POJO would be used for persistence, and a VO would return the data? In the previous answer it was suggested to use Dozer to do the field assignment automatically, with small customizations, between POJO and DTO. In that sense, do you have any suggestions?
