How to integrate microservices?

Viewed 2,737 times

37

The idea of microservices is good, but I don't really understand how to solve certain problems. Maybe the problem is how it is "sold": there isn't much said about when to use it or when to avoid it. It sounds as if anything can adopt the technology without too many bumps, but nothing is that simple and universal.

The application side is easy enough to understand; what I want to understand is the database side.

From what I read it does not seem very complicated: you can request data from different databases and assemble everything. You can ask whether this is the most efficient approach, but I don't see very strong complications, at least no more than the complications microservices already bring.

I understand that certain activities of a solution are isolated and can easily become a microservice. But others seem too integrated to be separated. What's more, something may look great on its own and then turn out to be a case that needs integration.

Specifically, I am thinking of transactions. What is the solution when you make a sale and must update several data tables? Does everything have to live in a single microservice? But some of those tables also participate in another transaction that involves yet other tables. It seems to me that all the tables should stay together, even the ones that are not directly related. That is, you break the application into specific services but still keep a single point of failure and maintenance; it has its advantages, but it is not so great.

One solution would be to build a transaction mechanism in the application that guarantees ACID across several databases, but that is very complicated. Does this only work where the trade-offs of the CAP theorem are admittedly acceptable?

Anyway, is there a solution for splitting the database into microservices when ACID is needed? Does it only work with some database technologies that support this? Am I mistaken in some assumption of mine?

  • What would be CAP?

  • 1

    @Dherik actually there is no question about that here on the site; one would be useful.

  • 1

    Hello @Maniero, I added one more point: synchronous messaging. It ensures a bit of consistency between microservices when you don't want to deal, at that point, with eventual consistency.

3 answers

26


I think resorting to a distributed transaction is the last resort when using microservices.

I understand that certain activities of a solution are isolated and can easily become a microservice. But others seem too integrated to be separated.

It is likely, then, that they should not be separated.

What's more, something may look great on its own and then turn out to be a case that needs integration.

For this reason, every microservice needs a well-defined purpose, isolated from the others. When one microservice depends heavily on another, it is probably because they should be a single microservice.

For these and other reasons I am a supporter of Monolith First: only after the team knows the business rules and some domains of the application well can it make the separation calmly and painlessly. We should abandon the idea that a monolith is a dated architecture, because in many circumstances (most, I would say) it is not.

However, there are exceptions. For example, if you are working with a large team (more than 10-15 people) and the team has a good command of the business rules, a monolith can be a mistake, since many people end up working on the same code base. In this case, going straight to microservices can be a good strategy: people can be divided into smaller teams, each focused on a different microservice.

Specifically, I am thinking of transactions. What is the solution when you make a sale and must update several data tables?

If you are talking about several tables in different databases: the idea with microservices is that each microservice has its own database, not shared with other microservices, and each microservice talks to the others in a non-transactional way. The last thing you want in a "microservices architecture" is a distributed transaction.

This approach is known as eventual consistency: your microservices need to be prepared to stay in an inconsistent state for some time, and eventually they will become consistent.

An example

You want to save a person and their access permissions. The person is saved in a microservice ms-pessoa and the permissions in a microservice ms-permissao.

Instead of only saving the person if the permissions are also saved in the other microservice, you save the person and tell the microservice ms-permissao to save the permissions. ms-pessoa returns "ok" to the user as soon as the person is saved, even without an answer that everything went well in ms-permissao.

The idea, then, is that ms-pessoa accesses the permissions in ms-permissao prepared to find them (or not), and knows how to handle each case in its own way.
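As a rough sketch of this flow, assuming an in-memory queue in place of a real message broker (all function and variable names here are hypothetical):

```python
import queue

event_bus = queue.Queue()  # stands in for a real broker (RabbitMQ, Kafka, ...)
pessoas = {}               # local database of ms-pessoa
permissoes = {}            # local database of ms-permissao

def criar_pessoa(pessoa_id, nome, perms):
    """ms-pessoa: save locally, publish the event, answer 'ok' right away."""
    pessoas[pessoa_id] = {"nome": nome}
    event_bus.put({"tipo": "PESSOA_CRIADA",
                   "pessoa_id": pessoa_id, "permissoes": perms})
    return "ok"  # we do NOT wait for ms-permissao

def ms_permissao_consome():
    """ms-permissao: consume events asynchronously (eventual consistency)."""
    while not event_bus.empty():
        evento = event_bus.get()
        if evento["tipo"] == "PESSOA_CRIADA":
            permissoes[evento["pessoa_id"]] = evento["permissoes"]

print(criar_pessoa(1, "Ana", ["admin"]))  # ok (permissions not saved yet)
print(1 in permissoes)                    # False: temporarily inconsistent
ms_permissao_consome()
print(permissoes[1])                      # ['admin'] once consistent
```

The key point is that `criar_pessoa` answers "ok" before ms-permissao has done anything; the system is briefly inconsistent and becomes consistent once the event is consumed.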

Of course, something can go wrong when saving the permissions. Here lies one of the difficulties of a microservice architecture. In these cases the solution (or prevention) comes in different forms:

  • Fix the problem in ms-permissao and reprocess the message with the permissions.
  • Perform a "rollback" of the saved person upon detecting the problem. This may mean simply removing the person or changing some status the person has.
  • Mark the person with an "inactive" status until you are sure everything went well in ms-permissao.
  • Intervene manually in the messages to solve the problem.
  • Send a synchronous message: also known as RPC (Remote Procedure Call), it can be done via messaging, via an HTTP request (taking the proper care), etc. The call goes to the permissions service and waits for its response before continuing the creation of the person. It is not the expected way for microservices to communicate, but it can help in cases where a certain microservice needs something done before continuing its processing and you do not want to deal with eventual consistency. RPC abuse is usually a symptom that your microservices are more separated than they should be.
  • If the permissions microservice ms-permissao belongs to another company/system that you do not control and do not fully trust, it may be necessary to adopt Poison Message Processing to deal with possible integration problems.

And so on.
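The synchronous (RPC-style) option from the list above can be contrasted with the eventual-consistency flow in a minimal sketch; assume `ms_permissao_salvar` stands in for a blocking HTTP or request/reply call to the other service (the names are invented for illustration):

```python
def ms_permissao_salvar(pessoa_id, perms):
    """Stands in for a blocking call to ms-permissao."""
    if not perms:
        raise ValueError("no permissions provided")
    return {"status": "ok"}

def criar_pessoa_sincrono(pessoa_id, nome, perms):
    """ms-pessoa waits for ms-permissao before answering the user."""
    try:
        # Blocks until the other service answers; no eventual
        # consistency to manage, but the services are now coupled.
        ms_permissao_salvar(pessoa_id, perms)
    except ValueError:
        return "erro"  # nothing is saved; the user sees the failure
    return "ok"

print(criar_pessoa_sincrono(1, "Ana", ["admin"]))  # ok
print(criar_pessoa_sincrono(2, "Bia", []))         # erro
```

The trade-off is visible: the caller gets an immediate, consistent answer, but ms-pessoa is now unavailable whenever ms-permissao is, which is why heavy use of this style suggests the services should be one.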

Many of the solutions above are only viable if you also implement a DLQ (Dead Letter Queue) for the queue that has delivery problems. Problematic messages go to this queue and wait for a decision: send the message back to the main queue, fix something in the message and resend it to the main queue, use the problematic message itself to perform the rollback mentioned earlier, etc.
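A rough illustration of the DLQ idea, with in-memory queues standing in for a real broker (the retry limit and message shapes are assumptions):

```python
import queue

main_queue = queue.Queue()
dlq = queue.Queue()   # dead letter queue: parked messages await a decision
MAX_TENTATIVAS = 3    # assumed retry limit before giving up

def handler(msg):
    """Stands in for the real message processing; fails on 'broken' messages."""
    if msg.get("quebrada"):
        raise ValueError("poison message")

def consumir():
    while not main_queue.empty():
        msg = main_queue.get()
        try:
            handler(msg)
        except ValueError:
            msg["tentativas"] = msg.get("tentativas", 0) + 1
            if msg["tentativas"] >= MAX_TENTATIVAS:
                dlq.put(msg)         # park it for manual intervention
            else:
                main_queue.put(msg)  # retry later

main_queue.put({"id": 1})
main_queue.put({"id": 2, "quebrada": True})
consumir()
print(dlq.qsize())  # 1: only the poison message ended up in the DLQ
```

Real brokers (RabbitMQ dead-letter exchanges, Kafka retry topics) provide this routing for you; the sketch only shows the decision flow.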

This example of user and permissions is borderline; maybe they should not be separated. But it is useful for showing a more complicated scenario.

Does everything have to be in a single microservice?

Then we would always be doomed to having monolithic systems :).

Alternative: Business Transactions

Business Transactions is a technique for creating a chain of events that includes flows with events to undo actions, like a rollback, but implemented manually.

Let's go to a new example. Imagine you have three services: ms-pedido, ms-estoque and ms-pagamentos.

The user places a new order and the event PEDIDO_CRIADO_EVENTO is sent. ms-estoque and ms-pagamentos process this event. Everything goes well in ms-pagamentos, but ms-estoque finds that it does not have the product in stock. In this case, the rollback could happen this way:

  • ms-estoque sends the event PRODUTO_INDISPONIVEL_EVENTO,
  • ms-pedido and ms-pagamentos read the new event
    • ms-pedido cancels the order
    • ms-pagamentos reverses (refunds) the payment
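The compensation flow above can be sketched roughly as follows; an in-memory event list stands in for the broker, the event and service names follow the example, and everything else is an assumption:

```python
pedidos, pagamentos = {}, {}
estoque = {"produto_x": 0}  # out of stock
eventos = []                # stands in for the message broker

def criar_pedido(pid, produto):
    pedidos[pid] = "CRIADO"
    eventos.append(("PEDIDO_CRIADO_EVENTO", pid, produto))

def ms_pagamentos(evento):
    tipo, pid, _ = evento
    if tipo == "PEDIDO_CRIADO_EVENTO":
        pagamentos[pid] = "COBRADO"       # charge succeeds
    elif tipo == "PRODUTO_INDISPONIVEL_EVENTO":
        pagamentos[pid] = "ESTORNADO"     # compensating action: refund

def ms_estoque(evento):
    tipo, pid, produto = evento
    if tipo == "PEDIDO_CRIADO_EVENTO" and estoque.get(produto, 0) <= 0:
        eventos.append(("PRODUTO_INDISPONIVEL_EVENTO", pid, produto))

def ms_pedido(evento):
    tipo, pid, _ = evento
    if tipo == "PRODUTO_INDISPONIVEL_EVENTO":
        pedidos[pid] = "CANCELADO"        # compensating action: cancel

criar_pedido(1, "produto_x")
i = 0
while i < len(eventos):                   # deliver each event to every service
    ev = eventos[i]
    ms_pagamentos(ev); ms_estoque(ev); ms_pedido(ev)
    i += 1

print(pedidos[1], pagamentos[1])  # CANCELADO ESTORNADO
```

Note that there is no database rollback anywhere: each service undoes its own work by reacting to the failure event, which is exactly the "manual rollback" the technique describes.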

At the end of the day, do I only have two options?

Developers often think in extremes: one big monolith vs. lots of microservices exchanging messages with each other. This is a mistake; we do not have to commit to either extreme.

We can have larger applications in the same system to handle transactional problems that we do not have the time/money to deal with otherwise and, wherever it fits, independent microservices with well-defined functions.

Something important to avoid perpetuating the monolith is to have, from the beginning, an infrastructure prepared to deal with microservices. If you use Docker, Kubernetes, etc. from the start, you and your company will already be prepared for a future architecture that includes microservices.

  • 1

    I guess you know what CAP is, then :) In short: either accept eventual consistency or build a distributed transaction system? I liked the answer because it matches everything I know. I still don't know if it is the answer I am waiting for, because I wanted to see whether there is any solution other than those. Maybe there isn't; then the answer is 100%. Let's wait.

  • Hehehe, I guess I just didn't know the term. Later I will try to improve the answer, but at first these are indeed the two possible options.

  • 2

    I really liked that answer. Very balanced.

  • Hello! I would like to understand the reason for the downvote, so I can improve my answer and make it more useful.

  • @Maniero, I added another solution: Business Transactions. It is a concept built on top of microservices. I don't know if it is what you expect, but I thought it was interesting to add.

  • @Dherik I will look at it calmly later; I am short on time these days and could not even manage the bounty.


13

Undoubtedly, one of the most difficult parts of a microservice architecture is the data. I agree with you when you say it is not well "sold".

Recently I went down the same line of reasoning, which raised these same doubts.

There is a website that helped me understand how microservices can communicate: http://microservices.io/

In fact, you are not far off, but you still need to think about some concepts you did not mention. One of them is immutability. Another is to segment a process into steps instead of all-or-nothing, and to guarantee the success of each step.

Let's focus on the transaction problem. The idea is not to work with two-phase commit. Instead, a solution that has been widely used is messaging, mainly with Apache Kafka.

Take a look at event-oriented microservices with Kafka, mainly at the feature that guarantees an event is processed only once.

Let's go to a more practical example. I will propose one, but if it still does not fit, you can propose another and we will think it through together.

After a sale is completed, you need to generate two boletos and an invoice (which will be issued to Sefaz later). If it is not possible to generate the boletos or the invoice, you must not complete the sale, since you need to guarantee that a sale has its boletos and its invoice. Here we can segment by steps: step 1, complete the sale. After this, the sale microservice sends an event, the boleto microservice is notified and generates the boletos, and so on. To guarantee some consistency, if it is not possible to generate one of the boletos (for example, a PK error), the "complete sale" process must remain incomplete and notify someone so it can be fixed. If there can be no sale without boletos at all, revert the status of the order (compensating the transaction). In other words:

  1. Order 1 changes status to 'processing' and, only after the sale commits, switches to 'step 1 of 3 complete'.
  2. The boletos microservice receives the event and first creates a boleto with a 'processing' status (this guarantees that no other process considers this boleto valid until the whole process is completed).
  3. The boletos microservice tries to generate the second boleto and fails; it must send an event to the sale microservice to take the order out of processing status, so that what happened there can be undone, and report the error. About undoing what was done in the sale microservice: the idea is to work with immutability, that is, instead of changing records that are not the sale itself, create new records, also with a 'pending' status, somewhere else (temporary, maybe) or even change the records (which is usually not advisable).

In the end, if it was technically possible to use immutability and there were errors, you will be left with inconsistent records that can be deleted later.
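Steps 1-3 can be sketched as follows, assuming an in-memory event list in place of a broker and invented status names; the "rollback" is a compensating status change, not a database rollback:

```python
vendas, boletos = {}, {}
eventos = []  # stands in for the message broker

def concluir_venda(venda_id):
    """Step 1: commit the sale locally, then publish the event."""
    vendas[venda_id] = "processando"
    # ... local commit of the sale happens here ...
    vendas[venda_id] = "etapa 1 de 3 concluida"
    eventos.append(("VENDA_CONCLUIDA", venda_id))

def ms_boletos(evento, falha_no_segundo=False):
    """Steps 2-3: create boletos in a 'processando' state first."""
    _, venda_id = evento
    boletos[(venda_id, 1)] = "processando"  # not yet visible as valid
    if falha_no_segundo:                    # e.g. a PK error on boleto 2
        eventos.append(("FALHA_BOLETO", venda_id))
        return
    boletos[(venda_id, 2)] = "processando"
    boletos[(venda_id, 1)] = boletos[(venda_id, 2)] = "valido"
    vendas[venda_id] = "concluida"

def ms_venda_compensa(evento):
    """Sale microservice reacts to the failure: compensate, don't delete."""
    _, venda_id = evento
    vendas[venda_id] = "cancelada"

concluir_venda(10)
ms_boletos(eventos[0], falha_no_segundo=True)
ms_venda_compensa(eventos[1])
print(vendas[10])  # cancelada
```

The first boleto is left behind in the 'processando' state, which is exactly the kind of harmless inconsistent record the text says can be cleaned up later.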

There is still the concurrency problem, which can be dealt with in a few ways (timestamps, versioning, triggers, or at the application level).

The point really is that you cannot have all of C, A and P from the CAP theorem; you have to choose two, there is no way around it. But that does not mean you cannot achieve consistency as good as what ACID offers. It will take more work, no doubt. But you gain the possibility of a much more segmented, polyglot and reactive system, characteristics that are worth a lot over time.

Perhaps the example I used is inadequate, or too simple, or incomplete. But, as I said, tell us your scenario in detail so we can think it through together.

  • That site of Richardson's is very cool; I also use it as a reference. There is a topic there with patterns for this data problem; the section is called "Data management".

  • So migrating to microservices means doing all the work of modeling system failure at every single point, plus a whole web API scheme to keep exchanging messages back and forth?

  • 1

    "... do all the work to model system failure at each and every point..." - Not always. There is something else I forgot to mention, called "Transaction Boundaries". There may be situations where it is impossible to live without strong/true consistency (the 'C' in ACID). In those cases, the only solution I found (and that was recommended to me) is to join those routines into a single microservice. But in general, yes, you need to "model failures", which can be fairly painless if done with planning. By the way, that is another characteristic of microservices: they take planning.

  • "...a whole web API scheme to keep exchanging messages up and down..." - Not necessarily. That site (microservices.io) shows the patterns. One of them the site even calls an anti-pattern: doing this exchange directly in the database via triggers. Ideally, services should be independent, but that does not mean it is the only way.

  • Do you have some code on GitHub demonstrating this solution, or something close to it? It would help a lot to understand and discuss it.

  • @Andrécarvalho I have nothing ready that I can share. I know the Rocketseat folks have done something about it. But, if you want, we can build a PoC together and then put it on GitHub.


4

Migrating to a microservice architecture may not be such a simple task, and therefore it requires a lot of planning and clear motivation. Normally this migration happens when we have a large monolithic application and we see signs in it that the change may be necessary.

Examples of such signs are difficulty scaling and updates to specific points that compromise the entire application. Faced with these problems, restructuring the application into a more granular architecture, such as microservices, may be the most appropriate solution.

  • Hello @Réulison_silva, have you ever ported a large monolithic application in production to a microservices architecture, or was it still under development?
