Multi-layered development and business rule

I work on a team developing an ERP and am responsible for reports. One of them runs a SELECT against the database that, in testing, took more than 2 hours to run, which led to errors and a big question: "who was doing all the processing?" And that is where the problem began, because the processing is not done by the application server (web server) but by the database (in this case PostgreSQL).

Is it correct to perform all possible calculations directly in the SQL query (given that all the calculation functions live in the database itself), or should I fetch only the raw data from the database and perform the whole operation in code at the server layer?

References would be greatly appreciated.

1 answer



It depends. If the data is all in the database and it makes sense to run the computation there because it produces smaller result sets and is relatively simple to express, then do it there. Otherwise, if what matters most is fetching the raw data, all of which will be needed in the report, and the calculations are secondary, then do it in the application.
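To illustrate the first case, here is a minimal sketch of pushing the work into SQL rather than pulling every row into the application (table and column names are hypothetical, not taken from the question):

```sql
-- Aggregating in the database: the server returns one summary row
-- per product instead of shipping every stock movement over the wire.
SELECT p.product_id,
       p.description,
       SUM(m.quantity) AS quantity_on_hand
FROM   stock_movement m
JOIN   product p ON p.product_id = m.product_id
GROUP  BY p.product_id, p.description;
```

The alternative would be `SELECT product_id, quantity FROM stock_movement` and summing in application code, which moves far more data but keeps the logic in one place; which is cheaper depends on volumes and on how well the database can use its indexes.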

There is no magic, no cookbook recipe. I know that nowadays many people who work in IT think you just have to memorize a few rules and a few solutions and everything will work out. If that were true, we would soon be out of jobs. Knowing the right answer here takes a lot of experience, deep knowledge of computing, and an understanding of every detail of the context to give a definitive answer. In fact, not even definitive: it only holds as long as the same criteria and volumes hold.

It is not a question of where you do it, but of how you do it. It is almost certain that it should not take 2 hours; it is probably being done wrong, and the right solution is not to decide where to do it but to make it take less time. There are a number of techniques to reduce this time: you need to analyze the query (or queries) to find out where the bottlenecks are, and then, understanding how the database works, change the queries and the settings to meet the demands of this case without disturbing other cases. Is it simple? Of course not; that is why we have jobs and earn well.

From what you are saying, it seems it will not even make a difference where you run it, but that may be misguided speculation on my part given the lack of information in the question.

Just remember that the calculation itself is usually a tiny part of the processing; the data-access pattern is usually much worse, especially in reports.

In some cases the available hardware is not suitable and causes extreme slowness.

There are scientific methods of observing occurrences to isolate the problem, eliminate interference, focus on the specific issue, and find the root cause, but that is a general skill, not specific to development or infrastructure operation.

In some cases the existing modeling is at fault. What happens most often is that people do not understand the power of indexes (in other cases the problem is their overuse).
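As a sketch of what "the power of the index" means in practice (again with hypothetical names): an index that matches the report's filter and join columns lets PostgreSQL avoid scanning the whole table.

```sql
-- A composite index matching the columns the report filters and
-- joins on; without it, each lookup may become a sequential scan.
CREATE INDEX idx_stock_movement_branch_product
    ON stock_movement (branch_id, product_id);
```

The flip side mentioned above is overuse: every extra index slows down INSERT/UPDATE and consumes space, so indexes should be created to serve actual query patterns, not added speculatively to every column.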

  • I completely agree, and sorry for the lack of information. These really are not simple questions, and the eventual solution is probably temporary. The case: I am building a stock-position report validating batch, product, warehouse, and stock, all branching into N tables. I managed to find unnecessary information among the joins between the tables, which reduced the time, but not enough. My question was really about the processing, since for some reason the query being executed (regardless of how long it takes) occupies an entire processor core (on a dedicated server). Thanks.

  • Try running EXPLAIN ANALYZE on your query to check for bottlenecks and possible missing indexes on your tables.
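A minimal sketch of that suggestion, using the same hypothetical report shape as above: prefix the slow query with EXPLAIN ANALYZE to get the real execution plan with actual row counts and per-node timings.

```sql
-- ANALYZE actually executes the query and reports real timings;
-- BUFFERS additionally shows how many pages were read from cache/disk.
EXPLAIN (ANALYZE, BUFFERS)
SELECT p.product_id, SUM(m.quantity)
FROM   stock_movement m
JOIN   product p ON p.product_id = m.product_id
GROUP  BY p.product_id;
```

In the output, a `Seq Scan` over a large table under a filter or join node is the usual sign of a missing index; nodes where actual rows differ wildly from estimated rows point at stale statistics (`ANALYZE stock_movement;` refreshes them).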
