Which is more efficient: one complex query, or processing the results in PHP?

Asked

Viewed 257 times

5

I can do it:

    $sql = 'SELECT * FROM '
         . 'data_companies '
         . 'INNER JOIN data_companies_extra_infos ON '
         . 'data_companies.cod = data_companies_extra_infos.relationship';
    $rs = \MySQL\MySQLSelector::select_by_query_string($sql);

And get the result I want.

Or, I can do it:

    $sql = 'SELECT * FROM '
         . 'data_companies';
    $rs = \MySQL\MySQLSelector::select_by_query_string($sql);

    $sql_ex = 'SELECT * FROM '
            . 'data_companies_extra_infos';
    $rs_ex = \MySQL\MySQLSelector::select_by_query_string($sql_ex);

    $data = [];
    foreach ($rs as $row) {
        foreach ($rs_ex as $row_ex) {
            if ($row['cod'] == $row_ex['relationship']) {
                $data[] = array_merge($row, $row_ex);
            }
        }
    }

And this also gives me the result I want. Well, this is a simple example, and here I believe the first option is better; but what about cases with a complex query involving calculations, 4 or 5 tables, and several filters in the WHERE clause?

Which of the two is better in terms of performance?

In other words:

Is it better to write one giant, complex query, or to run several simple queries and process the results with PHP?

PS: The function select_by_query_string() executes the query, processes and encodes the results, and returns an array.
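For reference, a minimal sketch of what such a wrapper might look like, purely as an assumption based on the description above (using mysqli, with placeholder connection details; the real function also encodes the results):

    namespace MySQL;

    class MySQLSelector
    {
        public static function select_by_query_string($sql)
        {
            // Placeholder credentials; swap in your real connection.
            $db = new \mysqli('localhost', 'user', 'password', 'database');
            $result = $db->query($sql);
            // Fetch every row as an associative array; empty on failure.
            $rows = $result ? $result->fetch_all(MYSQLI_ASSOC) : [];
            $db->close();
            return $rows;
        }
    }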

PS2: Personally I find the second option easier.
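PS3: If the nested loop in the second option ever becomes a concern, the merge can be done in linear time by indexing one result set by its key first. A minimal sketch, assuming PHP 7+ for the ?? operator:

    // Index the extra-info rows by their join key once, so the merge
    // costs O(n + m) instead of O(n × m) nested comparisons.
    $by_relationship = [];
    foreach ($rs_ex as $row_ex) {
        $by_relationship[$row_ex['relationship']][] = $row_ex;
    }

    $data = [];
    foreach ($rs as $row) {
        // Companies without matching extra infos simply merge nothing.
        foreach ($by_relationship[$row['cod']] ?? [] as $row_ex) {
            $data[] = array_merge($row, $row_ex);
        }
    }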

  • Usually, the search work is handed off to the database. I would say you should test, and test, and test, and test, and test....

  • 1

    I'm not really qualified to give a complete answer (my background is OOP rather than databases). However, remember that a DBMS is designed specifically to work with data. Indexes also come into play: the DBMS can use them to find the requested record(s) faster. And if you bring a huge amount of data into the application to process it there, you pay for the time spent transferring that data from the DBMS to the application, plus excessive memory consumption, for something the DBMS could have done itself.

  • 1

    I’d say it’s "relative".

  • You can test it by timing both approaches with a stopwatch. I would like to write up a proper answer, but I can't right now.

  • 1

    One detail that was left out: your second example is one of the standard solutions to the N+1 problem. Removing the JOIN and running two separate queries transfers less data between the DBMS and the application. But in your case there are no filters; my comment above still applies once filters are added (see the sketch just after these comments).

  • @Gabrielheming that's exactly why I prefer the second approach: it goes easier on my database, and my processing server is considerably more powerful than my DB server. Also, my system performs each action on two separate databases: one holding the current state (for operations) and a cumulative one holding every value each field has ever had. I think I'll post a reformulated question.
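To illustrate the N+1 remark above: a minimal sketch of the two-query pattern with a filter. The active = 1 column is made up for the example, and intval() is only a crude stand-in for real parameter binding, which the wrapper as described does not offer:

    // First query: only the companies that pass some filter
    // ("active = 1" is a made-up example column).
    $rs = \MySQL\MySQLSelector::select_by_query_string(
        'SELECT * FROM data_companies WHERE active = 1'
    );

    // Second query: only the related rows, restricted to the keys we
    // actually got back, so less data crosses the wire.
    $cods = array_map('intval', array_column($rs, 'cod'));
    $rs_ex = $cods
        ? \MySQL\MySQLSelector::select_by_query_string(
            'SELECT * FROM data_companies_extra_infos '
            . 'WHERE relationship IN (' . implode(',', $cods) . ')'
          )
        : [];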


1 answer

3


I believe that the answer will vary according to the scenario.

Before executing a query, the DBMS tries to optimize it by building an execution plan.

I see you are using MySQL. One way to test the performance of your queries is the EXPLAIN feature of MySQL Workbench. It is available in the Query menu, or via the shortcut Ctrl + Alt + X.

Through this feature you can see the execution plan the DBMS chose and the cost of each step.

With that you can decide, scenario by scenario, the best way to write your query, whether you need to create indexes, and so on.
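EXPLAIN is not exclusive to Workbench: it is an ordinary statement that returns a result set, so, as a sketch, the question's own wrapper could fetch the plan like any other query:

    // EXPLAIN returns rows, one per table touched by the query.
    $plan = \MySQL\MySQLSelector::select_by_query_string(
        'EXPLAIN SELECT * FROM data_companies '
        . 'INNER JOIN data_companies_extra_infos '
        . 'ON data_companies.cod = data_companies_extra_infos.relationship'
    );

    foreach ($plan as $step) {
        // An access type of "ALL" with key = NULL is a full table scan;
        // an index on the join column usually fixes it, e.g.
        // CREATE INDEX idx_rel ON data_companies_extra_infos (relationship)
        echo $step['table'] . ': type=' . $step['type']
            . ', key=' . ($step['key'] ?? 'NULL') . PHP_EOL;
    }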

(Screenshot: the EXPLAIN execution-plan view)
