Here at the company we have PostgreSQL and MySQL databases. The two databases are networked and identical in both structure and data; a "robot" mirrors them every 10 minutes.
So far so good... The problem is that the databases are getting huge (currently 1,170,000 rows in one table), and it has reached the point where MySQL either cannot complete some queries at all or takes several minutes. The two databases are exactly the same (tables, data, structure, relationships, etc.). When we run a query in PostgreSQL the result comes back fast, in no more than 4 seconds, but MySQL cannot handle the same query, which should return the same amount of data.
I am referring to a simple query, with no JOIN. For example:
SELECT idCdr, agente
FROM `cdr`
WHERE agente <> ''
GROUP BY agente
HAVING COUNT( agente ) > 1;
PostgreSQL returns this same statement in about 3 seconds, while MySQL takes a few minutes. I have already run tests in each database's own admin tool (phpMyAdmin and pgAdmin) and the same thing happened.
Does anyone have any idea what might be going on?
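A first diagnostic step would be to compare the query plans the two engines choose. A minimal sketch, assuming the table and column names from the example above (`cdr`, `agente`); note I dropped `idCdr` from the select list so the statement is valid under the GROUP BY:

```sql
-- MySQL: shows whether the query uses an index or a full table scan
-- (look at the "type" and "key" columns; type = ALL means a full scan).
EXPLAIN
SELECT agente
FROM cdr
WHERE agente <> ''
GROUP BY agente
HAVING COUNT(agente) > 1;

-- PostgreSQL: EXPLAIN ANALYZE also executes the query and reports
-- the actual time spent in each node of the plan.
EXPLAIN ANALYZE
SELECT agente
FROM cdr
WHERE agente <> ''
GROUP BY agente
HAVING COUNT(agente) > 1;
```

If MySQL reports a full scan where PostgreSQL uses an index, the difference in plans (not the data) is likely the cause of the slowdown.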
Dude, do you have indexes? When the databases get this big, you need a plan to mitigate the slowness: indexing, partitioning, etc.
– DiegoSantos
I do have indexes, yes, on all tables. Queries that hit a specific index return quickly. But a query that scans the whole table (like the one above) has no chance...
– Alisson Pelizaro
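For what it's worth, a query like the example above does not necessarily require a full table scan: if there is an index on the filtered and grouped column, MySQL can often answer it from the index alone. A hedged sketch, assuming the names from the question (the index name `idx_cdr_agente` is hypothetical):

```sql
-- Hypothetical index on agente. With it, the WHERE and GROUP BY in the
-- example query can be satisfied by scanning only the index (a
-- "covering" index) instead of reading every row of cdr.
CREATE INDEX idx_cdr_agente ON cdr (agente);
```

Whether the optimizer actually uses it depends on the data's cardinality; re-running EXPLAIN before and after creating the index is the way to check.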
This (example) SQL is wrong: you are counting a field and grouping the result by that same field. Is it like this in the real system, or did you just get mixed up when composing the question here on SOpt?
– Andrey
True, I hadn't noticed. But this was just an example I made up to ask the question. I'll fix it there. Thanks, Andrey.
– Alisson Pelizaro