Slowness with GROUP_CONCAT in MySQL


I am developing a system in which, in some parts, when displaying data to the client, I use GROUP_CONCAT in my query, because I need to show an item together with the items related to it on the same line. This worked very well when there were few records (2,000). Now that I have been running a load test with many records (200,000), it takes forever to display the data. Does anyone have any idea what I can do to improve this?

I’ve created an example of what I’m talking about

http://sqlfiddle.com/#!2/06bb38/1

  • Can you get the execution plan of your query and post it here, please?

  • @Jorgecampos I didn't understand what you meant. Will the link to the schema itself do?

  • Have a look at this link: http://dev.mysql.com/doc/refman/5.5/en/execution-plan-information.html and, more specifically, this one: http://dev.mysql.com/doc/refman/5.5/en/explain-output.html

  • I imagine the link http://sqlfiddle.com/#!2/06bb38/1/0 has what you asked for.

  • No, it doesn't. The execution plan is an analysis the database makes of your query, indicating which indexes are used and how the engine is processing the data. For example, if you have a NAME field in the table and that field has no index, a query with a WHERE on the NAME field will do a "full table scan", which means the engine checks every record to select only those that meet the condition. Look at this execution here; this is what you need to run on your database. Then post the result here. http://sqlfiddle.com/#!2/06bb38/17

  • From what I understand, it's what's at the bottom of the page. Other than that I have nothing. But I'm already changing some parts of this system to eliminate this junction table. Thanks!

  • http://sqlfiddle.com/#!2/06bb38/18

  • Put the EXPLAIN command before your SELECT and you will see: on both the aplicacao table and the aspecto table your query is doing a full scan, and that will keep increasing the query's response time as the number of records grows.
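The EXPLAIN suggestion above can be sketched like this (table and column names taken from the question's fiddle); rows showing `type: ALL` in the output indicate a full table scan:

```sql
EXPLAIN
SELECT a.id AS aspecto_id,
       a.name,
       GROUP_CONCAT(ap.name ORDER BY ap.name SEPARATOR '||') AS aplicacao
FROM aspecto a
LEFT JOIN aspecto_aplicacao aa ON aa.aspecto_id = a.id
LEFT JOIN aplicacao ap ON ap.id = aa.aplicacao_id
GROUP BY a.id, a.name
ORDER BY a.name;
```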


1 answer


I ran a test here and saw that the query is doing a select_full_join, i.e. it is selecting over the entire table to produce this result. What I ran is the execution plan that Mr. Jorge Campos was referring to.

Possible solution 1: rewrite your query so it doesn't do the full scan. How? Change those LEFT JOINs:

select
    a.id AS aspecto_id,
    a.name,
    group_concat(ap.name order by ap.name ASC separator '||') AS aplicacao
from aspecto a, aspecto_aplicacao aa, aplicacao ap
where
    aa.aspecto_id = a.id and
    ap.id = aa.aplicacao_id
group by a.id, a.name
order by a.name;

The catch is that it will fetch only complete matches, without any NULLs; if you need the NULL results, a small adjustment to the query is enough.
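If the rows with no related aplicacao are needed, one sketch of that adjustment is simply to keep the LEFT JOINs from the original query (names taken from the question's fiddle); unmatched aspecto rows then come back with NULL in the aplicacao column:

```sql
SELECT a.id AS aspecto_id,
       a.name,
       GROUP_CONCAT(ap.name ORDER BY ap.name SEPARATOR '||') AS aplicacao
FROM aspecto a
LEFT JOIN aspecto_aplicacao aa ON aa.aspecto_id = a.id
LEFT JOIN aplicacao ap ON ap.id = aa.aplicacao_id
GROUP BY a.id, a.name
ORDER BY a.name;
```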

Possible solution 2: create a stored procedure for this query. This yields gains in the vast majority of cases, but of course, if you keep doing the full scan it may only be a stopgap, and it will cost you memory and processing to keep the performance acceptable.

Conclusion: I suggest you make the necessary adjustments to the query and, if it is still slow, move it into a procedure so you're using both solutions. But keep your focus on how you write the conditions: always use the PK, FK, or indexed columns, and since you are ordering by "name" you can also create an index on that attribute to gain performance (a plain BTREE index; a FULLTEXT index is for text search and does not help ORDER BY).
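A sketch of the indexes suggested above, assuming the table names from the question's fiddle (index names are illustrative; adjust to your schema):

```sql
-- Indexes on the junction table's FK columns let MySQL resolve the joins
-- by index lookup instead of a full join.
ALTER TABLE aspecto_aplicacao
    ADD INDEX idx_aa_aspecto (aspecto_id),
    ADD INDEX idx_aa_aplicacao (aplicacao_id);

-- A plain BTREE index on name helps the ORDER BY / GROUP BY.
ALTER TABLE aspecto
    ADD INDEX idx_aspecto_name (name);
```

After adding these, re-running EXPLAIN on the query should show `ref` lookups instead of `ALL` on the joined tables.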

  • I understood what you said, but as you said yourself, in my case I need the results with NULL values, so that won't work. I haven't changed this part much yet, because it would still require a lot of database work, so I will make the appropriate changes later; for now I am working on other features.

  • @Marcelodiniz whenever you run a query fetching all the records in a table, it will take time proportional to the number of records. So I suggest fetching only a portion of them, using filters/conditions on the result, and handling the NULL cases in the application. Tip ^^

  • @Marcelodiniz if you want to keep the full scan, the only measure you can take is an index. It's not ideal, but it will have an effect. http://www.mysqlperformanceblog.com/2012/11/23/full-table-scan-vs-full-index-scan-performance/

  • I've already done that. It was the quickest thing to do and, as you said, it's not ideal, but at least it's something.
