Slow database on a virtual server

I’m just getting started with databases and I notice severe slowness in some of the databases I use on virtual servers.

In my case I connect to a Windows server running Postgres, and fairly often my scripts take too long to run...

I optimized them so there are fewer heavy queries, and I check the hard drives, RAM, and network, which are always fine.

I would like a recommendation for getting this situation under control.

  • What is a heavy query?

  • I think it refers to a query that returns a lot of data

  • A query that builds a .txt file, and that performs updates and inserts on more than 5,000 rows

2 answers


In Postgres, you may need to run the VACUUM and ANALYZE commands periodically to improve database performance. See this documentation:

https://www.postgresql.org/docs/9.5/static/sql-vacuum.html
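For reference, a manual run looks something like this (the table name is only a placeholder for whichever table you suspect is bloated):

```sql
-- Reclaim dead-row space and refresh planner statistics for one table.
VACUUM (VERBOSE, ANALYZE) my_table;

-- Or run it database-wide (can take a while on large databases):
VACUUM ANALYZE;
```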

Another thing worth checking, and this applies to any DBMS, is whether you need to create indexes, usually on tables with thousands or millions of records. Since that depends on analysis, I suggest you also look for articles on the subject, and use this feature sparingly: while SELECTs can get faster, INSERTs and UPDATEs can get slower.
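A minimal sketch of that workflow, assuming a hypothetical `orders` table frequently filtered by `customer_id`:

```sql
-- Before indexing, check the plan; a Seq Scan on a large table is a hint.
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;

-- Create the index (table and column names here are only illustrative).
CREATE INDEX idx_orders_customer_id ON orders (customer_id);

-- Re-run the EXPLAIN ANALYZE and look for an Index Scan in the new plan.
```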

  • Man, thanks for the tips! As I told nunks.lol, I found a query in an application that was hitting the database hard, and in the end we optimized it, which improved the database’s performance by about 35%. Cheers!



It’s a fairly general question. Ideally you should constantly monitor performance statistics for your databases and the infrastructure that supports them, so you can measure the impact of performance drops and correlate them with the events that may have caused them; that way you attack the source of the problem.

What Genivan said in the other answer is true: you need to keep your VACUUMs and ANALYZEs running periodically, but PostgreSQL already does this automatically, and in most cases the default autovacuum settings are sufficient. Any change to those parameters should be made for a reason, and that requires studying the specific case.
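To see what the autovacuum daemon is doing with its current settings, you can inspect the standard catalog views (nothing here is specific to your setup):

```sql
-- Current autovacuum-related parameters and where each value was set.
SELECT name, setting, source
FROM pg_settings
WHERE name LIKE 'autovacuum%';

-- When each table was last vacuumed/analyzed, manually or automatically.
SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
FROM pg_stat_all_tables
ORDER BY last_autovacuum DESC NULLS LAST;
```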

Some suggestions to help you identify what is happening with your database:

  1. I don’t know which version of Postgres you’re using, but it doesn’t hurt to say it: consider upgrading to a newer version.
  2. Learn to use PostgreSQL’s statistics views: pg_stat_activity to see the queries being executed right now; pg_stat_bgwriter to see how the Postgres memory buffer is being used; pg_stat_all_tables (and ..._indexes) to see operation counts, last maintenance runs, and other statistics for your tables and indexes; pg_statio_all_tables (and ..._indexes) to see disk-usage statistics for your tables and indexes; among others.
  3. Learn to use log-analysis tools like pgBadger, and configure Postgres’s log_line_prefix parameter to generate useful metadata for analysis. You can use the log_min_duration_statement parameter to set a query duration you consider acceptable, so that slower executions are automatically recorded in the log.
  4. Enable other log parameters that can shed some light. Some examples: log_checkpoints helps you gauge the health of your buffer; log_lock_waits tells you whether your queries are competing with each other; log_temp_files tells you whether queries are generating many temporary files, which would indicate insufficient or poorly configured memory.
  5. Constantly collect resource-usage data from your server: CPU, memory, I/O, context-switch volume, memory allocated to the filesystem cache... Several tools can do this; my favorite is the good old sysstat, with periodic runs of the sar collector scheduled in crontab. I don’t know whether there is an equivalent for Windows.
  6. Consider using the contrib module pg_stat_statements to collect statistics on your queries in real time.
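As a concrete starting point for items 2 and 6, queries like the following show what is running now and what has been expensive over time (pg_stat_statements must be listed in shared_preload_libraries and created with CREATE EXTENSION before it works):

```sql
-- What is running right now, and for how long (item 2).
SELECT pid, state, now() - query_start AS running_for, query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY running_for DESC;

-- The most time-consuming statements overall (item 6). Note: on
-- Postgres 12 and older the columns are named total_time/mean_time.
SELECT calls, total_exec_time, mean_exec_time, query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```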

With this information in hand, there are several things you may end up discovering about your use of PostgreSQL, for example:

  • The slowness starts after the execution of a query that returns many records, coincides with an increase in I/O volume, and improves over time: it may mean that the offending query takes all of shared_buffers for itself, so all the other queries have to use the disk more intensively to get their work done. It may be necessary to increase the parameter, or to rewrite the query, partition tables, etc. so that it uses less memory.
  • Contention always happens at the same times: there may be some scheduled task that takes all of the database’s resources for itself, or that acquires locks on objects, making your other queries wait for their release.
  • I/O wait time increases significantly without there necessarily being a large workload, and your storage solution is shared with other servers: another of your VMs is using that physical resource, leaving nothing for your PostgreSQL.
  • The volume of temporary files generated by your queries is very high during the slowness, which in turn happens "at the end of the working day", before any purge routines run: the work_mem parameter may be undersized, requiring disk usage to handle large volumes of data when the tables are at their "fullest".
  • Correct queries on well-indexed tables are preferring sequential scans instead of index scans: the planner’s cardinality statistics may be very stale, evidence that not many ANALYZEs are being run on that table.
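To check that last scenario, you can compare the plan with the age of the table’s statistics (the table and column names below are placeholders):

```sql
-- Does the planner pick a Seq Scan where you expected an Index Scan?
EXPLAIN SELECT * FROM my_table WHERE some_indexed_column = 42;

-- How stale are the planner's statistics for that table?
SELECT relname, n_live_tup, n_dead_tup, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'my_table';
```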

There are infinitely many possible causes for database slowness, because many factors are involved. My intention with this answer was, above all, to make clear the importance of collecting performance statistics and getting to know the environment you are working with.

  • Man, thank you so much for the answer and the tips. I discovered that an application was using a query that overloaded the database, though I don’t know how it didn’t "blow" the machine’s memory... Anyway, it was reworked and the database’s performance improved by about 35%. Thanks again.

  • On its own, PostgreSQL will hardly ever blow the server’s memory because of a query, unless it is misconfigured, because whatever exceeds work_mem becomes a temporary file on disk. Performance drops, but execution continues. Glad you found the cause of the problem =)
