Dealing with a high processing load on a table


What are the possible database modeling strategies for a scenario where a specific table receives a gigantic load of INSERT, UPDATE, and DELETE statements, in addition to heavy processing queries that pull thousands of records at a time?

Since the table tends to get bigger and bigger, and simultaneous large-scale requests can make the database temporarily inaccessible (causing constant timeouts), is there any way to deal with this kind of scenario, whether in the application, by modifying the table structure, or in the ER model?

  • 1 - Do not use a relational DB, but NoSQL (taking into account the gigantic load, heavy processing, and thousands of records).

  • @rubStackOverflow, the question is about possible strategies using an ER model with SQL Server. Unfortunately, switching the database technology to NoSQL is not an option today, for reasons of force majeure. =/

  • Got it, @Vinícius

  • Can't you solve the timeouts by tuning SQL Server's memory? I had a problem with constant timeouts and, at the time, that was what solved it. I left 2 GB for the operating system and the rest for the database. A sketch of that setting follows below.
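For reference, the memory ceiling mentioned in the comment is set with sp_configure; a minimal sketch, assuming a machine with 16 GB of RAM (adjust the value to your hardware):

```sql
-- 'max server memory' is in MB; here ~14 GB for SQL Server,
-- leaving ~2 GB for the operating system (16 GB machine assumed).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory', 14336;
RECONFIGURE;
```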

1 answer


Good morning Vinícius, I am no database expert, but I will try to help by pointing out some actions that I believe will help you.

1) Consider partitioning your table.

If your table has a very large volume of records, in the millions, consider partitioning it. That way you will not need to scan the entire table every time you run a SELECT; only the partition where the record lives is scanned, as long as the query filters on the partitioning column. You didn't say which version of SQL Server you are on, but here is a link that may help (a sketch follows below) -> http://www.devmedia.com.br/particionamento-de-tabelas-no-sql-server-2008-r2/24237
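A minimal sketch of range partitioning by date; the table and column names (Orders, OrderDate) are hypothetical:

```sql
-- Partition boundaries: one partition per year.
CREATE PARTITION FUNCTION pfOrdersByYear (DATE)
AS RANGE RIGHT FOR VALUES ('2016-01-01', '2017-01-01');

-- Map every partition to the PRIMARY filegroup, for simplicity.
CREATE PARTITION SCHEME psOrdersByYear
AS PARTITION pfOrdersByYear ALL TO ([PRIMARY]);

-- The table is created directly on the partition scheme.
CREATE TABLE Orders (
    OrderId   INT IDENTITY  NOT NULL,
    OrderDate DATE          NOT NULL,
    Amount    DECIMAL(10,2) NOT NULL
) ON psOrdersByYear (OrderDate);
```

A SELECT that filters on OrderDate then touches only the matching partition instead of the whole table.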

2) Consider duplicating some information to reduce contention.

For example, if you keep the same information in more than one table, you don't always need to go to the same table to get it; you can read it from another table depending on your need. This also reduces the number of INNER JOINs. But remember that when you duplicate information, you must be careful when updating and deleting those records: most of the time you will have to change it in every table where you duplicated it (see the sketch below). A little more about concurrency -> http://www.devmedia.com.br/controle-de-concorrencia-entre-transacoes-em-bancos-de-dados/27756
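A hypothetical sketch of that trade-off, assuming Customers and Orders tables: the customer's name is copied into Orders so that reports on Orders avoid a JOIN, at the cost of a double write on every change:

```sql
-- Copy the name into Orders once...
ALTER TABLE Orders ADD CustomerName NVARCHAR(100) NULL;

UPDATE o
SET    o.CustomerName = c.Name
FROM   Orders o
JOIN   Customers c ON c.CustomerId = o.CustomerId;

-- ...and from then on every change must touch both tables.
UPDATE Customers SET Name = N'New Name' WHERE CustomerId = 42;
UPDATE Orders    SET CustomerName = N'New Name' WHERE CustomerId = 42;
```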

3) Consider scheduling processes.

SQL Server allows processes (jobs) to be scheduled through SQL Server Agent. Check whether any routine can be scheduled to run at off-peak hours, in the middle of the night for example (a sketch follows below). https://fabrizziocaputo.wordpress.com/2011/09/01/sql-server-basico-4-agendando-um-job-no-sql-server/
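A sketch of scheduling a nightly job with the msdb stored procedures; the job name and the procedure it runs (usp_RebuildDailyTotals) are hypothetical:

```sql
USE msdb;
EXEC sp_add_job @job_name = N'NightlyAggregation';
EXEC sp_add_jobstep
     @job_name  = N'NightlyAggregation',
     @step_name = N'Run aggregation',
     @command   = N'EXEC dbo.usp_RebuildDailyTotals;';
EXEC sp_add_schedule
     @schedule_name     = N'Every day at 3 AM',
     @freq_type         = 4,       -- daily
     @freq_interval     = 1,       -- every 1 day
     @active_start_time = 030000;  -- 03:00:00 (HHMMSS)
EXEC sp_attach_schedule
     @job_name      = N'NightlyAggregation',
     @schedule_name = N'Every day at 3 AM';
-- Register the job on the local server so the Agent runs it.
EXEC sp_add_jobserver @job_name = N'NightlyAggregation';
```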

4) Consider creating queues.

You create a processing-queue scheme in your application. For example: if a report is too heavy to generate and several people request it at the same time, make those requests fall into a queue and be processed one by one (see the sketch below). This way you don't overload the database just to produce that report.
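One simple way to back such a queue, sketched here as a plain table (names are hypothetical); the READPAST hint lets several workers dequeue without blocking each other:

```sql
CREATE TABLE ReportQueue (
    QueueId     INT IDENTITY PRIMARY KEY,
    RequestedBy NVARCHAR(50) NOT NULL,
    Status      TINYINT      NOT NULL DEFAULT 0,  -- 0 = pending, 1 = done
    EnqueuedAt  DATETIME2    NOT NULL DEFAULT SYSUTCDATETIME()
);

-- Dequeue one pending item; READPAST skips rows locked by other
-- workers, so each request is processed exactly once.
;WITH next_item AS (
    SELECT TOP (1) *
    FROM   ReportQueue WITH (ROWLOCK, READPAST, UPDLOCK)
    WHERE  Status = 0
    ORDER BY QueueId
)
UPDATE next_item
SET    Status = 1
OUTPUT inserted.QueueId, inserted.RequestedBy;
```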

5) Consider creating indexes.

Indexes let SQL Server seek directly to the rows a query needs instead of scanning the whole table; a sketch follows below. A little more about indexes in SQL Server -> http://www.devmedia.com.br/indices-no-sql-server/18353
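A minimal, hypothetical example on the same Orders table used above: an index on the column the heavy queries filter by, with INCLUDE covering the column they read:

```sql
-- Queries filtering on OrderDate can now seek this index and read
-- Amount from it directly, without touching the base table.
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
ON Orders (OrderDate)
INCLUDE (Amount);
```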

Like I said, I'm no database expert, but maybe these tips will help you.

I hope I’ve helped!
