- What would be a suitable autovacuum configuration for this table?
We have a "sequencia" table that does what a Postgres sequence would do; perhaps because other database systems are also used, the development team opted for this approach, which is to use a table in the database to store the last id inserted into the other tables.
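For illustration, a minimal sketch of how this pattern is usually implemented against such a table (the table and column names are taken from the sample below; transaction handling is application-specific and omitted):

UPDATE sequencia
   SET no_sequencia = no_sequencia + 1
 WHERE id_entidade = 'CLIENTE'   -- the entity whose next id is needed
RETURNING no_sequencia;          -- the value just reserved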
An example of the sequence table:
id_entidade no_sequencia
------------------------------- ---------------
CLIENTEGESTAO 33325146
VEICULOREVISAO 14541831
DADOSBOOK 11627492
HISTORICOQUILOMETRAGEM 9992701
RESULTADOINTEGRACAO 4089476
CAMPOSREGISTROINTERFACE 2892807
REGISTROINTEGRACAO 1731981
ACAO 1585661
ENCAMINHAMENTO 1263190
REQUISICAOOS 948967
MOTIVOEVENTO 916872
EVENTO 818115
VEICULO -35041
CLIENTE -64414
The next ID of any record in the database will be no_sequencia + 1, so this table receives a high volume of UPDATEs, about 100 per minute, and very rarely an INSERT, maybe one per year or less. My doubt is about how the autovacuum trigger is calculated for it; today I use the following configuration:
`autovacuum_vacuum_scale_factor` = 0.20
`autovacuum_vacuum_threshold` = 50
When it reaches 51 updated (dead) tuples, the table should be cleaned as soon as an autovacuum worker is available. In a more general context (ignoring this table), autovacuum runs every 30 minutes or so:
relname                   n_live_tup  % live  n_dead_tup  Size        Records     Configuration                                                                                                                                             last_autovacuum      autovacuum_count
------------------------- ----------- ------- ------------ ---------- ----------- ----------------------------------------------------------------------------------------------------------------------------------------- ------------------- ------------------
sequencia 191 65 99 752 KB 191 {autovacuum_vacuum_scale_factor=0.20,autovacuum_vacuum_threshold=50} 17/01/2018 19:41:31 13806
campanha 3541 98 67 3400 KB 3821 {autovacuum_vacuum_scale_factor=0.05,autovacuum_vacuum_threshold=100} 17/01/2018 19:40:49 194
sms 2071256 99 1026 773160 KB 2071070 {autovacuum_vacuum_scale_factor=0,autovacuum_vacuum_threshold=10000,autovacuum_analyze_scale_factor=0,autovacuum_analyze_threshold=20000} 17/01/2018 19:24:31 20
campanhaexecucao 1948 96 61 136 KB 1980 {autovacuum_vacuum_scale_factor=0.05,autovacuum_vacuum_threshold=100} 17/01/2018 19:05:48 10
telefonema 8957384 99 1792 1086424 KB 8957050 {autovacuum_vacuum_scale_factor=0,autovacuum_vacuum_threshold=10000,autovacuum_analyze_scale_factor=0,autovacuum_analyze_threshold=20000} 17/01/2018 18:22:45 17
email 1059515 99 3776 945368 KB 1056970 {autovacuum_vacuum_scale_factor=0,autovacuum_vacuum_threshold=10000,autovacuum_analyze_scale_factor=0,autovacuum_analyze_threshold=20000} 17/01/2018 18:11:53 3
oportunidade 698662 99 317 105992 KB 698440 {autovacuum_vacuum_scale_factor=0,autovacuum_vacuum_threshold=10000,autovacuum_analyze_scale_factor=0,autovacuum_analyze_threshold=20000} 17/01/2018 18:06:54 1
interesse 698254 99 550 93936 KB 698043 {autovacuum_vacuum_scale_factor=0,autovacuum_vacuum_threshold=10000,autovacuum_analyze_scale_factor=0,autovacuum_analyze_threshold=20000} 17/01/2018 18:06:46 1
encaminhamento 8277408 99 1998 1187264 KB 8277230 {autovacuum_vacuum_scale_factor=0,autovacuum_vacuum_threshold=10000,autovacuum_analyze_scale_factor=0,autovacuum_analyze_threshold=20000} 17/01/2018 18:03:24 39
evento 3689441 99 5672 2376448 KB 3688780 {autovacuum_vacuum_scale_factor=0,autovacuum_vacuum_threshold=10000,autovacuum_analyze_scale_factor=0,autovacuum_analyze_threshold=20000} 17/01/2018 17:06:11 36
sessao 1736 94 102 216 KB 1736 {autovacuum_vacuum_scale_factor=0.05,autovacuum_vacuum_threshold=100} 17/01/2018 16:22:44 18
acao 14931180 99 6635 4250392 KB 14924700 {autovacuum_vacuum_scale_factor=0,autovacuum_vacuum_threshold=25000,autovacuum_analyze_scale_factor=0,autovacuum_analyze_threshold=50000} 17/01/2018 15:25:32 10
parametroempresa 349 81 77 48 KB 348 {autovacuum_vacuum_scale_factor=0.20,autovacuum_vacuum_threshold=50} 17/01/2018 14:22:41 22
tipoevento 4565 99 19 3656 KB 4565 {autovacuum_vacuum_scale_factor=0.05,autovacuum_vacuum_threshold=100} 17/01/2018 14:08:40 3
usuariotipoevento 64254 98 992 4544 KB 64270 {autovacuum_vacuum_scale_factor=0.05,autovacuum_vacuum_threshold=100} 17/01/2018 14:03:41 5
registrointerface 4050719 99 1194 549800 KB 4049100 {autovacuum_vacuum_scale_factor=0,autovacuum_vacuum_threshold=10000,autovacuum_analyze_scale_factor=0,autovacuum_analyze_threshold=20000} 17/01/2018 13:30:35 2
regraleadmontadora 117 90 13 48 KB 117 {autovacuum_vacuum_scale_factor=0.20,autovacuum_vacuum_threshold=50} 17/01/2018 11:58:37 6
usuarioformacontato 78593 98 1389 5944 KB 79230 {autovacuum_vacuum_scale_factor=0.05,autovacuum_vacuum_threshold=100} 17/01/2018 11:22:39 3
arquivointegracao 1843 93 134 296 KB 1787 {autovacuum_vacuum_scale_factor=0.05,autovacuum_vacuum_threshold=100} 17/01/2018 02:00:50 12
In the example above we can see that n_dead_tup has passed 51, because autovacuum sleeps for 60 seconds (autovacuum_naptime) between runs.
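For reference, the point at which autovacuum actually triggers for a table is roughly threshold + scale_factor * (estimated live tuples). A sketch of a query to watch dead tuples against that trigger point, using the standard pg_stat_user_tables view (the 50 / 0.20 values below are the settings applied to sequencia; adjust as needed):

SELECT relname,
       n_live_tup,
       n_dead_tup,
       50 + 0.20 * n_live_tup AS vacuum_trigger_point,  -- threshold + scale_factor * live tuples
       last_autovacuum,
       autovacuum_count
  FROM pg_stat_user_tables
 ORDER BY last_autovacuum DESC NULLS LAST;

The global autovacuum settings currently in effect: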
name                                  setting      unit
------------------------------------  -----------  -----
autovacuum                            on
autovacuum_analyze_scale_factor       0.1
autovacuum_analyze_threshold          50
autovacuum_freeze_max_age             200000000
autovacuum_max_workers                3
autovacuum_multixact_freeze_max_age   400000000
autovacuum_naptime                    60           s
autovacuum_vacuum_cost_delay          20           ms
autovacuum_vacuum_cost_limit          -1
autovacuum_vacuum_scale_factor        0.2
autovacuum_vacuum_threshold           50
autovacuum_work_mem                   -1           kB
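For reference, a listing like the one above can be reproduced with a query along these lines:

SELECT name, setting, unit
  FROM pg_settings
 WHERE name LIKE 'autovacuum%'
 ORDER BY name;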
- What would be a suitable autovacuum configuration for this table?
If you need more information, please let me know.
In my view, it will depend on the concurrency on your table. n_dead_tup > 50 is the default trigger; I would try autovacuum_vacuum_threshold = 55 and autovacuum_vacuum_scale_factor = 0.6, so it starts once roughly 60% of the table is dead, plus autovacuum_vacuum_cost_limit = 1, since with autovacuum_max_workers = 3 it makes no sense to leave it at -1. When you perform an UPDATE, is your record updated in place? No. What actually happens is that another tuple is inserted into your table, with the same data as the original tuple, changing only what you asked for in the UPDATE. But my tip is to keep experimenting, taking the table's live tuples (n_live_tup) into account. (A sketch of the per-table syntax follows below.)
– alxwca
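For illustration, the per-table storage-parameter syntax for values like the ones suggested in the comment above would look roughly like this (the numbers are the commenter's suggestion, not a verified recommendation):

ALTER TABLE sequencia SET (
    autovacuum_vacuum_threshold    = 55,
    autovacuum_vacuum_scale_factor = 0.6
);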
The title is actually Postgre, as I had edited it. The tool's name is Postgresql; the abbreviation without the SQL is just Postgre.
– David Alves
@Davidalves Postgres is a name often used to refer to this tool. I changed the title, but I see no communication problem in using Postgre, Postgres, or Postgresql. Thanks for your attention.
– Tiago Oliveira de Freitas