COMMIT inside a CURSOR


I have a very large database that I need to copy data from. To reduce memory use, I thought of inserting the data in parts: using a CURSOR and paging the data, performing the insert batch by batch.

But for that I would need a COMMIT after each insert, still inside the CURSOR. How can I do this? Or is there some way to reduce the memory cost of this operation without needing this solution?


Edit:

The table I have is a LOG table with a BLOB column holding the message (a compressed XML). The structure is similar to the following:

identificador     INTEGER  -- Grouping key
identificador_log INTEGER  -- Unique identifier (does not follow a sequence)
processamento     DATETIME -- The date the log entry was written
mensagem          BLOB     -- Compressed XML with all the message data

The closest I got to what I want was the following:

DECLARE
  dias INTEGER;
  indice INTEGER;
BEGIN
  -- Difference in days between the oldest and newest log dates
  -- (an ORDER BY here would be invalid with a single-row aggregate SELECT INTO)
  SELECT TRUNC(MAX(l.processamento)) - TRUNC(MIN(l.processamento)) INTO dias
    FROM LOG l;

 FOR indice IN REVERSE 1 .. dias LOOP
    INSERT INTO TEMP_LOG(
      identificador,
      identificador_log,
      processamento,
      mensagem
    )
    SELECT tl.identificador,
           tl.identificador_log,
           tl.processamento,
           l.mensagem           
      FROM (
       SELECT l.identificador,
              MAX(l.processamento) AS processamento,
              MAX(l.identificador_log) KEEP (DENSE_RANK FIRST ORDER BY l.processamento DESC) AS identificador_log
         FROM LOG l
        GROUP BY l.identificador
      ) tl
     INNER JOIN LOG l
        ON tl.identificador     = l.identificador
       AND tl.identificador_log = l.identificador_log
       AND tl.processamento     = l.processamento    
    WHERE TRUNC(l.processamento) = TRUNC(sysdate - indice);

    COMMIT;

  END LOOP;
END;
/

The point is that I have many records and the BLOBs are large, so I can't make this copy all at once; otherwise the memory usage would be exorbitant. That is the problem I want to solve: perform the copy without consuming enough resources to trigger a usage alert.
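For comparison (my note, under the assumption of the LOG/TEMP_LOG structure above): a plain set-based INSERT ... SELECT does not hold the result set in PL/SQL memory; rows are streamed by the engine, and what grows with batch size is mainly undo/redo, which is why committing per slice (per day here) helps. A minimal sketch of one daily slice:

    -- Sketch: copy one day's worth of log rows per statement, then commit.
    -- Rows are streamed; they are not buffered in a PL/SQL collection.
    INSERT INTO TEMP_LOG (identificador, identificador_log, processamento, mensagem)
    SELECT l.identificador, l.identificador_log, l.processamento, l.mensagem
      FROM LOG l
     WHERE l.processamento >= TRUNC(SYSDATE) - 1
       AND l.processamento <  TRUNC(SYSDATE);

    COMMIT;

The half-open range predicate (>= ... AND < ...) keeps the filter usable by an index on processamento.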

  • You can safely use a COMMIT inside a cursor loop, as long as it does not affect the table the cursor itself reads, which is not your case. But your biggest problem is performance, right? You can use BULK COLLECT to improve performance. Have you tried that?

  • @Ricardopunctual I haven't tried it. Can you shed some light on that?

  • Yes, it would be something like this:

    OPEN seu_cursor;
    LOOP
      FETCH seu_cursor
        BULK COLLECT INTO table_para_inserir;
      ...
    END LOOP;

    where the cursor's fields must match "table_para_inserir" (or your collection type)

  • If you have more data we can build a more complete example, but I believe this link already conveys the idea: oracle Bulk Collect

  • @Ricardopontual I added more information

2 answers


Using collections, it is possible to open the cursor and limit the number of records per FETCH inside a loop, so you can COMMIT a bounded number of records in each operation. Using your example and committing every 100 records, it would look something like this:

DECLARE
  cursor c_Log(pIndice integer) is
    SELECT tl.identificador,
           tl.identificador_log,
           tl.processamento,
           l.mensagem
      FROM (SELECT l.identificador,
                   MAX(l.processamento) AS processamento,
                   MAX(l.identificador_log) KEEP(DENSE_RANK FIRST ORDER BY l.processamento DESC) AS identificador_log
              FROM LOG l
             GROUP BY l.identificador) tl
     INNER JOIN LOG l
        ON tl.identificador = l.identificador
       AND tl.identificador_log = l.identificador_log
       AND tl.processamento = l.processamento
     WHERE TRUNC(l.processamento) = TRUNC(sysdate - pIndice);

  type TLog is table of c_Log%rowType;
  vLog TLog;

  dias   INTEGER;
  indice INTEGER;
BEGIN
  -- Diferença de dias entre as datas de logs
  SELECT TRUNC(MAX(l.processamento)) - TRUNC(MIN(l.processamento))
    INTO dias
    FROM LOG l;

  FOR indice IN REVERSE 1 .. dias LOOP
    open c_Log(indice);
    loop
      FETCH c_Log BULK COLLECT
        INTO vLog LIMIT 100;

      -- 1 .. Count is safe when the last fetch returns no rows;
      -- vLog.First .. vLog.Last would be NULL .. NULL and raise an error
      forall i in 1 .. vLog.Count
        INSERT INTO TEMP_LOG
          (identificador, identificador_log, processamento, mensagem)
        values
          (vLog(i).identificador,
           vLog(i).identificador_log,
           vLog(i).processamento,
           vLog(i).mensagem);

      COMMIT;
      EXIT WHEN c_Log%NOTFOUND;
    end loop;
    close c_Log;
  END LOOP;
END;
/


Can the data you need to copy be grouped by some attribute, e.g. some date?

If so, you could perform the inserts grouping the data by month or year:

INSERT INTO table
(column1, column2, ... column_n)
SELECT expression1, expression2, ... expression_n
  FROM source_table
 WHERE EXTRACT(YEAR FROM data) = 2017;

INSERT INTO table
(column1, column2, ... column_n)
SELECT expression1, expression2, ... expression_n
  FROM source_table
 WHERE EXTRACT(YEAR FROM data) = 2018;
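One caveat on that filter (my note, not the answerer's): wrapping the date column in a function such as EXTRACT or TRUNC keeps a plain index on that column from being used. A half-open range predicate selects the same year and remains index-friendly; a sketch using the asker's LOG table:

    -- Same "one year per statement" idea, but with a range predicate
    -- so an index on processamento can still be used.
    INSERT INTO TEMP_LOG (identificador, identificador_log, processamento, mensagem)
    SELECT l.identificador, l.identificador_log, l.processamento, l.mensagem
      FROM LOG l
     WHERE l.processamento >= DATE '2017-01-01'
       AND l.processamento <  DATE '2018-01-01';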
  • It doesn't solve the problem; I would still have to run it by hand. The problem is not grouping, I know how to group in a query. The problem is performing the copy in parts without needing my intervention at every step
