Stored Procedure with low performance


I’m having trouble in Oracle with two procedures that run one after the other. In the first procedure, a cursor does an insert into a table. This insert handles approximately 30 million records, which are inserted in a quick and acceptable time.

But when the next procedure is executed, the one that does the update, it is very slow: its last execution took about 38 hours. It has only one update cursor, smaller than the first one, and when I run that cursor separately it completes in less than a minute.

One detail: while the procedures are running and I see that the second procedure is "stuck", if I stop the execution, run an ANALYZE TABLE, and then run only the second procedure again, it runs normally, taking about 1 or 2 hours to complete.

Does anyone have any idea how to help me?

  • Can you share some of the code of your procedures and the related tables? What do their indexes look like?

  • Sharing the code is complicated because it contains the company’s business rules. But basically, the first procedure has an insert cursor and the second an update cursor. The indexes are OK. The problem is that when they run automatically, the second (update) procedure takes forever; if I run each one separately, it runs fine.

  • First: do you really need to do the UPDATE this way? Is there no way to insert the data with the correct values in the first place, or to issue a single general UPDATE? Second: the problem may be cumulative transactional load. Have you thought about updating in blocks, issuing a COMMIT every N records?

  • @Ericosouza Anonymize the code. You don’t need to post your company’s exact code here, just some reduced code that represents what you’re trying to do.

  • @utluiz Yes, I need to update this way, because only some records are updated, according to the business rule and its conditions. A COMMIT is performed every 100,000 records. I’m also using BULK COLLECT (a sketch of this pattern appears after the comments).

  • Are you using a SQL cursor inside PL/SQL? When you do, the information needs to be "filtered" on the SQL side, then copied over to the PL/SQL side, and then the data goes back from PL/SQL to SQL. Also, PL/SQL has to stop and wait for the SQL response, and so on. Even if from a "human" point of view everything seems right, inside the computer this is very heavy!

  • @Peter, one thing I’ve been discussing with co-workers who understand this better than I do (though they don’t know the solution for sure either) is that after the 30 million records are inserted into the table, Oracle does not seem to realize the table now has records in it when the update runs. So if we stop the process, run an ANALYZE TABLE, and run the procedure again, it runs normally and fast.

  • @Erico, another option: Oracle knows it has the data, but it hasn’t prepared the index. So you can imagine you are doing an update on a table whose index is not ready, which takes a long time. A look at the Oracle documentation shows that every insert and update modifies the index. There seems to be a tip indicating that, to speed things up, it may be better to drop the indexes, do the insert/update, and then rebuild the indexes (see the sketch after the comments): "Dropping indexes before a mass update and rebuilding them afterwards can improve performance significantly."

  • @Peter, can you tell me whether this can affect another user in another session, for example one running a SELECT on the table whose index I am about to drop?

  • Probably, but at the same time I think the slow update also affects other users; the difference is only in how long it lasts. You need to test, which gets harder and harder while the system is in use.

  • Have you tried collecting statistics on this table after the massive insert?

  • @Motta, can you tell me how I can do that? Or point me to a reference where I can learn? Thanks.

  • EXECUTE DBMS_STATS.GATHER_TABLE_STATS https://community.oracle.com/thread/639184
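
Following up on the DBMS_STATS suggestion in the last comment, here is a minimal sketch of the call; the schema and table names (MY_SCHEMA, MY_BIG_TABLE) are placeholders for your own objects:

    -- Placeholder schema/table names; substitute your own.
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'MY_SCHEMA',
        tabname          => 'MY_BIG_TABLE',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,  -- let Oracle choose the sample size
        cascade          => TRUE                          -- gather index statistics as well
      );
    END;
    /

Running this right after the mass insert gives the optimizer fresh statistics, which is the same effect the manual ANALYZE TABLE was producing.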

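For the drop-and-rebuild tip quoted in the comments, a sketch of the two usual variants follows; the index and table names are hypothetical:

    -- Hypothetical names: IDX_BIG_TABLE_COL on MY_BIG_TABLE.
    -- Variant A: mark the index unusable, load, then rebuild it.
    ALTER INDEX idx_big_table_col UNUSABLE;
    ALTER SESSION SET skip_unusable_indexes = TRUE;
    -- ... run the mass INSERT/UPDATE here ...
    ALTER INDEX idx_big_table_col REBUILD;

    -- Variant B: drop the index entirely and recreate it afterwards.
    -- DROP INDEX idx_big_table_col;
    -- ... run the mass INSERT/UPDATE here ...
    -- CREATE INDEX idx_big_table_col ON my_big_table (some_column);

As discussed in the comments, while the index is unusable or dropped, other sessions that rely on it will be affected, so this has to be scheduled carefully.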
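
Since the exact procedure code cannot be posted, here is a minimal, anonymized sketch of the batched-update pattern mentioned in the comments (BULK COLLECT with a LIMIT and a COMMIT per batch); the table, columns, and filter are invented for illustration:

    DECLARE
      -- Hypothetical table/columns standing in for the real business rule.
      CURSOR c_rows IS
        SELECT rowid AS rid
          FROM my_big_table
         WHERE status = 'PENDING';

      TYPE t_rid_tab IS TABLE OF ROWID;
      l_rids t_rid_tab;
    BEGIN
      OPEN c_rows;
      LOOP
        FETCH c_rows BULK COLLECT INTO l_rids LIMIT 100000;
        EXIT WHEN l_rids.COUNT = 0;

        FORALL i IN 1 .. l_rids.COUNT
          UPDATE my_big_table
             SET status = 'PROCESSED'
           WHERE rowid = l_rids(i);

        COMMIT;  -- commit every batch of up to 100,000 rows
      END LOOP;
      CLOSE c_rows;
    END;
    /

One caveat: committing inside the loop while the cursor is still open on the same table can raise ORA-01555 (snapshot too old) on long runs, so undo retention needs to be sized accordingly.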

1 answer



The solution I found together with a co-worker, and that worked, improved performance: processing all the records now takes about 1 hour, where before it took about 28 hours.

What we tested, and what worked, was changing the session’s SORT_AREA_SIZE. Oracle’s default is 65536 bytes, according to the Oracle documentation.

We changed it to a value 10 times larger: 655360 bytes.
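
For reference, a sketch of the session-level change described above:

    -- SORT_AREA_SIZE is only honored when the work area policy is MANUAL;
    -- under the default AUTO policy, PGA_AGGREGATE_TARGET governs sort memory.
    ALTER SESSION SET workarea_size_policy = MANUAL;
    ALTER SESSION SET sort_area_size = 655360;  -- 10x the 65536-byte default

Whether the MANUAL policy line is needed depends on the instance configuration; under the default automatic PGA management, SORT_AREA_SIZE by itself has no effect.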
