Shruthi Y S
New User
Joined: 25 Mar 2015 Posts: 2 Location: India
Hi,
We have a scenario where we have to read some 800,000 records from a table and, for each record, delete the information related to it from around 150 tables.
As of now, our COBOL-DB2 program uses a cursor to read those 800,000 records, and after every fetch we do the processing for that record. But this is taking forever to run (a few hours) and is not completing.
So my question is: instead of the cursor, what if we directly join the table we have to delete from with the table that has the 800,000 records and delete all matching information in a single statement, then repeat this for the remaining 149 tables? Will this help with performance in any way?
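Something like this is what we have in mind, one set-based delete per dependent table (an untested sketch; DRIVER_TABLE, CHILD_TABLE, and DRIVER_KEY are placeholder names standing for our 800,000-row table, one of the 150 related tables, and the common key):

Code:
        EXEC SQL
            DELETE FROM CHILD_TABLE C
            WHERE EXISTS
                (SELECT 1 FROM DRIVER_TABLE D
                 WHERE D.DRIVER_KEY = C.DRIVER_KEY)
        END-EXEC
    * ...and the same statement repeated for the other 149 tables.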
Note: We tried to analyse which table accesses were taking a long time and tried running without those few slow accesses. We also verified that all the indexes are in place. But it is the sheer amount of data that is the problem for now.
Rohit Umarjikar
Global Moderator
Joined: 21 Sep 2010 Posts: 3076 Location: NYC,USA
800k is not too many rows anyway. Lower environments always have problems like this because they are never exactly identical to production (it's not possible). Talk to the DBA at your site.
I can suggest these options:
1. Try a multi-row fetch and delete (see the first sketch after this list).
2. Do a multi-row fetch and insert the keys into a GTT, then use that GTT (with keys) as a join to all the other tables and delete from them (second sketch below).
3. You can do a LOAD REPLACE on all these tables, excluding the rows from the main cursor, but this is too much work.
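A minimal sketch of option 1, assuming DB2 for z/OS rowset cursors; DRIVER_TABLE, CHILD_TABLE, DRIVER_KEY, and the WS-* names are placeholders, and the key is assumed to be CHAR(10):

Code:
    01  WS-KEY-TAB.
        05  WS-KEY          PIC X(10) OCCURS 100 TIMES.
    01  WS-CUR-KEY          PIC X(10).
    01  WS-ROWS-FETCHED     PIC S9(9) COMP.
    01  WS-I                PIC S9(9) COMP.

        EXEC SQL
            DECLARE KEYCSR CURSOR WITH ROWSET POSITIONING FOR
            SELECT DRIVER_KEY FROM DRIVER_TABLE
        END-EXEC

        EXEC SQL OPEN KEYCSR END-EXEC

    * Repeat this fetch until SQLCODE = +100
        EXEC SQL
            FETCH NEXT ROWSET FROM KEYCSR
            FOR 100 ROWS INTO :WS-KEY
        END-EXEC

        EXEC SQL
            GET DIAGNOSTICS :WS-ROWS-FETCHED = ROW_COUNT
        END-EXEC

    * One delete per fetched key; repeat for each dependent table
        PERFORM VARYING WS-I FROM 1 BY 1
                UNTIL WS-I > WS-ROWS-FETCHED
            MOVE WS-KEY (WS-I) TO WS-CUR-KEY
            EXEC SQL
                DELETE FROM CHILD_TABLE
                WHERE DRIVER_KEY = :WS-CUR-KEY
            END-EXEC
        END-PERFORM

This cuts the FETCH overhead (one call per 100 rows instead of one per row); the deletes themselves still run once per key.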
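And a rough sketch of option 2, with the same placeholder names; SESSION is the mandatory qualifier for a declared GTT in DB2, and for brevity the GTT is populated here with a single INSERT ... SELECT rather than the multi-row fetch/insert described above:

Code:
        EXEC SQL
            DECLARE GLOBAL TEMPORARY TABLE SESSION.DRIVER_KEYS
                (DRIVER_KEY CHAR(10) NOT NULL)
            ON COMMIT PRESERVE ROWS
        END-EXEC

        EXEC SQL
            INSERT INTO SESSION.DRIVER_KEYS
            SELECT DRIVER_KEY FROM DRIVER_TABLE
        END-EXEC

    * Then one set-based delete per dependent table:
        EXEC SQL
            DELETE FROM CHILD_TABLE C
            WHERE EXISTS
                (SELECT 1 FROM SESSION.DRIVER_KEYS K
                 WHERE K.DRIVER_KEY = C.DRIVER_KEY)
        END-EXEC

The point of the GTT is that DB2 can join each dependent table to the key list once, instead of the program issuing 800k separate deletes per table.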