I am facing a performance issue in a COBOL-DB2 program. The table is a partitioned table built with surrogate keys, holds about 18 million records per month, and the program accesses the data for each month.
My job runs for almost 6-8 hours to fetch those records.
When I looked at the checkpoint stats, it showed fewer than about 20,000 records fetched per 3 minutes.
I am using a simple SQL query selecting all the rows, and in the WHERE clause I am using the surrogate keys coming from an input file as host variables.
All of the keys are indexed columns.
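For reference, a minimal sketch of the kind of cursor being described (the table name, column names, and host variables here are invented for illustration, not taken from the actual program):

```cobol
      * Hypothetical cursor: the surrogate-key values read from
      * the input file are moved into the host variables before
      * the cursor is opened.
           EXEC SQL
               DECLARE C1 CURSOR WITH HOLD FOR
               SELECT COL1, COL2, COL3
                 FROM MY_PART_TABLE
                WHERE SURR_KEY1 = :WS-SURR-KEY1
                  AND SURR_KEY2 = :WS-SURR-KEY2
           END-EXEC.
```

Running EXPLAIN on the statement would confirm whether the optimizer is actually using the index (matching index scan) or falling back to a scan.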
Note: the predecessor job does heavy updates and insertions on the table. Is there a chance this is lowering performance?
Please let me know your suggestions on how to reduce the run time and CPU time of the job.
How much CPU does the problem process require?
How many rows in the table match the most restrictive key value? It may be faster to unload the rows using the key that most limits the "found set" and then filter the unneeded data out of that extract using the sort. . .
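The unload-then-filter approach above could be sketched with DFSORT control cards (the field positions, lengths, and the comparison value here are assumptions for illustration, since the record layout isn't shown):

```
//SYSIN    DD *
  SORT FIELDS=(1,10,CH,A)
* Keep only records whose key field (assumed at position 11,
* length 8) matches the wanted value; everything else is dropped.
  INCLUDE COND=(11,8,CH,EQ,C'KEYVALUE')
/*
```

Filtering a sequential extract this way avoids repeated indexed probes into a table that is being heavily updated.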
I presume that you are fetching the rows in your job using a cursor with the WITH HOLD option and, as you said, the columns used in the WHERE clause are part of an index.
Now, as the result set of the cursor is huge, I "suspect" that opening the cursor is taking too much time. One way to reduce this time is to limit the cursor's result set by using some hard-coded values of the partitioning keys. Once the work for one set of key values is over, take a checkpoint and then re-open the cursor with another set of partitioning keys. This solution is applicable only if you know the possible values of the partitioning keys.
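As a sketch of that driver logic (the table of partition-key values, paragraph names, and counters are all invented for illustration), the loop might look like:

```cobol
      * Process one partitioning-key value at a time; the cursor
      * C1 is assumed to use :WS-CURR-PART-KEY in its WHERE clause,
      * so each OPEN sees only one partition's worth of rows.
           PERFORM VARYING WS-PART-IX FROM 1 BY 1
                   UNTIL WS-PART-IX > WS-PART-COUNT
               MOVE WS-PART-KEY (WS-PART-IX) TO WS-CURR-PART-KEY
               EXEC SQL OPEN C1 END-EXEC
               PERFORM FETCH-AND-PROCESS
                   UNTIL SQLCODE = +100
               EXEC SQL CLOSE C1 END-EXEC
      *        Commit the unit of work and record restart data
      *        before moving on to the next key value.
               PERFORM TAKE-CHECKPOINT
           END-PERFORM.
```

Closing the cursor and checkpointing between key values also keeps the unit of work small, which matters when a predecessor job is doing heavy updates on the same table.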