jayreddy
New User
Joined: 07 Apr 2008 Posts: 10 Location: Mumbai
Hi,
I am facing a performance issue in a COBOL-DB2 program. The table is a partitioned table built with surrogate keys; it holds about 18 million records per month, and the program accesses the data for each month.
The job runs for almost 6-8 hours to fetch those records. The checkpoint statistics show fewer than roughly 20,000 records fetched per 3 minutes.
I am using a simple SQL query that selects all the rows; in the WHERE clause I use the surrogate keys, coming from an input file, as host variables.
All the key columns are indexed.
Note: the predecessor job does heavy updates and inserts on the table. Could that be lowering performance?
Please let me know your suggestions on how to reduce the run time and CPU time of the job.
Regards,
Jayreddy
dbzTHEdinosauer
Global Moderator
Joined: 20 Oct 2006 Posts: 6966 Location: porcelain throne
Gee, with all the information you have given, I would start with: flip a coin.
What are the results of EXPLAIN?
Have you tried a REORG before your job, i.e. after the predecessor job?
Try an unload, then process the QSAM file instead, and see the difference in performance.
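To see what access path DB2 has actually chosen, the slow statement can be EXPLAINed into a plan table and the result read back. A minimal sketch, using the standard PLAN_TABLE columns; the table, column, and host-variable names and the query number are placeholders, not from the original post:

```sql
-- Capture the access path for the slow statement
EXPLAIN PLAN SET QUERYNO = 100 FOR
  SELECT col1, col2
  FROM   mytable
  WHERE  surr_key = :hv-surr-key;

-- Read it back: ACCESSTYPE 'I' = index access, 'R' = tablespace scan;
-- ACCESSNAME shows which index was used, MATCHCOLS how many key columns matched
SELECT QUERYNO, QBLOCKNO, PLANNO, METHOD,
       ACCESSTYPE, ACCESSNAME, MATCHCOLS, INDEXONLY
FROM   PLAN_TABLE
WHERE  QUERYNO = 100
ORDER  BY QBLOCKNO, PLANNO;
```

If ACCESSTYPE comes back 'R' despite the indexed keys, that alone would explain the 6-8 hour run.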
GuyC
Senior Member
Joined: 11 Aug 2009 Posts: 1281 Location: Belgium
What's the partitioning key? Month?
What's the clustering key? (How well clustered is the data after the update/insert job? You probably need a RUNSTATS to check this.)
Are the input records in some order?
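After a RUNSTATS, the clustering of each index can be read straight from the catalog. A sketch, assuming hypothetical table and schema names:

```sql
-- CLUSTERRATIOF near 1.0 means rows are stored close to index order;
-- a low value after the heavy update/insert job is a sign a REORG is due
SELECT NAME, CLUSTERING, CLUSTERRATIOF
FROM   SYSIBM.SYSINDEXES
WHERE  TBNAME    = 'MYTABLE'
  AND  TBCREATOR = 'MYSCHEMA';
```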
jayreddy
New User
Joined: 07 Apr 2008 Posts: 10 Location: Mumbai
Thanks for the replies; I need some time to come back with answers to your questions.
Meanwhile, could you please suggest some better ways of reducing the CPU time of the job?
Is it OK to run a REORG job before I start running my job? Does it give me any advantage, or will the REORG job itself run too long on such a huge table?
Please let me know.
Regards,
JAYREDDY
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
How much CPU does the problem process require?
How many rows in the table match the most restrictive key value? It may be faster to unload the rows using the key that most limits the "found set", and then filter the unneeded data out of that extract using the sort. . .
How long does it take to run a reorg?
chandan.inst
Active User
Joined: 03 Nov 2005 Posts: 275 Location: Mumbai
Hi,
Are you using single-row fetch or multi-row fetch?
If you are using single-row fetch, try running your program with ROWSET positioning, fetching 500 or 1000 rows at a time.
You will need to tweak your program logic somewhere, but I hope it will help you reduce CPU time.
Regards,
Chandan
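A rowset cursor in COBOL looks roughly like the sketch below. The table, column, and host-variable names are illustrative only; the key point is that one FETCH call returns up to 1000 rows into host-variable arrays, cutting the per-row SQL call overhead:

```cobol
       WORKING-STORAGE SECTION.
      * Host-variable array sized to the rowset, plus the row count
       01  WS-MONTH        PIC S9(4)  COMP.
       01  WS-KEY          PIC S9(9)  COMP OCCURS 1000 TIMES.
       01  WS-ROWS-FETCHED PIC S9(9)  COMP.

       PROCEDURE DIVISION.
           EXEC SQL
               DECLARE C1 CURSOR WITH ROWSET POSITIONING FOR
               SELECT SURR_KEY
               FROM   MYTABLE
               WHERE  MONTH_KEY = :WS-MONTH
           END-EXEC.
           EXEC SQL OPEN C1 END-EXEC.
           PERFORM UNTIL SQLCODE = 100
               EXEC SQL
                   FETCH NEXT ROWSET FROM C1
                   FOR 1000 ROWS
                   INTO :WS-KEY
               END-EXEC
      *        SQLERRD(3) holds the number of rows actually returned
               MOVE SQLERRD(3) TO WS-ROWS-FETCHED
      *        ... process WS-ROWS-FETCHED entries of WS-KEY here ...
           END-PERFORM.
           EXEC SQL CLOSE C1 END-EXEC.
```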
mlp
New User
Joined: 23 Sep 2005 Posts: 91
I presume that you are fetching the rows in your job using a cursor with the HOLD option, and, as you said, the columns used in the WHERE clause are part of an index.
Now, as the result set of the cursor is huge, I "suspect" that opening the cursor is taking too much time. One way to reduce this time is to limit the cursor's result set using some hardcoded values of the partitioning keys. Once the work for one combination is over, take a checkpoint and then re-open the cursor with another set of partitioning-key values. This solution is applicable if you know the possible values of the partitioning keys.
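A sketch of such a partition-limited cursor; the partitioning column and host-variable names are invented for illustration:

```sql
-- Open once per known partitioning-key value, so each open
-- touches one partition; commit/checkpoint between passes,
-- which also gives a natural restart point
DECLARE CPART CURSOR WITH HOLD FOR
  SELECT surr_key, payload_col
  FROM   mytable
  WHERE  part_month = :hv-part-month   -- fixed for this pass
    AND  surr_key   = :hv-surr-key;    -- from the input file
```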