Joined: 14 Mar 2007 Posts: 8630 Location: Back in jolly old England
Where is the JCL for the sort step ?
Where is the SYSOUT from the sort step ?
What is the DP of the job ?
- Is it slow batch, normal batch, or hot batch? (Terms in use at my current site, but they have been used elsewhere.) This is something you will need to check with your performance & capacity people, or with your systems programmers if they are responsible for this.
What else is running alongside the job ? - Other work may be competing with your job and slowing it down, especially if it has a higher DP than your job.
Frank passed along your outputs so I could take a look at them. From a DFSORT tuning perspective, everything looks fine. You read in about 17.8 million records but only about 40 thousand are actually selected. So not much data is actually sorted, and DFSORT was able to use Hiperspace for all of the intermediate storage. The input to the sort is coming from two tape data sets, and I suspect the elapsed time is being gated by the speed of the tape drives. If the number of records selected is normally such a small percentage of the file, you might consider splitting this job as follows:
Job 1 reads tape input 1 using SORT FIELDS=COPY with your existing INCLUDE statement.
Job 2 runs concurrently with Job 1, reading tape input 2. This also uses SORT FIELDS=COPY with your existing INCLUDE statement.
Job 3 sorts the small output files from Jobs 1 and 2 with your original sort fields - SORT FIELDS=(37,05,CH,A). This job can begin as soon as both Jobs 1 and 2 are complete.
This at least gets the reading of the two tape data sets to happen in parallel. We could have Jobs 1 and 2 sort the records and have Job 3 do a merge, but since so few records are selected, I prefer that Job 3 do the sort.
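For anyone wanting to see the shape of the split, here is a rough sketch of the sort steps for Jobs 1 and 3 (Job 2 would be identical to Job 1 with the second tape as input). All data set names, DISP/SPACE parameters, and DD details are placeholders for illustration; the INCLUDE condition is whatever your current job already uses, so it is left elided here:

```
//* ---- Job 1 (Job 2 is the same, reading the second tape) ----
//COPY1   EXEC PGM=SORT
//SYSOUT  DD SYSOUT=*
//SORTIN  DD DSN=YOUR.TAPE.INPUT1,DISP=OLD            <- placeholder
//SORTOUT DD DSN=YOUR.TEMP.SELECT1,                   <- placeholder
//           DISP=(NEW,CATLG,DELETE),
//           SPACE=(CYL,(5,5),RLSE)
//SYSIN   DD *
  SORT FIELDS=COPY
  INCLUDE COND=(...)                your existing INCLUDE statement
/*
//* ---- Job 3 (runs after both Jobs 1 and 2 complete) ----
//SORT3   EXEC PGM=SORT
//SYSOUT  DD SYSOUT=*
//SORTIN  DD DSN=YOUR.TEMP.SELECT1,DISP=OLD           <- concatenate the
//        DD DSN=YOUR.TEMP.SELECT2,DISP=OLD              two small files
//SORTOUT DD DSN=YOUR.FINAL.OUTPUT,DISP=(NEW,CATLG,DELETE)
//SYSIN   DD *
  SORT FIELDS=(37,05,CH,A)
/*
```

Since Job 3's input is only about 40 thousand records, the final sort should be trivial; the INCLUDE is not repeated there because Jobs 1 and 2 have already done the selection.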