Joined: 23 Nov 2006 Posts: 19270 Location: Inside the Matrix
Your process has completely exhausted all of the available intermediate storage (both Hiperspace and DASD).
Are the sort work data sets in the JCL?
Note: DFSORT uses only the first volume of multi-volume work data sets.
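If they are not, explicit sort work data sets can be coded in the step. A minimal sketch, with illustrative DD names, unit, and space values (not taken from the failing job):

```jcl
//* Sketch: explicit DFSORT work data sets in the sort step.
//* SYSDA and the CYL values are examples; size them to the data.
//SORTWK01 DD UNIT=SYSDA,SPACE=(CYL,(500,100))
//SORTWK02 DD UNIT=SYSDA,SPACE=(CYL,(500,100))
//SORTWK03 DD UNIT=SYSDA,SPACE=(CYL,(500,100))
```

Note that per the line above, only the first volume of a multi-volume work data set is used, so several single-volume work data sets are usually preferable to one large multi-volume one.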
The amount of intermediate storage required can vary depending on many factors including:
The amount of Hiperspace DFSORT is able to use at the time the sort is run
The amount of main storage available
The degree of randomness of the records to be sorted
The values specified (or defaulted) for options such as DYNALLOC, DYNAUTO, DYNSPC, FILSZ/SIZE, or AVGRLEN.
The amount of padding required for short records when VLSHRT is in effect.
Programmer Response: Take one or more of the following actions:
If appropriate, increase the amount of main storage available to DFSORT using the MAINSIZE/SIZE option or the JCL REGION parameter. More main storage can reduce the amount of intermediate storage DFSORT needs. Avoid running a large sort in a small amount of main storage.
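As a sketch, REGION and MAINSIZE can be raised like this (the step name and values are illustrative only):

```jcl
//* Sketch: give the sort step more region and let DFSORT use it.
//STEP010  EXEC PGM=SORT,REGION=0M
//DFSPARM  DD *
  OPTION MAINSIZE=MAX
/*
```

DFSPARM overrides apply only to this step, so this is often an easier first test than changing installation defaults.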
If dynamic allocation was used, ensure that the values for the options DYNALLOC, DYNAUTO, DYNSPC, FILSZ/SIZE, and AVGRLEN are appropriate. If necessary, specify these options or change their values.
If your average input record length is significantly different from one-half of the LRECL, specify AVGRLEN=n with a reasonably accurate estimate of the average record length.
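Both remedies can be supplied on an OPTION statement. A hedged sketch, where the work data set count, the FILSZ estimate (the E prefix marks it as an estimate), and the AVGRLEN value are all placeholders to replace with figures for the actual file:

```jcl
//DFSPARM  DD *
  OPTION DYNALLOC=(SYSDA,8),FILSZ=E68200000,AVGRLEN=120
/*
```

Passing an accurate estimated record count and average record length lets DFSORT size its dynamically allocated work space correctly instead of relying on a bad value passed by the invoking program.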
If VLSHRT was in effect and the total size of all control fields was significantly larger than the average LRECL for the data set, you may be able to reduce the amount of work space required by reducing the total size of the control fields.
If JCL work data sets were used, increase the amount of work space available to DFSORT.
Possibly your storage management people can allocate more DASD to the pool of DASD available for sort work.
The utility is passing FILSZ=17777776, but by the time DFSORT issues the ICE046A it has already processed 68184378 records. Because the file size being passed is so much smaller than the actual number of records, not enough work space was allocated. You need to determine why an incorrect file size is being passed. You may need to run RUNSTATS against the table so that the catalog statistics get updated.
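If the record count is coming from stale DB2 catalog statistics, a RUNSTATS along these lines would refresh them. The database and table space names here are placeholders; substitute the ones the table actually lives in:

```
RUNSTATS TABLESPACE DBNAME.TSNAME TABLE(ALL) INDEX(ALL)
```

After the statistics are current, the utility should pass a FILSZ value close to the real record count, and dynamic allocation should then request enough work space.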