We are migrating to new disks for our DFSORT work file allocation. We are now on an IBM DS8300, and I have the possibility to allocate my work files on an emulation of either 3390-9 or 3390-27.
I will not have more addresses for the disks (PAV), but I will be using HyperPAV.
We are on z/OS 1.8 and DFSORT V1R5.
Does anybody have experience with the effect of increased disk capacity on DFSORT performance and can give me some information?
It's difficult to predict performance without knowing much about your workload or the characteristics of your DFSORT jobs. But I can tell you that we've seen many of our customers using MOD 27 and MOD 54 for sortwork data sets and getting good performance. If you've been running with large numbers of sortwork data sets to obtain space across many MOD 3s, you may now be able to run with fewer/larger sortwork data sets. DFSORT's dynamic allocation will allocate the sortwork data sets as large format so they can take advantage of the larger volume capacity.
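As a minimal sketch of what that looks like in practice, the JCL below lets DFSORT dynamically allocate a small number of large sortwork data sets via a DFSPARM DD. The data set names, space values, and the SORT control statement are hypothetical; DYNALLOC=(SYSDA,n) is the standard DFSORT run-time option:

```jcl
//SORT     EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//DFSPARM  DD *
  OPTION DYNALLOC=(SYSDA,4)
/*
//SORTIN   DD DSN=MY.INPUT.DATA,DISP=SHR
//SORTOUT  DD DSN=MY.OUTPUT.DATA,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(500,100),RLSE)
//SYSIN    DD *
  SORT FIELDS=(1,8,CH,A)
/*
```

With no SORTWKnn DDs coded, DFSORT allocates up to 4 work data sets itself, sizing them from its estimate of the file size, so larger volumes simply mean each of those data sets can grow bigger.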
We have a mixed DFSORT workload: for some jobs (mainly SAS jobs) even 255 work files are not enough, while others don't use work files at all. My problem is getting users to use dynamic allocation and also not to code too many work files (with DYNALOC=(SYSDA,xxx)).
Correct me if I am wrong: my limit for a work file will now be 72K 3390 tracks. I can reduce the default number of work files in order to reduce the total disk space used, but I have to provide a HyperPAV pool big enough to give enough disk access paths.
I'm not sure I understand why you think your current limit for a work file is 72K 3390 tracks. Since you are at z/OS 1.8, the sortworks are being allocated as large format. Therefore the limit is the size of the volume on which the work data set is allocated:
If 3390-3, then limit for each sortwork is about 50K tracks
If 3390-9, then limit for each sortwork is about 150K tracks
If 3390-27 then limit for each sortwork is about 480K tracks
Of course these limits assume the volumes have 100% free space which is generally not the case.
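To see where those round numbers come from, here is the arithmetic using standard 3390 geometry (15 tracks per cylinder, 56,664 bytes per track at the device level; the cylinder counts per model are the usual published figures):

```python
# Approximate 3390 volume capacities from standard geometry:
# 15 tracks per cylinder, 56,664 bytes per track.
TRACKS_PER_CYL = 15
BYTES_PER_TRACK = 56_664

models = {          # model: cylinders
    "3390-3":  3_339,
    "3390-9":  10_017,
    "3390-27": 32_760,
}

for model, cyls in models.items():
    tracks = cyls * TRACKS_PER_CYL
    gb = tracks * BYTES_PER_TRACK / 1e9
    print(f"{model}: {tracks:,} tracks (~{gb:.1f} GB)")
```

This prints roughly 50K tracks for a 3390-3, 150K for a 3390-9, and 491K for a 3390-27, matching the per-sortwork limits above.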
If you reduce the default number for DYNALLOC, you will not be reducing the space used. Sort will still use the same total work space; it will just be spread across fewer, larger work data sets. For the very large sorts for which you say 255 sortworks is currently not enough, you may be able to complete them with far fewer than 255 sortworks if they are on volumes with much more free space.
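To illustrate that trade-off, here is a rough back-of-the-envelope calculation: for a fixed total amount of sortwork space, how many work data sets are needed on each volume size. The total work space figure and the 40% usable-free-space fraction are arbitrary assumptions for the example, not measurements:

```python
import math

# Illustrative only: map a fixed total sortwork requirement onto
# different volume sizes.  The 40% free-space fraction and the
# 2,000,000-track total are assumed values for the example.
volume_tracks = {"3390-3": 50_085, "3390-9": 150_255, "3390-27": 491_400}
FREE_PCT = 40                        # assumed usable free space per volume
total_work_tracks = 2_000_000        # hypothetical total sortwork need

for model, tracks in volume_tracks.items():
    usable = tracks * FREE_PCT // 100        # integer tracks per volume
    n = math.ceil(total_work_tracks / usable)
    print(f"{model}: ~{n} sortwork data sets of up to {usable:,} tracks")
```

Under these assumptions the same total space needs about 100 work data sets on 3390-3s but only around a dozen on 3390-27s, which is why the larger models relieve pressure on the 255-sortwork limit.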