JPVRoff
New User
Joined: 06 Oct 2009 Posts: 41 Location: Melbourne, Australia
I've been having a bit of a look around to see if there's any real performance gain to be had by coding CONTIG, either for larger files or for SORT work files, and I haven't had much luck. Twenty or thirty years ago there might have been a need, as DASD was slower and you had to work it a lot harder than you do now.
Does anyone know of any write-ups on this that have been done recently (in the last 10 years)?
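For context, CONTIG is a positional subparameter of SPACE. A hypothetical sort work allocation using it might look like this (the DD name, unit, and sizes are made up for illustration):

```
//SORTWK01 DD UNIT=SYSDA,
//            SPACE=(CYL,(500,50),,CONTIG)
```

CONTIG applies only to the primary quantity: the 500 cylinders must be one contiguous free area on the volume (or the allocation fails), while secondary extents may still land anywhere.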
enrico-sorichetti
Superior Member
Joined: 14 Mar 2007 Posts: 10872 Location: italy
Nothing to gain from a physical point of view: the monolithic real drives have been replaced by DASD arrays with an emulation layer for 3390 and friends, so all the concerns about seek times and rotational delays are no longer there (the tracks of an emulated device will almost certainly be spread across different real drives).
Add to that the intensive use of caches.
There will be a slight software overhead for allocation, and for mapping the <logical> track info to the <real> track/extent info.
JPVRoff
New User
Joined: 06 Oct 2009 Posts: 41 Location: Melbourne, Australia
Thanks Enrico.
I figured as much, but it's been a while since I looked at this sort of thing. I also figured that any test would probably show up my lack of skill in designing the test rather than any major difference. With the volumes I can see being used, I could not see any need for it anyway (even in the old days).
I stopped coding CONTIG about the same time we got our first SS drive (which must have been in the early 1990s). Everything started to become so much easier that I figured I'd be out of a job within a few years (how wrong was I?).
Robert Sample
Global Moderator
Joined: 06 Jun 2008 Posts: 8696 Location: Dubuque, Iowa, USA
I don't think performance is the issue any more -- but RACF, for example, has CONTIG coded in the sample JCL member RACJCL because the RACF database must be allocated in one extent.
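A hedged sketch of the kind of allocation RACJCL performs (the dataset name, volume, and size here are illustrative, not copied from the shipped sample; the DCB attributes are the usual ones for a RACF database as I recall them):

```
//ALLOC   EXEC PGM=IEFBR14
//RACFDS  DD  DSN=SYS1.RACFPRIM,DISP=(NEW,CATLG),
//            UNIT=3390,VOL=SER=RACF01,
//            SPACE=(CYL,(100),,CONTIG),
//            DCB=(RECFM=F,LRECL=4096,BLKSIZE=4096,DSORG=PSU)
```

Note that no secondary quantity is coded: since the database must live in a single extent, a secondary allocation would be pointless.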
JPVRoff
New User
Joined: 06 Oct 2009 Posts: 41 Location: Melbourne, Australia
Interesting. For any other database I might question the algorithm used, but I can see the point for a RACF/ACF2/other security database.
I should have explained that the CONTIGs I was seeing were on sort work files, and I just could not see the point of having them there.
I wonder if there are any other places where CONTIG would be appropriate (moving the thread slightly off-topic)...
enrico-sorichetti
Superior Member
Joined: 14 Mar 2007 Posts: 10872 Location: italy
Quote:
I wonder if there are any other places where CONTIG would be appropriate

Normally it is the program using the dataset that might need CONTIG, and the documentation usually says so.
IIRC, CONTIG was/is used when trying to squeeze the last drop of DASD I/O performance out of highly optimized CCW chains.
I am sure that Robert can tell more about it.
steve-myers
Active Member
Joined: 30 Nov 2013 Posts: 917 Location: The Universe
Robert Sample wrote:
I don't think performance is the issue any more -- but RACF, for example, has CONTIG coded in the sample JCL member RACJCL because the RACF database must be allocated in one extent.

There are a few other places where this is true. For example, the JES2 checkpoint and SPOOL data sets must be CONTIG.
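For instance, a JES2 SPOOL volume allocation might be sketched like this (SYS1.HASPACE is the JES2 default SPOOL dataset name; the volume serial and size are made up):

```
//DEFSPL  EXEC PGM=IEFBR14
//SPOOL   DD  DSN=SYS1.HASPACE,DISP=(NEW,KEEP),
//            UNIT=3390,VOL=SER=SPOOL1,
//            SPACE=(CYL,(3000),,CONTIG)
```

Again no secondary quantity is coded, since the dataset has to stay in one extent.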
steve-myers
Active Member
Joined: 30 Nov 2013 Posts: 917 Location: The Universe
enrico-sorichetti wrote:
IIRC, CONTIG was/is used when trying to squeeze the last drop of DASD I/O performance out of highly optimized CCW chains

I'm not so sure about performance here. You might get a tiny reduction in CPU usage by cheating on some device-dependent calculations, but I can't imagine any real I/O improvement unless you are also allocating in cylinders and using "multi-track" CCW read commands.
Pete Wilson
Active Member
Joined: 31 Dec 2009 Posts: 580 Location: London
IBM recommends using DSNTYPE=LARGE for SORTWKs, I believe (do not specify a compressing DATACLAS, as compression is not supported for SORTWKs).
For enhanced performance on really large QSAM or VSAM datasets, one option is data striping, but you need appropriate storage classes and data classes available for that. In reality, it is only in extreme cases that you might want to use this these days. You would be best to talk to your local storage admin people to ascertain whether it is desirable or feasible.
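A sketch of that style of SORTWK allocation (the sizes are illustrative; note also that DFSORT can allocate work space dynamically via DYNALLOC, in which case no explicit SORTWK DDs are needed at all):

```
//SORTWK01 DD UNIT=SYSDA,DSNTYPE=LARGE,
//            SPACE=(CYL,(2000,200))
```

DSNTYPE=LARGE lets a sequential dataset exceed the old 65,535-track-per-volume limit, which is the point for very large work files; no CONTIG is coded.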