nartcr
New User
Joined: 06 Jun 2007 Posts: 83 Location: Canada
Hi all,
Yesterday I ran a job in production:
//S010 EXEC PGM=PGM1
//SYSUDUMP DD SYSOUT=A,HOLD=YES
//SYSPRINT DD SYSOUT=A
//SNAPDUMP DD SYSOUT=A,HOLD=YES
//SYSOUT DD SYSOUT=A
//SYSI01 DD DSN=HE2.XXXX.XXXXX.TAPEG02,DISP=(OLD,KEEP)
//SYSO01 DD DSN=&&TEMP,DISP=(NEW,PASS),UNIT=DISK,
// SPACE=(CYL,(200,50),RLSE),
// DCB=(RECFM=VB,LRECL=5996,BLKSIZE=27648)
PGM1 just moves all the records from the input tape dataset to a temporary dataset.
This job took more than 1 hour to execute.
Then I added one statement to it (as shown below):
//S010 EXEC PGM=PGM1
//SYSUDUMP DD SYSOUT=A,HOLD=YES
//SYSPRINT DD SYSOUT=A
//SNAPDUMP DD SYSOUT=A,HOLD=YES
//SYSOUT DD SYSOUT=A
//SYSI01 DD DSN=HE2.XXXX.XXXXX.TAPEG02,DISP=(OLD,KEEP),
// DCB=(RECFM=VB,LRECL=5996,BLKSIZE=27648) <- Added Stmt
//SYSO01 DD DSN=&&TEMP,DISP=(NEW,PASS),UNIT=DISK,
// SPACE=(CYL,(200,50),RLSE),
// DCB=(RECFM=VB,LRECL=5996,BLKSIZE=27648)
It took only 12 minutes!
The LABEL parameter for the tape was LABEL=(1,SL).
Can someone explain in detail why this difference in run time arose? Was it due to the BLKSIZE parameter?
expat
Global Moderator
Joined: 14 Mar 2007 Posts: 8797 Location: Welsh Wales
Why do you think it is the BLKSIZE parameter?
The tape was written with whatever block size it was written with; specifying something else in the JCL will not change that.
What else was running on the system during this run and the previous run? Exactly the same processes?
nartcr
New User
Joined: 06 Jun 2007 Posts: 83 Location: Canada
Thanks for the quick reply...
My initial thought was that for NL tapes we need to specify the DCB parameter. But when I checked the job that creates the tape, it was created with a standard label.
The reason I suspected BLKSIZE is this: the number of records read in a single block matters.
The number of input records exceeded 2 crore (20 million).
Yes, the processes were exactly the same. I only included the DCB parameter in the second run.
Please shed some light on this.
enrico-sorichetti
Superior Member
Joined: 14 Mar 2007 Posts: 10873 Location: italy
Check the CPU time.
There are too many other things that could influence the elapsed time.
Maybe there was a mount pending and the operator had gone for a coffee.
nartcr
New User
Joined: 06 Jun 2007 Posts: 83 Location: Canada
I started from there.
For the modified JCL:
STEPNAME PROCSTEP RC EXCP CONN TCB SRB CLOCK SERV PG
S010 00 4544K 713K .45 .20 19.2 3064K 0
S020 00 4368K 807K 1.28 .20 19.7 3976K 0
S030 00 193K 835K .56 .13 16.5 942K 0
S040 00 6786 178K .15 .00 1.8 196K 0
TOTAL TCB CPU TIME= 2.45 TOTAL ELAPSED TIME= 57.3
And for the first JCL:
TOTAL TCB CPU TIME= .86 TOTAL ELAPSED TIME= 11.2
Since the elapsed time was higher (time spent processing the job, including the DB and file activities), I added the DCB parameter to the JCL just by trial. But I don't know why the second JCL ran in so much less time.
expat
Global Moderator
Joined: 14 Mar 2007 Posts: 8797 Location: Welsh Wales
There are many contributing factors that affect the run time of a job: the amount of other processing being performed by the system, and also its type.
The DB processes may have been slow because there were many users at the time your job ran, and very few when you reran it.
Maybe a "hot batch" job was running against your first run; who knows.
nartcr
New User
Joined: 06 Jun 2007 Posts: 83 Location: Canada
That is possible... but the programs do not use a single DB. All I used was a tape dataset containing 2 crore (20 million) records and a VSAM file containing 1 crore (10 million) records.
Also, at that time only my JCL was using the input datasets.
I reran the original and modified JCLs several times to see whether it was really due to the factors you have mentioned.
The original JCL was still taking 1 hour, and the modified JCL is taking 12 minutes!
How can just coding the DCB parameter bring down the CPU time and elapsed time by 80%?
enrico-sorichetti
Superior Member
Joined: 14 Mar 2007 Posts: 10873 Location: italy
Coding a DCB BLKSIZE parameter in the JCL for a standard-label input file
has the only effect of allocating larger buffers for reading;
the reads will still be done according to the label information.
Until a few releases ago you would get into trouble if you concatenated datasets with different DCB attributes (BLKSIZE):
data management would allocate buffers according to the BLKSIZE of the first dataset,
so when reading a later dataset with a larger BLKSIZE you would get an abend.
Coding a large BLKSIZE on the first DD would overcome the problem.
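To illustrate the old concatenation workaround enrico describes, here is a sketch (the dataset names and block sizes are hypothetical, not from this thread):

```jcl
//* Hypothetical concatenation: HLQ.FILE.A was written at BLKSIZE=8000,
//* HLQ.FILE.B at BLKSIZE=27648. Buffers were sized from the FIRST DD,
//* so on older releases coding the largest BLKSIZE of the group there
//* avoided an abend when the larger blocks of HLQ.FILE.B were read.
//SYSIN01  DD DSN=HLQ.FILE.A,DISP=SHR,
//            DCB=BLKSIZE=27648
//         DD DSN=HLQ.FILE.B,DISP=SHR
```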
enrico-sorichetti
Superior Member
Joined: 14 Mar 2007 Posts: 10873 Location: italy
Quote:
That is possible... but the programs do not use a single DB. All I used was a tape dataset containing 2 crore (20 million) records and a VSAM file containing 1 crore (10 million) records.
Also, at that time only my JCL was using the input datasets.
I do not see any VSAM reference in your original JCL,
just the allocation of a new sequential dataset.
Bill Dennis
Active Member
Joined: 17 Aug 2007 Posts: 562 Location: Iowa, USA
As enrico said, you cannot draw any conclusion based on elapsed time.
That amount of difference in TCB time would make me think that either:
1) a different file was read in, or
2) the program(s) processed differently.
Time spent on I/O due to too small a block size would appear in the SRB (system) total, not the TCB (task) total. Also, specifying a larger input BLKSIZE than the tape was created with should have no effect; the channel will still move only one physical block at a time.
In summary, these cannot have been identical runs.
nartcr
New User
Joined: 06 Jun 2007 Posts: 83 Location: Canada
OK... I checked the SRB time.
        Old JCL   Modified JCL
        --------  -------------
S010    .20       .01
S020    .20       .03
Are you saying that SRB time was the main factor here?
nartcr
New User
Joined: 06 Jun 2007 Posts: 83 Location: Canada
nartcr wrote:
OK... I checked the SRB time.
S010    .20 (Old JCL)   .01 (Mod JCL)
S020    .20 (Old JCL)   .03 (Mod JCL)
S030    .0  (Old JCL)   .0  (Mod JCL)
S040    .0  (Old JCL)   .0  (Mod JCL)
Are you saying that SRB time was the main factor here?
enrico-sorichetti
Superior Member
Joined: 14 Mar 2007 Posts: 10873 Location: italy
I would say that PGM1 acts funny.
The original block size of the tape dataset is something else, certainly smaller.
Something forces the block size of the input dataset (implied or specified) onto the output dataset, regardless of the DCB info coded for the output.
It would be interesting to see the EXCP counts for the two cases;
for what other reason would the second step suffer from the same disease?
Please post the full sysout for both cases, and use the Code tag; it makes things easier to read.
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
Specifying the DCB for the input tape should not change the length of the run or the amount of cycles used.
As a simple way to verify, you might set up an IEBGENER or sort step to copy the input tape to a dummy output and run it twice: once with the DCB in the JCL and once without it.
I would expect both runs to use quite similar system resources. Depending on the load on the system, the elapsed times might vary.
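A minimal sketch of such a test, using the tape DSN from the original job (the step name and the DCB attributes are taken from the thread; everything else is illustrative). Run it once as shown and once with the DCB removed from SYSUT1, then compare the step statistics:

```jcl
//* Sketch: copy the input tape to a dummy output with IEBGENER.
//* Run once with the DCB on SYSUT1 and once without it, and
//* compare the TCB, SRB, EXCP and elapsed figures for the step.
//COPYTST EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=HE2.XXXX.XXXXX.TAPEG02,DISP=(OLD,KEEP),
//            DCB=(RECFM=VB,LRECL=5996,BLKSIZE=27648)
//SYSUT2   DD DUMMY,DCB=(RECFM=VB,LRECL=5996,BLKSIZE=27648)
//SYSIN    DD DUMMY
```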
Bill Dennis
Active Member
Joined: 17 Aug 2007 Posts: 562 Location: Iowa, USA
nartcr wrote:
OK... I checked the SRB time.
        Old JCL   Modified JCL
        --------  -------------
S010    .20       .01
S020    .20       .03
Are you saying that SRB time was the main factor here?
In your earlier post, the modified JCL job had the SRB time of .20. Have you turned them around? Check again.
I ran a test myself and was surprised by the result. When I specified a large BLKSIZE of 32000 for a tape created at 8000, the SRB time went up from .02 to .20!
The short answer is that specifying a larger input BLKSIZE doesn't help and might cause more SRB time, but it's so little as to be meaningless.
nartcr
New User
Joined: 06 Jun 2007 Posts: 83 Location: Canada
OK... I will let you know on Monday...
Douglas Wilder
Active User
Joined: 28 Nov 2006 Posts: 305 Location: Deerfield IL
|
|
|
|
I would not test this with sort. Sort seems to have its own way of reading and writing files that other programs cannot match. I wonder how they do that.
Could it be that specifying a much larger than needed block size for a huge file somehow results in more buffers being allocated, and that gives increased efficiency?
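If buffering is the theory being tested, one way to probe it directly is to request extra buffers explicitly with the BUFNO subparameter instead of overstating BLKSIZE. A sketch, not a tuned recommendation (the buffer count of 20 is an arbitrary illustration; the DSN is from the original job):

```jcl
//* Sketch: ask QSAM for more input buffers explicitly via BUFNO
//* rather than implying them through an inflated BLKSIZE. QSAM
//* defaults to a small number of buffers; raising BUFNO can reduce
//* wait time on a long sequential read.
//SYSI01   DD DSN=HE2.XXXX.XXXXX.TAPEG02,DISP=(OLD,KEEP),
//            DCB=BUFNO=20
```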