
Giving the mainframe OS the freedom to choose the BLKSIZE


IBM Mainframe Forums -> JCL & VSAM
raghavmcs

Active User


Joined: 14 Jul 2005
Posts: 105

PostPosted: Mon Mar 29, 2010 9:51 pm

Dear Experts,

I see a handful of jobs running in production with a hard-coded block size. I tested one such JCL using BLKSIZE=0 and saw a considerable CPU time saving in the results.

I am wondering whether there could be a way of:

a) Identifying all JCL where the block size is not equal to zero. I first tried searching the JCL library for BLKSIZE=1, then BLKSIZE=2, and so on, as wildcards, and every time I got some JCLs back. The parameter is not in any fixed position in the JCL library, so I am wondering whether there is a better way of collecting this information.

b) Additionally, I would like to learn in which situations coding BLKSIZE with a specific value could really be advantageous.
I mean the trade-off: when could it be beneficial not to give the system the freedom to decide this?
Please provide your expert thoughts.
Robert Sample

Global Moderator


Joined: 06 Jun 2008
Posts: 8696
Location: Dubuque, Iowa, USA

PostPosted: Mon Mar 29, 2010 10:35 pm

Why not just look for BLKSIZE=? Put the output in a data set, edit it, exclude all lines, find all BLKSIZE=0 and do a FLIP to find the non-zero block sizes.
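
One way to batch that search, assuming SuperC (ISRSUPC) is available at your shop (the library name below is only a placeholder), would be something along these lines:
Code:
//SRCHJCL  EXEC PGM=ISRSUPC,PARM='SRCHCMP,ANYC'
//*  NEWDD = the JCL library to search, OUTDD = the hit report
//NEWDD    DD  DISP=SHR,DSN=YOUR.PROD.JCLLIB
//OUTDD    DD  SYSOUT=*
//SYSIN    DD  *
  SRCHFOR  'BLKSIZE='
/*
Point OUTDD at a data set instead of SYSOUT if you want to edit the report as described above.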

There are some old programs that have hard-coded block sizes. If your site has any of these, changing the block size without changing the program will cause abends.
raghavmcs

Active User


Joined: 14 Jul 2005
Posts: 105

PostPosted: Mon Mar 29, 2010 11:12 pm

Thanks!!! Looks like I've found the way to go.
raghavmcs

Active User


Joined: 14 Jul 2005
Posts: 105

PostPosted: Mon Mar 29, 2010 11:22 pm

Just curious to know what situation, if any, could have made programmers hard-code the block size in a program?
Robert Sample

Global Moderator


Joined: 06 Jun 2008
Posts: 8696
Location: Dubuque, Iowa, USA

PostPosted: Mon Mar 29, 2010 11:41 pm

If a COBOL programmer codes BLOCK CONTAINS ? RECORDS where ? is not zero, that is a hard-coded reference and such a reference cannot be overridden in JCL. The block size is fixed. Very old COBOL programs may still have this in them, although I would hope they've been updated by now. I don't remember which release of COBOL started supporting BLOCK CONTAINS 0 but the VS COBOL II manual (which can be reached from the manuals link at the top of the page) has BLOCK CONTAINS 0 as an IBM extension, not a COBOL standard.
Terry Heinze

JCL Moderator


Joined: 14 Jul 2008
Posts: 1249
Location: Richfield, MN, USA

PostPosted: Tue Mar 30, 2010 6:26 am

raghavmcs wrote:
Just curious to know what could be the situation if any which could have made programmers to use blocksize hard coded in program?
Many, many years ago, when memory was not as plentiful as today, programs with many files in them had to reduce the amount of buffer space for each file. Back then the file buffers were part of the load module, which isn't the case nowadays. I remember dealing with a program that had 10 files in it, and I had to single-buffer every file in order to stay under the load module size limit (COBOL '68).
raghavmcs

Active User


Joined: 14 Jul 2005
Posts: 105

PostPosted: Tue Mar 30, 2010 7:30 pm

Thanks, all this adds to my knowledge!!!
Anuj Dhawan

Superior Member


Joined: 22 Apr 2006
Posts: 6250
Location: Mumbai, India

PostPosted: Tue Mar 30, 2010 8:35 pm

Robert,

Per this comment,
raghavmcs wrote:
I see a handful of jobs running in production with a hard-coded block size. I tested one such JCL using BLKSIZE=0 and saw a considerable CPU time saving in the results.
I have two questions:

1. If the hard-coded BLKSIZE and the BLKSIZE the system would choose are the same, will CPU be affected?

2. And let's assume the hard-coded BLKSIZE is less than what the system would choose; will CPU be affected?

I ran a couple of tests, but the results are mixed...
Robert Sample

Global Moderator


Joined: 06 Jun 2008
Posts: 8696
Location: Dubuque, Iowa, USA

PostPosted: Tue Mar 30, 2010 8:55 pm

Anuj:

1. No -- the same BLKSIZE should use the same CPU (with the usual caveats, of course).

2. Yes -- consider a data set with 80-byte records. Hard-coded blocking is 80 bytes versus a system-selected 27920 bytes. To vastly oversimplify the process, the system can transfer 349 records with one EXCP (EXecute Channel Program) versus requiring 349 EXCPs with the hard-coded block size. Each EXCP requires some CPU, so the hard-coded BLKSIZE could use much more CPU time to transfer the same amount of data. Plus the system has to monitor the transfers, keep track of the buffers, and in general do a lot more work with the hard-coded BLKSIZE.
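
As a sketch of the two cases (data set names and space values here are only placeholders):
Code:
//*  Hard coded: every 80-byte record becomes its own block
//OUT1     DD  DSN=HLQ.TEST.SMALLBLK,DISP=(NEW,CATLG,DELETE),
//             UNIT=SYSDA,SPACE=(CYL,(5,5),RLSE),
//             DCB=(RECFM=FB,LRECL=80,BLKSIZE=80)
//*  System determined: OPEN picks half-track blocking (27920 on 3390)
//OUT2     DD  DSN=HLQ.TEST.SDB,DISP=(NEW,CATLG,DELETE),
//             UNIT=SYSDA,SPACE=(CYL,(5,5),RLSE),
//             DCB=(RECFM=FB,LRECL=80,BLKSIZE=0)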

Testing may show the difference, but you have to consider the load on the CPU, device and the channel as well -- a very busy channel may take as long to transfer 27920 bytes as a lightly loaded channel takes to transfer a few hundred or thousand bytes.
Anuj Dhawan

Superior Member


Joined: 22 Apr 2006
Posts: 6250
Location: Mumbai, India

PostPosted: Tue Mar 30, 2010 9:22 pm

Thank you, Robert. Your example is really simple and makes me understand things well.

I'd like to run some more tests, as I also have some programs using these hard-coded BLKSIZEs. Let's see how this old wine tastes to me, the new bottle...
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Tue Mar 30, 2010 11:25 pm

Hi Anuj,

Be wary of old wine. . . Sometimes it becomes vinegar. . .

d
Terry Heinze

JCL Moderator


Joined: 14 Jul 2008
Posts: 1249
Location: Richfield, MN, USA

PostPosted: Wed Mar 31, 2010 3:14 am

Agree with what's been said, but I think a poor BLKSIZE will show up more drastically in elapsed time (because of the excessive EXCPs) than CPU time.
Robert Sample

Global Moderator


Joined: 06 Jun 2008
Posts: 8696
Location: Dubuque, Iowa, USA

PostPosted: Wed Mar 31, 2010 3:52 am

Definitely, Terry, there will be a huge impact on elapsed time.
CICS Guy

Senior Member


Joined: 18 Jul 2007
Posts: 2146
Location: At my coffee table

PostPosted: Wed Mar 31, 2010 7:32 am

Terry Heinze wrote:
Many, many years ago, when memory was not as plentiful as today, programs with many files in them had to reduce the amount of buffer space for each file.
You've got to give a bit to us old DOS/VSE vets, where two buffers were the default and the only option was to limit it to one buffer....
Quote:
Back then the file buffers were part of the load module, which isn't the case nowadays.
Stretching my feeble memory, I recall that the buffers were allocated by the I/O mods, not actually part of them... The memory constraints were in the partition size that you had to run in...
Robert Sample

Global Moderator


Joined: 06 Jun 2008
Posts: 8696
Location: Dubuque, Iowa, USA

PostPosted: Wed Mar 31, 2010 8:09 am

Quote:
The memory constraints were in the partition size that you had to run in...
I remember in school we had a 360/30 with 64K of memory and ran three partitions -- 12K, 20K, 32K and got a lot of work done on that machine! The days of linkage editor overlays and mapping memory to make sure things wouldn't try to use invalid memory addresses ....
DB2 Guy

New User


Joined: 28 Oct 2008
Posts: 98
Location: Cubicle

PostPosted: Tue Apr 06, 2010 4:52 pm

Robert Sample wrote:
If a COBOL programmer codes BLOCK CONTAINS ? RECORDS where ? is not zero, that is a hard-coded reference and such a reference cannot be overridden in JCL.
Robert, what if the JCL itself has hard-coded references... such as
Code:
DCB=(RECFM=VB,LRECL=260,BLKSIZE=6160)
Would changing them to BLKSIZE=0 help elapsed/CPU times?
Robert Sample

Global Moderator


Joined: 06 Jun 2008
Posts: 8696
Location: Dubuque, Iowa, USA

PostPosted: Tue Apr 06, 2010 5:19 pm

From the JCL Language Reference manual:
Quote:
12.16.3 Completing the Data Control Block

The system obtains data control block information from the following sources, in override order:

* The processing program, that is, the DCB macro instruction in assembler language programs or file definition statements or language-defined defaults in programs in other languages.

* The DCB subparameter of the DD statement.

* The data set label.
If the program has hard-coded BLOCK CONTAINS 11 RECORDS, you can put anything you want in the JCL and it will not be honored.

If the program does not have hard-coded references, and the JCL currently does, you can typically gain some efficiency by changing to BLKSIZE=0. While you're changing the JCL, adding buffers will also help processing. How much help will depend upon the program's processing but I have seen 80 to 90% improvements in elapsed time and 40 to 80% drops in CPU time by increasing block size and using enough buffers (rather than the default of 5).
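
As a rough before/after sketch of the kind of JCL change meant here (the data set name and the BUFNO value are only illustrative):
Code:
//*  Before: block size hard coded in the JCL
//OUTFILE  DD  DSN=PROD.DAILY.EXTRACT,DISP=(NEW,CATLG,DELETE),
//             UNIT=SYSDA,SPACE=(CYL,(50,10),RLSE),
//             DCB=(RECFM=VB,LRECL=260,BLKSIZE=6160)
//*  After: system-determined block size plus extra QSAM buffers
//OUTFILE  DD  DSN=PROD.DAILY.EXTRACT,DISP=(NEW,CATLG,DELETE),
//             UNIT=SYSDA,SPACE=(CYL,(50,10),RLSE),
//             DCB=(RECFM=VB,LRECL=260,BLKSIZE=0,BUFNO=30)
How many buffers actually help depends on the program's processing, as noted above.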
agkshirsagar

Active Member


Joined: 27 Feb 2007
Posts: 691
Location: Earth

PostPosted: Wed Apr 07, 2010 12:02 am

First of all, thanks to all the experts for an interesting thread.

I remember reading in an old thread on this forum that BLKSIZE=0 will work only for SMS-managed data sets; for non-SMS-managed data sets we still need to code the block size value.
(I am not sure if this still holds true. Any comments?)

Adding my 2c.
The most efficient block size for a data set depends on two factors:
i. Device type
ii. Record length
Which means that with newer, more efficient storage devices the optimum block size will change. If a shop upgrades its storage devices, a hard-coded block size will prevent existing data sets from getting the benefit of the new device. So it is better to always code BLKSIZE=0, even if you know the calculations very well.

Another factor that matters for efficient data access is the number of BUFFERS allocated. I have always been lazy about paying attention to this parameter (maybe because I am hoping IBM will come up with SMS enhancements that allocate the most efficient number of buffers dynamically), but I know that along with the BLKSIZE it is a very important parameter for efficient data set access.

I've heard that SORT products calculate the optimum buffer requirements for a data set automatically and allocate them. Is this understanding correct?
Robert Sample

Global Moderator


Joined: 06 Jun 2008
Posts: 8696
Location: Dubuque, Iowa, USA

PostPosted: Wed Apr 07, 2010 12:44 am

From the DF/SMS Using Data Sets manual:
Quote:
3.2.3.1.2 System-Determined Block Size

If you do not specify a block size for the creation of a data set, the system attempts to determine the block size. Using a system-determined block size has the following benefits:

* The program can write to DASD, tape, or SYSOUT without you or the program calculating the optimal block size. DASD track capacity calculations are complicated. Optimal block sizes differ for various models of DASD and tape.

* If the data set later is moved to a different DASD type, such as by DFSMShsm, the system recalculates an appropriate block size and reblocks the data.

The system determines the block size for a data set as follows:

1. OPEN calculates a block size.

Note: A block size may be determined during initial allocation of a DASD data set. OPEN will either use that block size or calculate a new block size if any of the data set characteristics (LRECL,RECFM) were changed from the values specified during initial allocation.

2. OPEN compares the calculated block size to a block size limit, which affects only data sets on tape because the minimum value of the limit is 32 760.

3. OPEN attempts to decrease the calculated block size to be less than or equal to the limit.

The block size limit is the first nonzero value from the following items:

1. BLKSZLIM value in the DD statement or dynamic allocation.

2. Block size limit in the data class. The SMS data class ACS routine can assign a data class to the data set. You can request a data class name with the DATACLAS keyword in the DD statement or the dynamic-allocation equivalent. The data set does not have to be SMS managed.

3. TAPEBLKSZLIM value in the DEVSUPxx member of SYS1.PARMLIB. A system programmer sets this value, which is in the data facilities area (DFA) (see z/OS DFSMSdfp Advanced Services).

4. The minimum block-size limit, 32 760.

Your program can obtain the BLKSZLIM value that is in effect by issuing the RDJFCB macro with the X'13' code (see z/OS DFSMSdfp Advanced Services).

Because larger blocks generally cause data transfer to be faster, why would you want to limit it? Some possible reasons follow:

* A user will take the tape to an operating system or older z/OS system or application program that does not support the large size that you want. The other operating system might be a backup system that is used only for disaster recovery. An OS/390® system before Version 2 Release 10 does not support the large block interface that is needed for blocks longer than 32 760.

* You want to copy the tape to a different type of tape or to DASD without reblocking it, and the maximum block size for the destination is less than you want. An example is the IBM 3480 Magnetic Tape Subsystem, whose maximum block size is 65 535. The optimal block size for an IBM 3590 is 224 KB or 256 KB, depending on the level of the hardware. To copy from an optimized 3590 to a 3480 or 3490, you must reblock the data.

* A program that reads or writes the data set and runs in 24-bit addressing mode might not have enough buffer space for very large blocks.
Most recent devices use 3390 geometry, which generally makes half-track blocking (27998) the most efficient.
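
To put numbers on that for RECFM=FB, LRECL=80 data on 3390 geometry:
Code:
half-track capacity          = 27998 bytes
records per half-track block = 27998 / 80 = 349 (truncated)
system-determined BLKSIZE    = 349 * 80   = 27920 bytes
which is where the 27920 figure earlier in the thread comes from.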