Joined: 06 Jun 2008 Posts: 8344 Location: Dubuque, Iowa, USA
1. BUFNO determines the number of buffers to assign to a particular DD name (data set). For QSAM (sequential) files, the default is 5 buffers. When you execute a COBOL READ statement, the data is fetched from the data set (disk, tape, or whatever) into a buffer, and that buffer's data is what is actually available to COBOL. Note that a buffer is the size of a block, not the size of a record, so the entire block is read into the buffer and one record at a time is made available to your program. You can override the default via the BUFNO DCB subparameter in JCL.
2. The record length doesn't matter for buffering; the block size is what determines the buffer size. Determining the optimal number of buffers usually involves your site support group, as they may be aware of site requirements that could impact the value. I usually see good performance improvements up to one cylinder of buffer space (30 buffers of roughly 27,000 bytes each); depending upon the machine and DASD speed more may be helpful, but only testing will tell you for sure.
3. I'll let one of the SYNCSORT experts answer this.
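For reference, overriding the QSAM default is just a DCB subparameter on the DD statement. A minimal sketch (the DD and dataset names here are made up for illustration):

```jcl
//* Hypothetical DD: BUFNO=30 requests 30 buffers instead of the
//* QSAM default of 5. Each buffer holds one block, so with
//* half-track blocking this is about one cylinder of buffer space.
//INFILE   DD DSN=MY.INPUT.DATA,DISP=SHR,DCB=BUFNO=30
```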
Joined: 23 Nov 2006 Posts: 19270 Location: Inside the Matrix
Usually, more improvement will be seen by using a better BLKSIZE than by changing BUFNO...
At the top of the page is a link to "IBM Manuals" among which are the JCL Reference for multiple versions of the Operating System. Using the flashlight/tubelight at the top left, search for BUFNO.
1. Buffers are memory used to hold data after it is read from the media and before it is used by the program code. This is automatic and is outside control of the program.
2. I'm not aware of a particular calculation.
3. Suggest you ask Syncsort support if you cannot find what you need in the Syncsort documentation.
I have been looking for something that would give me in-depth knowledge about this simple parameter (BUFNO). I came across the following:
If you really need to do I/O then at least do it as neatly as possible.
1) Use of BUFNO for sequential Datasets:
When dealing with sequential DASD datasets, MVS does some performance tuning behind your back. Instead of dealing in 1-block chunks, it deals in 5-block chunks by default. What this means is that for the following blocked datasets you actually deal with larger chunks of data:
    Block size    Bytes fetched per access (5 buffers)
    22,000        110,000
    6,160         30,800
Thus when reading a dataset blocked at 22,000 bytes, MVS really works with 110,000-byte chunks by default. The objective being met here is to minimize the number of requests made to MVS's I/O subsystem, an expensive operation.
But is 5 buffers enough? In today's environment the answer is simple: NO. You should consider coding more. Another factor that comes into play is that if you exceed 31 buffers or 249,856 total bytes of buffer space, MVS will break your request into 2 or more parts and do parallel I/O, reducing the amount of time required to run I/O-bound jobs. For example, a job that read 100,000 240-byte records blocked at 24,000 took 8 seconds to run, but with 33 buffers it took only 6.5 seconds. If this job did a lot of processing between requests for data records, the response time would decrease even more!
You must also recognize that by increasing BUFNO your job now requires extra memory, which can have an impact on your run time. In fact, coding too many buffers can slow things down. Never code BUFNO < 5 unless you really understand the implications, and finally, do not over-code BUFNO: if your dataset is only 100K in size, don't code 200K of buffer space.
Also, don't bother with this parameter for SORTIN, SORTWK, or SORTOUT (sort datasets). Sort does its own special I/O processing to reduce EXCPs, and coding BUFNO will only confuse it.
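The 33-buffer test described above would be coded as a sketch like this (dataset and DD names are hypothetical):

```jcl
//* 33 buffers of 24,000-byte blocks = 792,000 bytes of buffer
//* space, past the 31-buffer point the text above says triggers
//* parallel I/O. Note BUFNO is NOT coded on the sort datasets.
//INPUT    DD DSN=MY.BIG.INPUT,DISP=SHR,DCB=BUFNO=33
```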
2) Better Blocking of datasets.
Yes, 6160 is another magic number, recommended 10-odd years ago. It no longer has any value other than as a magic number. Try to get as close as possible to 22,340 for DASD datasets. This rule does not apply to tape, where 32,767-byte blocks are always optimal. As a side point, watch block sizes when you concatenate datasets.
Can anyone tell me whether the numbers mentioned here as "magical" are really correct and will work out almost all the time?
Try and get as close as possible to 22,340 for DASD datasets.
Where did you find that number? (Throw away that manual anyway.)
I would trust SMS's judgment about half-track blocking more...
For the most common DASD architecture, the optimum BLKSIZE is near the half-track size, 27,998.
Obviously, a trade-off must sometimes also be made on DASD utilization.
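Rather than hard-coding a "magic" BLKSIZE, you can let the system pick the half-track-optimal value itself by coding BLKSIZE=0 (or omitting BLKSIZE) on a new dataset, which invokes system-determined block size. A minimal sketch, with a made-up dataset name:

```jcl
//* BLKSIZE=0 lets the system choose the optimum block size.
//* For RECFM=FB,LRECL=80 on 3390 DASD that works out to the
//* largest multiple of 80 that fits in a half track.
//OUTFILE  DD DSN=MY.NEW.OUTPUT,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(10,5),RLSE),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=0)
```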
Joined: 27 Apr 2005 Posts: 275 Location: Cincinnati OH USA
and today's DASD and virtual tape devices have controllers that monitor disk access behavior and use their own controller cache to anticipate the next reads and to hold delayed writes, expediting/lessening the time spent getting data to/from the actual physical devices.