Step 1: Create a dataset (a new GDG generation):
DD DSN=new.dataset(+1),
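For context, the Step 1 allocation being described would look something like the sketch below. The program, DD name, unit, and DCB values are placeholders I have assumed, not the actual JCL; the SPACE=(CYL,(1,1)) value is taken from the reply later in the thread.

```jcl
//STEP1    EXEC PGM=IEFBR14
//NEWGDG   DD  DSN=new.dataset(+1),
//             DISP=(NEW,CATLG,DELETE),
//             UNIT=SYSDA,
//             SPACE=(CYL,(1,1),RLSE),
//             DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)
```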
Step 2: Write data to this newly created dataset. A COBOL/IMS program reads the data and writes it to this file.
The job is abending with SB37 in step 2.
After the abend, and without altering the space/volume parameters, simply restarting the job from Step 2 lets it run fine.
My query is:
1) When the space was not increased, why didn't the job fail again with SB37?
2) This job runs for 4-5 hours, so it obviously performs a large number of I/O operations. Would adding BUFNO to the DCB help?
Since this job runs for such a long time, I need some advice on whether to edit the JCL to add BUFNO before running the job again.
Please let me know if I have missed providing any info.
1. Without knowing what the program is doing, there is no way to answer this question. At a guess -- and this is only a guess -- the program did not attempt to write any more output to the file, or the program handles reruns in a different way, or the program resumed processing where it stopped before and hence did not have as many records to process, or .... In other words, there are many possible reasons why the program did not get an SB37 abend on the rerun, and asking why on this forum is a complete waste of time since we do not have the code, do not have any way to review the code, and do not have psychic abilities.
2. Adding BUFNO to the DCB should help -- but it depends on why the program runs so long. If the program is CPU-bound, then adding buffers will make NO difference in processing time; if the program is I/O-bound then adding buffers will help. However, you need to review the buffers for every file used by the program, not just the one data set you referenced. And depending upon the site, you may have to add memory to the job to get it to run with the additional buffers. If the program is running so long because WLM has it as discretionary work, then adding buffers may make no difference to how long the job runs as well.
If you are getting SB37 abends, why are you using SPACE=(CYL,(1,1)) instead of providing more space to the data set? If you edit the JCL to add buffers, go ahead and increase the space allocation while you are there.
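If you do edit the JCL, both changes could be sketched on the Step 2 DD like this. The dataset name is from the question; the specific space and buffer numbers are illustrative only, not values tuned for this job, and would need to fit your site's limits.

```jcl
//OUTFILE  DD  DSN=new.dataset(+1),
//             DISP=(MOD,CATLG,CATLG),
//             SPACE=(CYL,(50,50),RLSE),            LARGER PRI/SEC
//             DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920,BUFNO=30)
```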
The program is an aged program which scans an IMS database segment by segment, looking for a particular type of record. If it finds the record it's looking for, the program inserts a new segment under it and writes the details to the file I referred to earlier. There is only one file used in this case, which is being written.
In Step 2, the DISP parameter of the dataset is MOD. The job abends when crossing 65,000-odd records, but the dataset should easily be capable of accommodating more than a lakh (100,000) records, so increasing the CYL space might work, but I guess that's not the real issue.
I wanted to understand how, when the dataset could not accommodate more records in the first run and failed with SB37, it accommodated them on the rerun without the dataset's space being altered.
Secondly, the program is I/O-bound, so I will try adding BUFNO during the next run. Thanks for the help here.
Your job has lots of I/O because of the wander through the database. Talk to your DBA(s) to see if anything can be done about that, or whether the program should be redesigned.
If you are writing 100,000 records, find how many fit in a block, then find how many blocks that makes. That will give you an approximation of the physical I/Os for the file. For example, if the records are 80 bytes and the blocks 27,920 bytes, that is 349 records per block, so 100,000 records is only about 287 physical writes. Compare that to the jobstep's total I/Os: if it is a tiny fraction, then BUFNO is only going to make a tiny change to the total number of I/Os.
Rough guess: your program could have file-balancing logic. When it ran the first time, it might have tried to insert a large number of records and abended at the end; but when you restarted the job, since the file balancing was already complete up to that point, there might have been very few updates, and the job completed.
Just a rough guess.
As Expat suggested, you can try omitting BLKSIZE or coding BLKSIZE=0, so that the system determines an optimal block size for the device.
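A sketch of that suggestion, for a new dataset; the DD name, unit, and space values are again illustrative assumptions. Coding BLKSIZE=0 (or leaving BLKSIZE off entirely) lets the system choose the block size:

```jcl
//OUTFILE  DD  DSN=new.dataset(+1),
//             DISP=(NEW,CATLG,DELETE),
//             UNIT=SYSDA,
//             SPACE=(CYL,(50,50),RLSE),
//             DCB=(RECFM=FB,LRECL=80,BLKSIZE=0)    SYSTEM-DETERMINED
```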