IBM Mainframe Forum Index
 

ABEND=SB37 U0000 REASON=00000004


IBM Mainframe Forums -> JCL & VSAM
shiitiizz

New User


Joined: 12 Sep 2013
Posts: 22
Location: India

PostPosted: Mon Sep 16, 2013 7:26 pm

Hi, I am running a job which has two steps.

Step 1: Create a dataset, with this DD:

DD DSN=new.dataset(+1),
   DISP=(NEW,CATLG,CATLG),
   UNIT=DISK,SPACE=(CYL,(1,1),RLSE),
   RECFM=FB,LRECL=85,BLKSIZE=25245

Step 2: Write data into this newly created dataset. The data is written to it by a COBOL/IMS program.
The job is abending with SB37 in Step 2.

After the abend, without altering the space/volume parameters, simply restarting the job from Step 2 runs fine.

My query is:
1) When the space was not increased, why didn't the job fail again with SB37?
2) This job runs for 4-5 hours, so clearly a large number of I/O operations are being performed. Would adding BUFNO to the DCB help?

Since this job runs for such a long time, I need some advice on whether or not to edit the JCL to add BUFNO before running the job again.

Please let me know if I have missed providing any info.
Robert Sample

Global Moderator


Joined: 06 Jun 2008
Posts: 8697
Location: Dubuque, Iowa, USA

PostPosted: Mon Sep 16, 2013 8:00 pm

1. Without knowing what the program is doing, there is no way to answer this question. At a guess -- and this is only a guess -- the program did not attempt to write any more output to the file, or the program handles reruns in a different way, or the program resumed processing where it stopped before and hence did not have as many records to process, or .... In other words, there are many possible reasons why the program did not get an SB37 abend on the rerun, and asking why on this forum is a complete waste of time since we do not have the code, do not have any way to review the code, and do not have psychic abilities.

2. Adding BUFNO to the DCB should help -- but it depends on why the program runs so long. If the program is CPU-bound, then adding buffers will make NO difference in processing time; if the program is I/O-bound then adding buffers will help. However, you need to review the buffers for every file used by the program, not just the one data set you referenced. And depending upon the site, you may have to add memory to the job to get it to run with the additional buffers. If the program is running so long because WLM has it as discretionary work, then adding buffers may make no difference to how long the job runs as well.

If you are getting SB37 abends, why are you using SPACE=(CYL,(1,1)) instead of providing more space to the data set? If you edit the JCL to add buffers, go ahead and increase the space allocation while you are there.
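Both suggestions land on the same DD statement. A minimal sketch, assuming the dataset from Step 1 (the ddname, SPACE amounts, and buffer count are illustrative assumptions, not taken from the original job):

```jcl
//* Sketch only: larger allocation plus extra buffers for the
//* SB37-prone output file. OUTFILE, SPACE=(CYL,(50,25)) and
//* BUFNO=20 are illustrative; tune them to the real data volume.
//OUTFILE  DD DSN=new.dataset(+1),
//            DISP=(NEW,CATLG,CATLG),
//            UNIT=DISK,SPACE=(CYL,(50,25),RLSE),
//            DCB=(RECFM=FB,LRECL=85,BLKSIZE=25245,BUFNO=20)
```

Note that extra buffers cost virtual storage (roughly BUFNO x BLKSIZE per file), which is why the REGION for the job may need to grow to match.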
shiitiizz

New User


Joined: 12 Sep 2013
Posts: 22
Location: India

PostPosted: Mon Sep 16, 2013 8:12 pm

Thanks for the quick response, Robert!

The program is an aged program which scans an IMS database segment by segment, looking for a particular type of record. If it finds the record it's looking for, the program inserts a new segment under it and writes the details to the file I referred to earlier. There is only one file in this case, and it is output only.
In Step 2, the DISP parameter of the dataset is MOD. The job abends when the dataset crosses roughly 65,000 records, but the program can easily produce more than a lakh (100,000) records, so increasing the CYL space might work; I just don't think that is the real issue.

What I wanted to understand is how, when the dataset could not accommodate more records in the first run and failed with SB37, it accommodated them on the rerun without the dataset's space being altered.

Secondly, the program is I/O-bound, so I will try adding BUFNO on the next run. Thanks for the help here.
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Mon Sep 16, 2013 8:39 pm

Hello,

Suggest you work with your DASD storage people...
They should be able to help identify the actual cause of the problem.

BUFNO may help with performance, but should have no impact on space required.
shiitiizz

New User


Joined: 12 Sep 2013
Posts: 22
Location: India

PostPosted: Mon Sep 16, 2013 8:46 pm

Ok Sure, Thanks for all the help!
Bill Woodger

Moderator Emeritus


Joined: 09 Mar 2011
Posts: 7309
Location: Inside the Matrix

PostPosted: Tue Sep 17, 2013 3:42 am

Your job has lots of I/O because of the wander through the database. Talk to your DBA(s) to see if anything can be done about that, or whether the program should be redesigned.

If you are writing 100,000 records, find how many fit in a block, then how many blocks that is. That will give you an approximation of the physical I/Os for the file; for example, at 297 records per 25,245-byte block, 100,000 records is only about 340 block writes. Compare that to the jobstep's total I/Os. If it is a tiny fraction, then BUFNO is only going to make a tiny change to the total number of I/Os.
expat

Global Moderator


Joined: 14 Mar 2007
Posts: 8797
Location: Welsh Wales

PostPosted: Tue Sep 17, 2013 1:13 pm

You may also want to remove the BLKSIZE= parameter.

By allowing the system to calculate and use the optimum blocksize, 27965, you will be able to store 15,360 more records in the same amount of space.
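The 15,360 figure presumably comes from the full (CYL,(1,1)) allocation: a 3390 track holds two blocks at either block size, so half-track blocking (329 x 85 = 27,965) stores 658 records per track against 594 at BLKSIZE=25245, and the 64 extra records per track across the 16 extents (240 tracks) the allocation can reach give 240 x 64 = 15,360. A sketch of the DD with a system-determined block size (the ddname is an illustrative assumption):

```jcl
//* BLKSIZE omitted so the system chooses the optimum block
//* size (27965 for RECFM=FB,LRECL=85 on 3390 DASD).
//OUTFILE  DD DSN=new.dataset(+1),
//            DISP=(NEW,CATLG,CATLG),
//            UNIT=DISK,SPACE=(CYL,(1,1),RLSE),
//            RECFM=FB,LRECL=85
```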
madprasy

New User


Joined: 08 Apr 2008
Posts: 34
Location: Chennai

PostPosted: Thu Oct 17, 2013 7:30 pm

A rough guess: your program could have file-balancing logic.
So on the first run, it may have tried to insert a large number of records and abended near the end;
but when you restarted the job, since the file balancing had already completed, there were very few updates left and the job finished.

Just a rough guess...

As expat suggested, you can try omitting BLKSIZE, or coding BLKSIZE=0.
Pete Wilson

Active Member


Joined: 31 Dec 2009
Posts: 581
Location: London

PostPosted: Thu Oct 17, 2013 8:21 pm

If you know the file is going to have an expected number of records, why not request SPACE to match that? CYL,(1,1) is very small.

Specify the following, for example:

SPACE=(lrecl,(pri,sec),RLSE),AVGREC=n

lrecl = logical record length
n = a multiplier value of U, K, or M (K is times 1,024, for example)
pri = number of expected primary records
sec = expected growth in records


So for example, if you expect 100,000 x 80-byte records:

SPACE=(80,(100,10),RLSE),AVGREC=K
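Put together for this thread's file (LRECL=85 rather than the 80-byte illustration; the ddname and growth numbers are illustrative assumptions):

```jcl
//* Sketch: allocation requested in records rather than cylinders.
//* AVGREC=K scales the (100,10) quantities by 1024, i.e. room for
//* about 102,400 records primary and 10,240 per secondary extent.
//OUTFILE  DD DSN=new.dataset(+1),
//            DISP=(NEW,CATLG,CATLG),
//            AVGREC=K,SPACE=(85,(100,10),RLSE),
//            RECFM=FB,LRECL=85
```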