fahir83
New User
Joined: 27 Jan 2006 Posts: 22
I have a program that reads a DB2 table and writes the rows to a flat file. While writing, it came across EOF, but due to the lack of file exception handling it wrote to external memory and some of the data was lost. We thought the error was due to the lack of file exception handling, so we added a check of the file status after each write.
Now the problem is that it abended again with code S0C4, reason code 0004, a protection exception (IEC030I B37-04). We thought it was again a space issue, so we wrote the remaining rows to another file. But still some of the data is lost.
Can anyone tell me the actual cause? Why is the data lost? (If the data was written to some external memory, I would like to know why that happened.)
Thanks in advance
UmeySan
Active Member
Joined: 22 Aug 2006 Posts: 771 Location: Germany
Hi!
Look here: B37, as you wrote!
Space error. Give your output file more cylinders for more space.
That's all.
Regards, UmeySan
muthuvel
Active User
Joined: 29 Nov 2005 Posts: 217 Location: Canada
Some more points to add to what UmeySan has said:
SD37 - no secondary allocation was specified.
SB37 - end of volume and no further volumes specified.
SE37 - maximum of 16 extents already allocated.
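All three of these abends can usually be headed off in the JCL. A minimal sketch of a DD statement (the data set name and the sizes are assumptions; adjust them for your shop and volume of data) that gives a generous secondary allocation and allows additional volumes:

```jcl
//OUTFILE  DD DSN=MY.OUTPUT.FILE,
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,
//            VOL=(,,,3),
//            SPACE=(CYL,(100,50),RLSE)
```

Here SPACE=(CYL,(100,50)) requests 100 cylinders primary plus secondary extents of 50 cylinders each (guarding against an SD37), VOL=(,,,3) lets the data set extend to as many as 3 volumes (guarding against an SB37), and RLSE returns unused space when the file is closed.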
William Thompson
Global Moderator
Joined: 18 Nov 2006 Posts: 3156 Location: Tucson AZ
Quote:
"While writing it came across EOF"
EOF while writing?
Quote:
"S0C4, reason code = 0004"
The key of the storage area that the running program tries to access is different from the key of the running program.
It doesn't sound like a space problem...
Allocate more volumes or larger allocations.
UmeySan
Active Member
Joined: 22 Aug 2006 Posts: 771 Location: Germany
Hi again!
The B37 is the trigger of the abend; the S0C4 is only a further result of trying to write to the output file.
So, more space must be defined!
Regards, UmeySan
See SB37-04 in detail:
04 - During end-of-volume processing, one of the following occurred:
For an output data set, all space was used on the current volume and no more volumes were specified.
For an output data set on a direct access device, the system might have needed to demount the volume for one of the following reasons:
o No more space was available on the volume.
o The data set already had 16 extents, but required more space.
o More space was required, but the volume table of contents (VTOC) was full. If additional space were allocated, another data set control block (DSCB) might have been needed, but could not have been written.
fahir83
New User
Joined: 27 Jan 2006 Posts: 22
Hi,
It's OK that we stored the remaining records in a new output file. But the actual problem is that data was lost (some records are missing between the last record of the first output file and the first record of the second output file).
To make my question precise: why was the data written to some external memory? I actually checked for the EOF condition, no SB37 was shown, and the program exited where it reached the EOF condition.
Bitneuker
CICS Moderator
Joined: 07 Nov 2005 Posts: 1104 Location: The Netherlands at Hole 19
Before actually being written to the output device, your records are buffered. If your program fails, records still sitting in the buffer might never actually be written. In your dump you may find some information: previous/current record. Or is your problem of a different kind?
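The COBOL side of this can be sketched roughly as follows (the file, status field, and paragraph names are hypothetical). The key point is that a FILE STATUS of '00' after a WRITE only means the record was accepted into the buffer, not that the block has reached disk:

```cobol
       SELECT OUT-FILE ASSIGN TO OUTFILE
           FILE STATUS IS WS-OUT-STATUS.
      * ... in the PROCEDURE DIVISION ...
       WRITE OUT-REC
       IF WS-OUT-STATUS NOT = '00'
      *    Handle the error here; note that an abend such as a B37 can
      *    occur inside the access method before a status is returned.
           PERFORM 9000-FILE-ERROR
       END-IF
      * Even with status '00', the record may still be in an unwritten
      * buffer: one physical block typically holds many logical records,
      * and an abend loses whatever has not yet been flushed to disk.
```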
UmeySan
Active Member
Joined: 22 Aug 2006 Posts: 771 Location: Germany
Hi!
Why not define one output file big enough and re-run the job to get them all in one file?
Regards and a nice weekend,
UmeySan
fahir83
New User
Joined: 27 Jan 2006 Posts: 22
Hi Thompson,
Thanks for your reply. This is my situation: my program fetches records from a DB2 table using a cursor, processes them, and writes them to the output file. One day, due to a large volume of data, the job failed with an S0C4 protection exception. We restarted the job and made it write to a new output file. When we combined both files, we found we were missing about 400 records, yet in DB2 those 400 records were marked as processed.
I want to know how these 400 records were missed, and how they came to be marked as processed in the table, since we update each table record only after it is written to the output file (we also checked the file status after each write).
Do you mean that all these 400 records were buffered before being written to the output file?
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
It sounds as though the checkpoint/recovery/restart process is not thorough enough.
How is your application set up to synchronize ALL of the activity that was going on at the time of an abend? Printed reports and QSAM output that are partway created when an abend occurs are often incomplete and need extra effort to be put back in sync. For example, if the job was 2/3 done, and the report carries cumulative totals, how do you deal with completing the report with the proper final totals?
The database will (almost) always be intact, because checkpoint/rollback have been very well tested in the database engine. The same is not necessarily true of the various applications that use the database system. Sometimes an application does not use a good checkpoint scheme, or the people doing the restart are not clear on how to restart - hence the "almost".
The 400 records may have been buffered - consider how many records fit into a physical block. To avoid this kind of situation, as well as the "final totals" issue in reports, some applications plan their restart to re-process the entire run from the beginning, completely re-creating the report(s) and output file(s). Database updates are resumed when the restart point is reached. You will need to determine what process(es) will work in ALL cases for your specifics.
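One common shape for such a checkpoint scheme - only a rough sketch here; the table, cursor, and data names are hypothetical and details vary by shop - is to commit the DB2 updates only at a fixed interval and to record the restart key first, so that after an abend the job can reposition from the last committed key and re-create the lost output instead of trusting partially flushed buffers:

```cobol
       PERFORM UNTIL SQLCODE = +100
           EXEC SQL FETCH CSR-ROWS INTO :WS-ROW END-EXEC
           PERFORM 2000-PROCESS-AND-WRITE
           EXEC SQL UPDATE MYTAB SET PROCESSED = 'Y'
                    WHERE KEY_COL = :WS-ROW-KEY END-EXEC
           ADD 1 TO WS-REC-COUNT
           IF WS-REC-COUNT >= WS-COMMIT-INTERVAL
      *        Save the restart key, then commit. An abend between
      *        commits rolls the DB2 updates back, so rows are never
      *        marked processed ahead of output that reached disk.
               EXEC SQL UPDATE CHKPT_TAB
                        SET LAST_KEY = :WS-ROW-KEY END-EXEC
               EXEC SQL COMMIT END-EXEC
               MOVE 0 TO WS-REC-COUNT
           END-IF
       END-PERFORM
```

On restart, the job would read CHKPT_TAB, reopen the cursor from LAST_KEY, and rewrite the output from that point.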