You can specify the disposition as cataloged for both normal and abnormal terminations, so you can still check your output. If tracing the problem through the programs is too tedious, tools such as Abend-Aid can help you locate these errors easily. You can also check the job class.
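For illustration, a DD statement along these lines keeps the data set in both cases (the data set name is a placeholder, not from the original job):

```jcl
//* NEW,CATLG,CATLG: the second value catalogs the data set on normal
//* step end, the third on abnormal end, so output survives an abend.
//OUTFILE  DD DSN=MY.OUTPUT.FILE,
//            DISP=(NEW,CATLG,CATLG),
//            SPACE=(CYL,(5,5),RLSE),
//            RECFM=FB,LRECL=80
```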
For my $.02, the operator should not normally cancel jobs. If a job needs to be canceled for some reason, the data usually needs to be re-run from the beginning.
In the same scenario, if I am updating the database for each record read from the input file and committing after each update, there can be a mismatch between the database updates that have happened and the number of records written to the output file.
For each input record read, I thought of opening, writing, and closing the output/error files. It seems this will work.
I need some other way to handle S222/S522 abends in the above scenario, whether from COBOL code or from some other utility.
Joined: 12 May 2008 Posts: 3 Location: Pune, India
I don't think it is a good idea to open/close the file for each record if you are concerned about the performance of the job.
When you say there is a mismatch between the records updated in the database and the records written to the file, is there any consistency to it? What I mean is: is the number of records written to the file always less, more, one less, etc.?
Explicitly closing files is not required, though it is recommended. So losing the entire output (if it was already written to the file) due to a job failure seems strange.
If you have the setup available in test, I would recommend putting a DISPLAY after each WRITE statement executes, then cancelling the job. Then cross-check the displays against the actual records in the file.
As far as the database updates are concerned, they should not be lost if your checkpoint logic is correct.
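As a rough sketch of what such checkpoint logic might look like (names like COMMIT-COUNT, WS-COMMIT-FREQ and WS-RESTART-KEY are illustrative, not from the poster's program), a common pattern is to commit every N records and save a restart key, so an abend loses at most one commit interval:

```cobol
      * Illustrative checkpoint pattern, not the poster's actual code.
       PROCESS-ONE-RECORD.
           READ INPUT-FILE INTO WS-IN-REC
               AT END SET END-OF-INPUT TO TRUE
           END-READ
           IF NOT END-OF-INPUT
               PERFORM UPDATE-DATABASE-ROW
               WRITE OUT-REC FROM WS-OUT-REC
               ADD 1 TO COMMIT-COUNT
               IF COMMIT-COUNT >= WS-COMMIT-FREQ
      *            Commit the DB work and note where to restart from.
                   EXEC SQL COMMIT END-EXEC
                   MOVE WS-IN-KEY TO WS-RESTART-KEY
                   MOVE ZERO TO COMMIT-COUNT
               END-IF
           END-IF.
```

On restart, the program would read WS-RESTART-KEY from wherever it was checkpointed and skip input records up to that key before resuming updates.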
As I recall, the system abend routines close all open files, thereby flushing the buffers and making all O/P data available at EOJ. Use ,CATLG,CATLG as Hari suggested and you should be able to access your data.