We have a requirement wherein we have one input file and two output files.
The input file has LRECL=300, RECFM=FB, and the layout looks like the one below:
67HXXXXXXXXXXX ---> length = 30 - Header
67LXXXXXXXXXXX ---> length = 30 - Leader
67D1XXXXXXXXXXXXXXXXX ---> length = 300 - Data
67SXXXXXXXXXXX ---> length = 30 - Summary
67TXXXXXXXXXXX ---> length = 30 - Trailer
(There can be any number of 67L-67D-67S sets between the 67H and 67T records.)
The records are processed sequentially. The 67H and 67T records and all 67L and 67S records are written to OUT-FILE-2. For each successfully processed (DB2-updated) 67D record, more than one output record is written to OUT-FILE-1. Every 67D record that is not processed successfully is written to OUT-FILE-2; that is, for each unsuccessful 67D record, the record itself is written to OUT-FILE-2.
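The routing rules above can be sketched roughly as follows (Python pseudocode; the function names are assumptions, and process_67d stands in for the real DB2 update logic):

```python
# Hypothetical sketch of the routing rules described above.
# process_67d() stands in for the real DB2 update logic; here it is
# assumed to return the list of derived output records on success,
# or None when the record cannot be processed.

def route_records(input_records, process_67d):
    out_file_1 = []  # derived records from successfully processed 67D records
    out_file_2 = []  # 67H/67T/67L/67S records plus unsuccessful 67D records
    for rec in input_records:
        tag = rec[:3]
        if tag in ("67H", "67T", "67L", "67S"):
            out_file_2.append(rec)
        elif tag == "67D":
            derived = process_67d(rec)
            if derived is not None:         # successful DB2 update
                out_file_1.extend(derived)  # more than one output record possible
            else:                           # unsuccessful: pass the record through
                out_file_2.append(rec)
    return out_file_1, out_file_2
```

This ignores commits and abends entirely; it only shows which record lands in which file.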
We have two restart scenarios:
1. Abend happens before first commit
2. Abend happens after at least one commit
1. Abend happens before the first commit: In this case, we might already have written a few records to both OUT-FILE-1 and OUT-FILE-2. When we restart the job, it starts processing from the top of the input file, and the records already written to OUT-FILE-1 and OUT-FILE-2 are posted to those files again. This causes duplicates and sometimes a mismatch in the 67L-67D-67S sets of records.
2. Abend happens after at least one commit: At the time of each commit we post all the counts, i.e., the record counts for the input file, OUT-FILE-1, and OUT-FILE-2, to a restart file. After a commit, more records could be written to both OUT-FILE-1 and OUT-FILE-2 before the program abends. In that case, during restart, we use the counts in the restart file and resume processing from that point. There is still a chance of duplicate records in the output files (duplicates of the records written between the last commit and the abend).
We discussed some possible options for avoiding the duplicates, but none seemed optimal.
Below are the two solutions that we thought might be suitable:
1. Use an in-storage array for each output file. We would write the output records to these arrays as we process, and at each commit we would write the arrays to the output files.
This option was ruled out because the number of output records for each successfully processed record is not constant, so the arrays cannot be sized reliably.
2. Use a temporary file for each output file. We would write the output records to the temporary files as we process, and at each commit we would copy the records from the temporary files to the respective output files.
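Option 2 can be sketched like this (Python pseudocode; the class and method names are assumptions, with Python lists standing in for the temporary and real files):

```python
# Hypothetical sketch of option 2: buffer output records in a temporary
# file and copy them to the real output file only at commit time, so an
# abend between commits leaves the real files free of uncommitted records.

class BufferedWriter:
    def __init__(self):
        self.pending = []  # records written since the last commit (the "temp file")

    def write(self, record):
        self.pending.append(record)      # goes only to the temporary file

    def flush_to(self, real_file):
        real_file.extend(self.pending)   # copy temp file -> real output file
        self.pending.clear()             # empty the temporary file

# usage: at each successful DB2 commit point, flush both buffers
out1, out2 = [], []
buf1, buf2 = BufferedWriter(), BufferedWriter()
buf1.write("derived record")
buf2.write("67D failed record")
# ... DB2 COMMIT succeeds here ...
buf1.flush_to(out1)
buf2.flush_to(out2)
```

The key point is that the real files only ever receive records whose DB2 updates have already committed.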
There are some other options we thought of and discussed; the two above scored better than the rest.
If any of you have a better approach, please share it with us.
I searched through a few materials and forums, but nothing seemed optimal (at least to me) for this requirement. If one of the above two is good, please advise whether any improvements can be made to it.
Joined: 23 Nov 2006 Posts: 19270 Location: Inside the Matrix
First - fix whatever causes the abends... If the code is so poor that it regularly abends, this should be considered critical.
Suggest you consider letting the job run without checkpoints so that it backs out completely if there is an abend. The status/content of the QSAM output files would not be a concern - they would be automatically deleted (DISP=(NEW,CATLG,DELETE)).
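The disposition suggested above might be coded roughly like this (a JCL sketch; dataset names, unit, and space values are placeholders):

```jcl
//OUTFILE1 DD DSN=MY.OUTPUT.FILE1,
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(10,10),RLSE),
//            DCB=(RECFM=FB,LRECL=300)
```

With DISP=(NEW,CATLG,DELETE), a normal end catalogs the file and an abend deletes it, so a rerun from the top recreates it cleanly with no duplicates to worry about.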
Joined: 20 Oct 2006 Posts: 6970 Location: porcelain throne
create a table - output_image.
instead of writing to sysout or dataset, output your report to a db2 table.
then your commits will always be in sync.
you have to design the output_image with a key of program, fd, counter, etc.
then after a successful eoj, dump the table (minus the keys).
you should have a 'restart' table, where each time you commit you have updated the restart table with the current running totals, etc. so if you are in restart mode, you overlay your working storage with data from the restart table.
involves some thought and design, but if you are not going to make your batch system 'abend proof', you best have a good restart procedure.
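The output_image and restart tables suggested above might look something like this (a DB2 DDL sketch; all table names, column names, and types are assumptions):

```sql
-- Hypothetical sketch of the suggested tables; names and types are assumptions.
CREATE TABLE OUTPUT_IMAGE
      (PROGRAM_NAME  CHAR(8)      NOT NULL,  -- which program wrote the row
       FD_NAME       CHAR(8)      NOT NULL,  -- logical file: OUT-FILE-1 or OUT-FILE-2
       REC_COUNTER   INTEGER      NOT NULL,  -- preserves the original write order
       REC_IMAGE     VARCHAR(300) NOT NULL,  -- the output record itself
       PRIMARY KEY (PROGRAM_NAME, FD_NAME, REC_COUNTER));

CREATE TABLE RESTART_CTL
      (PROGRAM_NAME  CHAR(8)  NOT NULL PRIMARY KEY,
       IN_COUNT      INTEGER  NOT NULL,  -- input records processed so far
       OUT1_COUNT    INTEGER  NOT NULL,  -- records "written" to OUT-FILE-1
       OUT2_COUNT    INTEGER  NOT NULL); -- records "written" to OUT-FILE-2
```

Because the inserts into these tables commit or roll back together with the DB2 business updates, the "files" can never get out of sync with the database; after a successful end of job, the OUTPUT_IMAGE rows are unloaded in REC_COUNTER order (minus the key columns) to the real output datasets.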
It is difficult to make it abend-proof, as we receive the input file from a vendor. Also, there is a chance the job abends due to contention, since we have other jobs running in parallel with this one.
(At the same time, we are not expecting abends very frequently....)
Meanwhile, we tried out the following restart logic and found it to work well.
In a normal run, we keep writing the output records, and at each commit point we update the restart file with the output record counts.
Say the job abended while processing the 85th input record, and 84 output records had been processed and written to the output file. Say the last commit happened at the 70th record. The DB2 updates for records 71-84 will be rolled back, but those records will still be present in the output file.
While restarting, we read the previous output file (from the abended run) up to the count saved in the restart file and write those records to a new file. From there on we continue processing and write the output to this new file.
This avoids duplicate records in the output file while still reprocessing the records that were rolled back.
The advantages: there is no need to change any GDG generation numbers while restarting, and the copying work is needed only on a restart. Also, this handles any number of abends in the same run of the job.
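The copy-forward step of this restart logic can be sketched like this (Python pseudocode; the function name is an assumption, and for simplicity the saved count is treated as a count of committed output records):

```python
# Hypothetical sketch of the restart logic described above: on restart,
# copy the old output file forward only up to the record count saved at
# the last commit, then resume processing into the new file from there.

def restart_copy(old_output, committed_count):
    """Seed a new output file with only the committed records."""
    # Records beyond committed_count were written after the last commit;
    # their DB2 updates were rolled back, so they are dropped here and
    # will be recreated when those input records are reprocessed.
    return old_output[:committed_count]

# Numbers from the example above: 84 output records written before the
# abend, last commit taken with 70 output records on file.
old_output = [f"OUTREC-{i:03d}" for i in range(1, 85)]   # 84 records
new_output = restart_copy(old_output, committed_count=70)
assert len(new_output) == 70   # records 71-84 are dropped, not duplicated
```

Processing then resumes from the corresponding input record, appending to new_output, so the rolled-back records are rebuilt exactly once.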
Hope the explanation is clear.
Please advise whether any other modifications to this would result in better performance.