I've got a job consisting of 54 steps, 18 of them executing a PGM, 36 of them executing a PROC, which in turn contains 26 steps executing a PGM... Needless to say, the job actually runs as four jobs, and that, at the moment, for 179 different input datasets... (So 716 jobs, ouch...)
Now it would be nice if I could reduce it to just 19 steps, merging the 36 PROCs into a nice looping bit of REXX, and some preliminary testing seems to confirm that this is indeed very well possible. However, the JCL uses a sh*tload of SET VAR=value statements, and I don't really want to pass sh*tloads of parameters to the REXX exec, so I've been trying to find a smart way of encoding them, as there doesn't(?) seem to be any way of actually accessing them via control blocks - which makes sense, as they're only needed at the JCL conversion stage.
FWIW, the job currently runs 36 versions of a program that produces 6 output files (it used to produce 4), and the steps following the EXEC PGM=version[NN] use SuperC to zap the output datasets of "version[NN-1]" if they are the same as the current ones, or save them into a PDS(E). Here is the abbreviated JCL:
As you can see, the number of SET VAR=value statements is pretty large, and even more (my default libraries) are included via the "INCLUDE MEMBER=IMOI" member. I could obviously put all of them into a text member that is read by the replacement REXX exec, but that would mean I need to update two datasets if I ever want to port these jobs to another system. I have thought about a dirty trick: adding a
Code:
//DUMMY DD DSN=&CNTL(this-member),DISP=SHR
and reading it back in inside the replacement REXX exec, parsing out the relevant SET statements.
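Something like this minimal sketch is what I have in mind - assuming the member only contains plain // SET VAR=value statements (no continuations, no comments, no quoted values):
Code:
/* REXX - minimal sketch: read the JCL member back through the    */
/* DUMMY DD and turn its SET statements into REXX variables.      */
/* Deliberately naive parsing, for illustration only.             */
address tso "EXECIO * DISKR DUMMY (STEM jcl. FINIS"
do j = 1 to jcl.0
   if word(jcl.j, 2) \== 'SET' then iterate   /* only // SET ...  */
   parse var jcl.j . 'SET' assigns
   do while assigns \= ''
      parse var assigns sym '=' val ',' assigns
      call value strip(sym), strip(val)   /* VAR becomes a REXX var */
   end
end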
Any other thoughts on this?
I'm also curious about the efficiency of running this as a straight batch job (as it is at the moment) versus as a job that runs ISPF in batch. At one site another smart-ass programmer thought using an edit macro in batch was an ideal way to massage some output, but after just one run in production that approach was abandoned due to excessive CPU usage. Here the REXX exec would only be allocating files (rather a lot...) and running the program to be tested, ISRSUPC and SORT, albeit under ISPF...
And as a PS: the current JCL member is submitted by an edit macro that updates the three 'XXX' strings to 001..178 (the 178 input files), with a slight delay (0.5 sec) between the four jobs that make up one test, to avoid a later job starting ahead of an earlier one. A sketch of that macro follows.
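For the curious, the macro boils down to something like this minimal sketch - assuming SUBMIT works as an edit macro command in this setup, ignoring how the real macro paces the four jobs within one test, with a whole-second SYSCALL sleep standing in for the 0.5 sec delay, and assuming the number occurs nowhere else in the member when changing it back:
Code:
/* REXX ISPF edit macro - minimal sketch of the submit loop,      */
/* run against the edit session holding the JCL member.           */
address isredit "MACRO"
call syscalls 'ON'                   /* enable ADDRESS SYSCALL     */
do n = 1 to 178
   nnn = right(n, 3, '0')
   address isredit "CHANGE 'XXX' '"nnn"' ALL"  /* the three XXXs   */
   address isredit "SUBMIT"                    /* submit as edited */
   address isredit "CHANGE '"nnn"' 'XXX' ALL"  /* put the XXXs back */
   address syscall "sleep 1"         /* stand-in for the 0.5 s wait */
end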
In the replacement REXX exec I can also parse the DSN obtained via LISTDSI(), and with REXX it's obviously far easier to allocate different SYSIN members for ISRSUPC for those n+1 versus n compares where I need to mask "irrelevant" changes.
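A minimal sketch of that bit, assuming the DUMMY DD from above is allocated - LISTDSI with the FILE option fills in SYSDSNAME for a ddname, and the MASKnn member names are purely illustrative:
Code:
/* REXX - minimal sketch: recover the control library's DSN from  */
/* the DUMMY DD, then point ISRSUPC's SYSIN at a compare-specific */
/* process-statement member. MASKnn member names are made up.     */
if listdsi('DUMMY FILE') = 0 then cntldsn = sysdsname
else exit 12                         /* DD not there - give up     */
do n = 2 to 36                       /* version n versus n-1       */
   mem = 'MASK'right(n, 2, '0')      /* hypothetical member names  */
   address tso "ALLOC F(SYSIN) DA('"cntldsn"("mem")') SHR REU"
   /* ... ALLOC NEWDD/OLDDD/OUTDD and invoke ISRSUPC here ...      */
end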
I find that I'm using "that" system a lot less than the same 1.10 system you also seem to be using.
"alloc f("ll.i") da("!g.0dsn_n.i") " ||,
"new reu " ||,
"dsorg(ps) " ||,
"space(180,60) " ||,
"recfm(f b) lrecl(121) blksize(0)"
The JCL-allocated and the REXX-allocated datasets should be equivalent, but they are not!
The dataset allocated in the JCL is not opened until version 36 of the program, and at that stage it's compared to the previous version's dataset (which of course was never opened). In both cases ISRSUPC (SuperC) doesn't care: it produces a file with just I(nserts) as differences and ends with RC=1, which signals that the process needs to replace the old file, and that is done using SORT.
And there the difference shows up: in the JOB, SORT falls over with RC=16
Code:
ICE043A 3 INVALID DATA SET ATTRIBUTES: SORTIN BLKSIZE - REASON CODE IS 12
as the dataset was never opened.
However, the dataset allocated in the REXX exec (both with and without the DSORG(PS)) is OK, and SORT copies an empty dataset.
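In case anyone wants a belt-and-braces workaround rather than an explanation: a minimal sketch of a guard in front of the SORT, assuming (and that is an assumption, not a diagnosis) that a never-opened dataset comes back from LISTDSI() with SYSBLKSIZE = 0; olddsn is a hypothetical stand-in for the version n-1 output dataset:
Code:
/* REXX - minimal sketch of a guard in front of the SORT call.    */
/* Assumption: a never-opened dataset reports SYSBLKSIZE = 0.     */
/* olddsn is hypothetical - the version n-1 output dataset.       */
lrc = listdsi("'"olddsn"'")          /* fills SYSBLKSIZE et al.    */
if lrc <= 4 & sysblksize > 0 then
   address tso "CALL *(SORT)"        /* DDs are already allocated  */
else
   say 'No usable BLKSIZE - skipping SORT for' olddsn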