Create dynamic gdg generations based on merged input file


IBM Mainframe Forums -> DFSORT/ICETOOL
Aditya Prakash Joshi (New User)
Posted: Tue May 20, 2014 8:13 am

Hi All,
I have an input file whose attributes and records are as below:

Attributes:

Code:

Organization  . . . : PS   
Record format . . . : VB   
Record length . . . : 2004
Block size  . . . . : 27998


Records:

Code:

           2014052020140521 ABCDEFGH14133003         
  Ø        BP  @ %  IB ð  VTS  * <  Ø  à
  c*  @    DC  @ %  IB ð  VTS  * <  c* Ë
  @ q°@ ð  DE  j¸  @  <1405130016798991 % @1
           000000000000005  ã  oeð  oeý     
           2014052020140521 ABCDEFGH14133002         
  <        AP  * <  j  æ @   é 
  j       AP *  * <  <  æ @   é 
  < ÁΠ    AP  < * <  < h @ * æ @   é 
           000000000000005   kÈ  åË&  åË&     
           2014052020140521 ABCDEFGH14133001         
  Í  n*    PC @  Í  * <  Í  r * æ æ @   é 
  Í  m     PC @  p_  Í  * <  Í  r * æ æ @   é 
  @ q°@ ð  DE  @  <1405120016788573 % @1
           000000000000005   Øp  ÍmÍð  ÍmÍý 


The input file is actually a merged output file, created by running a sort on a GDG base which had 3 generations.
The sort changed the header of all the input generations to the dates which were passed dynamically to it.

Now the current header for File 1 in the example is 2014052020140521 ABCDEFGH14133003.
ABCD, with a starting location of 52, can be used to identify a header record.

The trailer record for File 1 in the example is 000000000000005  ã  oeð  oeý.
00, with a starting location of 35, can be used to identify a trailer record.

The records between the header and the trailer are detail records.

The file is a VB file, so to sort it the locations need 4 bytes added for the RDW (the locations quoted above already include these 4 bytes).
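For example (an illustrative card only, not part of my actual job): a copy step that picks out header records would test the identifier at location 52, since positions on this VB file count the 4-byte RDW:

Code:

* ILLUSTRATION ONLY: POSITIONS ON THIS VB FILE INCLUDE THE
* 4-BYTE RDW, SO THE 'ABCD' TAG (DATA COLUMN 48) IS AT 52
 SORT FIELDS=COPY
 OUTFIL INCLUDE=(52,4,CH,EQ,C'ABCD')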

Query: Since the above input file has 3 merged generations, I need a sort card which will split this input file into 3 generations and create them dynamically.
In the example the input file is a merged file which had 3 generations, so 3 generations of the GDG should be created.

The merge depth of the input file may vary; the same input file could have 5 or 10 generations merged into it as well.
The GDG generations should be created automatically, to match the number of merged generations in the input file.

I have tried to be as brief and lucid as possible. I can explain further if need be.

Would be really glad if anybody can help in this matter. Thanks in advance. Cheers.
Bill Woodger (Moderator Emeritus)
Posted: Tue May 20, 2014 10:54 am

Can you show the SORT step which created the file?

I can't see that this data is MERGEd. Do you mean concatenated?

Why has the data been "put together" only to be required to be split again? Is the data updated after it is put together?
dick scherrer (Moderator Emeritus)
Posted: Tue May 20, 2014 6:45 pm

Hello and welcome to the forum,

As Bill asks: why is the data combined only to be separated again?

Please explain what business requirement this supports. Please do NOT say "it is the requirement". Just because someone has chosen this route does not mean there is a business requirement to do so.

If you explain the process, we may have other suggestions.
Aditya Prakash Joshi (New User)
Posted: Wed May 21, 2014 4:41 am

Business Scenario:

In our installation we have the mainframe core batches running in all the regions (dev, system and uat); the mainframe, as a HOST, receives files from system A on a daily basis.
These files are added to a GDG base every day, so there are a lot of them. Each generation of this GDG has a header record, detail records and a trailer record, as I have already shown.

Today being 21st May 2014, if a file 1 from system A is sent to the host, it will create a new generation on the GDG base containing data like the below:
Code:

.....................         2014052120140522 ABCDEFGH14133003                 
...Ø.......                   BP..............@.%..IB.ð..VTS .*.<...Ø..àÎ*...æ.*
...c*...@..                   DC..............@.%..IB.ð..VTS .*.<...c*.Ëé@...æ.*
....@.q°@.ð                   DE........j¸....@...<1405130016798991.%.@1098     
.....................         000000000000005  ã........oeð.....oeý


Again, if a file 2 from system A is sent to the host, it will create a new generation on the GDG base containing data like the below:
Code:

.....................         2014052120140522 ABCDEFGH14133002                 
....<......                   AP................*.<......jÂ....æ.@.. é..æ.... é.
......jÂ...                   AP.*..............*.<....<.......æ.@.. é..æ.... é.
....<.ÁÎ...                   AP..............<.*.<....<.h.@.*.æ.@.. é..æ.... é.
.....................         000000000000005  .kÈ.....åË&.....åË&.


and so on...
The dates in the header are:

20140521 - current processing date
20140522 - next business day

Many files like the above are sent to the host from system A daily, and they go on accumulating on the GDG base on the host.

We have two cases here:

Case 1 - During an actual run of the core mainframe batch (say we plan to run the core batch today, 21st May 2014), the files have the correct dates on them and the core jobs which process these files run to a good end of job.

Case 2 - If we plan to skip the core batch for today, the files lie as they are on the GDG base, unprocessed.

If we also skip the batch tomorrow, i.e. on 22nd May, files accumulate with a change in the header, as below:

2014052220140523 ABCDEFGH14133030

Now if we plan to skip the batch for the entire week and execute it on 26th May 2014, which is a working day and a Monday, then for all the files to be processed correctly they should have the header dates as: 2014052620140527 ABCDEFGH14133003


Current work which I am doing:

So that the dates on all the generations are the current and next business dates, I have to manually edit the files and change the dates on the header of each file to the current processing date and the next business date. The more days we skip the batch, the more files there are.

My approach to do this automatically:

1. Include the member PROCNBUS to get the dates and pass them to the UTVSCTL utility to build the dynamic sort cards.
2. Give the GDG base as input to the sort, concatenating all GDG generations into a single file, and run the sort on it to change the processing date on all header records.
3. Give the output file of step 2 as input to step 3 and run the sort on it to change the next business date on all header records.
4. The output of step 3 is a concatenated file with all the dates in sync.
5. A sort card is needed here to split this concatenated input file and create one GDG generation per concatenated file, each with the correct dates, ready for the core batch jobs to process.

JCL:
Code:

//AAAAAA01 JOB ,'NO BATCH AUTOMATION',CLASS=H,MSGCLASS=A,NOTIFY=AAAAAA 
//*                                                                     
//* INCLUDING THE PROCESSING AND THE NEXT BUSINESS DATES IN THE JOB     
//JCLLIB   JCLLIB ORDER=AAAAAA.JCL.UTILITY                             
//PDATES   INCLUDE MEMBER=PROCNBUS                                     
//* IDCAMS PRE-DELETE STEP TO DELETE THE DYNAMICALLY CREATED FILES     
//DELETE  EXEC PGM=IDCAMS                                               
//SYSPRINT  DD SYSOUT=*                                                 
//SYSIN DD *                                                           
 DELETE 'AAAAAA.JCL.UTILITY.SORTCARD.PROCDT'                           
 DELETE 'AAAAAA.JCL.UTILITY.SORTCARD.NBUSDT'                           
 DELETE 'AAAAAA.OUTPUT.FILE.PROCDT'                                     
 DELETE 'AAAAAA.OUTPUT.FILE.NBUSDT'                                     
/*                                                                     
//******************************************************************   
//*UTVSCTL - SYMBOLIC SUBSTITUTION TO CHANGE THE DATE DYNAMICALLY  *   
//*TO CREATE A NEW SORT CARD EACH TIME FOR PROCESSING DATE         *   
//******************************************************************   
//SUB1     EXEC  PGM=UTVSCTL,                                         
//     PARM=('PROCDT=&PROCDT.')                                       
//CARDIN   DD DSN=AAAAAA.JCL.UTILITY(DYNCARD1),DISP=SHR               
//CARDOUT  DD DSN=AAAAAA.JCL.UTILITY.SORTCARD.PROCDT,                 
//            DISP=(NEW,CATLG,DELETE),                                 
//            DCB=(LRECL=80,RECFM=FB),                                 
//            SPACE=(TRK,(1,1),RLSE)                                   
//*                                                                   
//******************************************************************   
//*UTVSCTL - SYMBOLIC SUBSTITUTION TO CHANGE THE DATE DYNAMICALLY  *   
//*TO CREATE A NEW SORT CARD EACH TIME FOR NEXT BUSINESS DATE      *   
//******************************************************************   
//SUB2     EXEC  PGM=UTVSCTL,                                         
//     PARM=('NBUSDT=&NBUSDT.')                                       
//CARDIN   DD DSN=AAAAAA.JCL.UTILITY(DYNCARD2),DISP=SHR               
//CARDOUT  DD DSN=AAAAAA.JCL.UTILITY.SORTCARD.NBUSDT,                 
//            DISP=(NEW,CATLG,DELETE),                                 
//            DCB=(LRECL=80,RECFM=FB),                                 
//            SPACE=(TRK,(1,1),RLSE)                                   
//*                                                                   
//*********************************************************************
//* STEP TO CONCATENATE ALL THE GDG GENS INTO A SINGLE FILE         ***
//*********************************************************************
//*                                                                   
//STEP20  EXEC PGM=SORT,REGION=1024K                                   
//SYSOUT    DD SYSOUT=*                                               
//SORTIN    DD DSN=AAAAAA.WON8.S.U06B.STXNS,DISP=SHR                   
//SORTOUT   DD DSN=AAAAAA.OUTPUT.FILE.PROCDT,                         
//             DISP=(NEW,CATLG,DELETE),UNIT=SYSDA,                     
//             SPACE=(CYL,(1,4),RLSE),                                 
//             DCB=(RECFM=VB,LRECL=2004,BLKSIZE=0)                     
//SYSIN     DD DSN=AAAAAA.JCL.UTILITY.SORTCARD.PROCDT,DISP=SHR         
//*                                                                   
//*********************************************************************
//* USE THE FILE CREATED IN STEP 20 TO CREATE THE FINAL FILE        ***
//*********************************************************************
//*                                                                     
//STEP30  EXEC PGM=SORT,REGION=1024K                                   
//SYSOUT    DD SYSOUT=*                                                 
//SORTIN    DD DSN=AAAAAA.OUTPUT.FILE.PROCDT,DISP=SHR                   
//SORTOUT   DD DSN=AAAAAA.OUTPUT.FILE.NBUSDT,                           
//             DISP=(NEW,CATLG,DELETE),UNIT=SYSDA,                     
//             SPACE=(CYL,(1,4),RLSE),                                 
//             DCB=(RECFM=VB,LRECL=2004,BLKSIZE=0)                     
//SYSIN     DD DSN=AAAAAA.JCL.UTILITY.SORTCARD.NBUSDT,DISP=SHR         

Member PROCNBUS is as below:
Code:

//PROCDT SET PROCDT='20140521'
//NBUSDT SET NBUSDT='20140522'

Sort card DYNCARD1 is as below:
Code:

 SORT FIELDS=COPY
 INREC IFTHEN=(WHEN=(52,4,CH,EQ,C'AAAA'),
       OVERLAY=(35:C'&PROCDT.'))

Sort card DYNCARD2 is as below:
Code:

 SORT FIELDS=COPY
 INREC IFTHEN=(WHEN=(52,4,CH,EQ,C'AAAA'),
       OVERLAY=(43:C'&NBUSDT.'))


@Bill: The data is not merged, it is concatenated; I apologize for using the wrong word. There is no specific reason for the data to be put together. The data is not updated in any way except for the change in the header dates, which is what we desire.
@Dick: I have listed out the requirement and my attempt at it in detail. Let me know if you need anything further to understand the issue.

I thank you both in advance for taking the time to understand this. Cheers.
Aditya Prakash Joshi (New User)
Posted: Wed May 21, 2014 4:45 am

The sort cards DYNCARD1 and DYNCARD2 check for the characters AAAA in the input record. The check should be for ABCD, as we have the characters ABCD starting at position 52; I forgot to change them, hence this clarification in advance. Cheers. Many thanks.
Bill Woodger (Moderator Emeritus)
Posted: Wed May 21, 2014 12:04 pm

Thanks for the extensive update.

No, you shouldn't really be doing it like that.

Somewhere you/the client have an Audit department. You may also have a Compliance department. You definitely have an Accounting department. You may have other interested parties. At least two of these will be very keen that you do not do it that way.

There is a purpose to the business date on the header records. It is to (attempt to) ensure that the same data is not processed twice or that otherwise "unexpected" data does not get processed.

You cannot circumvent that process by just changing the date on the header record. If you tell that to someone from the above departments, their jaws should drop, and then they'll either treat it as a good joke, or they'll go all pale thinking that you may mean it, and it may even be already implemented.

You do have a requirement, which is in itself acceptable (assuming the above are aware of it), although I've not come across it that way before (a week with no processing? A government client?).

If it is a valid business requirement, you need to build it into your processing system. Where the header is processed, there needs to be some mechanism included to allow for this, only when required: to take a specific file from a specific data/sequence and process it on the business date for the current run.

It must not allow the processing of a random file with an earlier date, or any file with a later date, but it must allow (potentially) multiple instances where there is specific data which requires its business date to be "overridden".

Where there is such data, the fact of its acceptance needs to be reported, both along with the original acceptance report, and on a new exceptions report (which should appear daily, with a line saying "no exceptions to report" if that is the case).

Someone (senior) from at least one of the above departments and at least one other department needs to sign the processing and exception reports when there are exceptions.

Unless the system is something trivial (stock-maintenance for your company's stationery cupboards, or suchlike) I don't regard any short-cuts as being possible.

I hope you have a pile of signatures when you edit the dates manually.

I hope that they do not let you edit the dates automatically. It is so error-/fraud-prone. You have (some level of) controls on your data, and then you just ignore them. Can't be done like that.
Aditya Prakash Joshi (New User)
Posted: Thu May 22, 2014 3:59 am

Dear Bill,

This change is for the lower test environments. I am aware that the current processing date and the next business date have a purpose.
The files carry these dates on the header; the programs which process the files do not fail as long as all the GDG generations have the same processing and next business dates on the header record, but they will certainly fail if there is a mismatch.

And with regard to the same data, there are other ways in which this is handled.

I am not trying to circumvent any existing process, as there is none at the moment: there is no process yet to automatically bring all the headers in all the files into sync in the lower test regions. The prod environment is a different case, as such a scenario should never occur there; the production batches execute daily and skipping them is out of the question, so there is no date difference on the headers of the GDG generations and this is not required.

Batches are skipped in the lower regions for many reasons which I cannot mention here, but trust me, they are skipped.
And when this happens, files accumulate on the GDG and I have to manually change the dates in all the headers of the files so that they are ready to be picked up by the core batch whenever it is planned.

The requirement is clear, and I have arrived at a solution of concatenating all the files into a single dataset and changing the processing and next business dates along the way. It's just that I do not know how to split the input file on a condition, so that we get 2 generations if 2 files were concatenated into the input file, and n generations if n files were concatenated. Please let me know if this is possible, or if there is a better approach.

Your opinion is welcome and I am looking forward to your advice. Many thanks in advance.
Bill Woodger (Moderator Emeritus)
Posted: Thu May 22, 2014 4:33 am

OK, thanks again for the update.

I think you are only concatenating the data as a convenient way to process it all in one step. What is inconvenient about that approach is splitting it afterwards: if there is no processing need for the concatenation, why do it, only to want to undo it immediately?

Of course, doing things differently lower down the chain than the way they are done in Production introduces potential issues, but you are aware of them.

I'd use LISTCAT to list all the absolute generation numbers, parse the output (Rexx or SORT or anything you like) and generate multiple (if needed) jobs/steps with one file in/one file out. You can consider using the INTRDR to process the JOB(s) or SUBMIT them from a dataset.
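Something like this for the LISTCAT step (untested; the dataset names are just examples based on yours, and the parse and JOB generation would follow in Rexx or a further step):

Code:

//LISTC    EXEC PGM=IDCAMS
//* WRITE THE CATALOG LISTING TO A DATASET SO A LATER STEP
//* CAN PARSE OUT THE ABSOLUTE GENERATION NAMES (GXXXXVYY)
//SYSPRINT DD DSN=AAAAAA.LISTCAT.OUTPUT,
//            DISP=(NEW,CATLG,DELETE),UNIT=SYSDA,
//            SPACE=(TRK,(5,5),RLSE)
//SYSIN    DD *
  LISTCAT LEVEL(AAAAAA.WON8.S.U06B.STXNS) NAME
/*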

I think this will be a lot neater, require no "clean up" (which using multiple OUTFILs, some left without data, potentially, would require) and since it is/can be automated/semi-automated it should not be a problem that there are multiple jobs/steps.

Are you going to use the same GDG for the output?
Aditya Prakash Joshi (New User)
Posted: Thu May 22, 2014 5:03 am

Thanks for getting back, Bill. I appreciate it.

Yes, I am concatenating the data as a convenient way to process it all in one step.

I agree, to an extent, with your statement:

"What is inconvenient about that approach is splitting it afterwards, in the sense that if there is no processing need for the concatenation, why do it, just to want to undo it immediately."

But, you see, we are actually successful in changing the processing and next business dates as per the requirement using the sort cards.

Now the only thing which remains is to split it and get the GDG generations.

Ideally, what I would do is:

1. Take the input GDG, create the concatenated file and add a step to delete all the generations of the input GDG.
2. Do our sort processing of changing the processing and next business dates on the concatenated file.
3. Take the same GDG as the output base and create the required generations based on the number of concatenated input files.

Step 3 sounds simple to say, but is quite tricky, and this is where I need help.

Could you please give an example of the LISTCAT you mentioned? I will go through it for sure. Could you also suggest a way to create the GDG generations dynamically using the current approach?

Many thanks.
Bill Woodger (Moderator Emeritus)
Posted: Thu May 22, 2014 11:59 am

Yes, step 3 is the tricky one. Which is why I'm suggesting not doing it like that.

From your description, it seems that you will have about 1-5 generations. Perhaps mostly one, rarely five (can't be more than 10 times per year, for instance, and probably many fewer).

Without generating the control cards, you'd need to code a fixed number of OUTFIL statements (say 10) and then have something after that step which identifies how many of the generations were actually used, and deletes those which weren't used.

If a physically-empty dataset were possible (shouldn't be as you have headers and trailers) then this would go "missing" in the concatenation/split process.

You'd have 1-5 extra JOBs or steps. But it would be simpler, and cleaner.

With what you have already, do you know how many datasets (generations) you have for a given run? If you know, you could avoid the tidy-up by generating the JCL and control cards: then you only need the OUTFILs which are actually required, obviating the clean-up. And, if the number is known, it also makes possible the generation of control cards only, for the simpler version.

Remember, if most times you are just missing out one batch, the processes are logically the same (one file in, one file out). Estimate how many times a year you will do multiple files. There should be no/little more manual work for multiple steps/JOBs, so the whole concatenate/split is only going to be relevant however-many-you-estimate times a year - what is the benefit you get?

If you don't want to generate the cards for the split (but still want the split) you have to do the LISTCAT/parse afterwards, and delete the unnecessary files.

To do the split, look at IFTHEN=(WHEN=GROUP,...) with BEGIN for your header identifier and END for your trailer identifier.

Because they are not in the same position on the record, and because there does not appear to be a data-identifier, you need to be happy that there can be no accidental hits. There shouldn't be, if the header is identified in the existing processing rather than just being the first record; the same goes for the trailer, identified vs merely last.
Aditya Prakash Joshi (New User)
Posted: Tue May 27, 2014 8:29 am

Bill,

Sorry, it took me a while to get back.

Coming to your query: when the batches are skipped, from what I have seen in this shop, the maximum number of generations the GDG base can have is 15 or 20.
If the batch is skipped for a very long time, the dates on system A are changed/reset manually to what is desired with respect to the host. For now we can safely assume that the generations will not grow beyond 20 (editing 20 files manually takes 5 minutes at a minimum).

Yes, a physically empty dataset is not possible (it will have a header and a trailer within it, along with detail records, even if only a few).

One more thing: for a single run, we cannot guarantee how many generations there will be, whether 2 or 5 or 8 or 12 (we have kept a cap of 20). Hence it is a bit difficult here.

It looks like, when you talk about the split, you mean splitting the sorted file with the corrected processing and next business dates.
Could you please show me the process of the split you suggested, using IFTHEN=(WHEN=GROUP,...) with BEGIN for the header identifier and END for the trailer identifier? I will try my hand at it.
Bill Woodger (Moderator Emeritus)
Posted: Tue May 27, 2014 12:18 pm

OK, if you have a maximum of 20 a day, and you want to deal with, say, five days of delays, you're going to have to have 100 OUTFIL statements.

You don't use SPLIT or its cousins, because that is not the way they work (check the documentation).

You use INCLUDE= on the OUTFIL.

The INCLUDE= tests an "ID" (a group-level sequence number that all records of a group can be marked with). First OUTFIL is INCLUDE= for the ID of one, second for two, etc.

To make the groups and get the ID:

Code:
 INREC IFTHEN=(WHEN=GROUP,
                 BEGIN=(identification of header),
                 END=(identification of trailer),
                 PUSH=(recordextension:ID=3))


recordextension depends on whether the records are V or F. You have V, so you have to extend at the front of the record:

Code:
 INREC IFTHEN=(WHEN=INIT,
                 BUILD=(1,4,3X,5))


This keeps an RDW (required for V), adds three blanks, copies all the data from input record position five to the end of the record.

Putting those two together will get something like this (untested):

Code:
 INREC IFTHEN=(WHEN=INIT,
                 BUILD=(1,4,3X,5)),
       IFTHEN=(WHEN=GROUP,
                 BEGIN=(identification of header),
                 END=(identification of trailer),
                 PUSH=(5:ID=3))


In the OUTFIL you will have to drop the ID, so each OUTFIL will need BUILD=(1,4,8).

You will need to allocate 100 DDnames in your JCL, with their DSNs coded as relative generations of the GDG.

You will need to, after your step runs, delete all the absolute generations which contain no data.

There are various ways to identify these, but if you want a count of OUTFILs used, you can add another OUTFIL (which will not affect the count) with REMOVECC,NODETAIL and TRAILER1=(5,3), something like that.
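Putting all of that together with your identifiers, something like this (untested; the positions assume your stated locations of 52 for ABCD and 35 for the trailer zeros, which include the RDW, shifted by three for the ID extension; the DDnames OUT01, OUT02 and COUNT are made up, and you would repeat the OUTFILs up to your cap):

Code:

* UNTESTED SKETCH. LOCATIONS 52 AND 35 (RDW-INCLUSIVE) BECOME
* 55 AND 38 AFTER THE 3-BYTE EXTENSION. EACH OUTNN DD POINTS
* AT A RELATIVE GENERATION OF THE GDG: (+1), (+2), ...
 INREC IFTHEN=(WHEN=INIT,
               BUILD=(1,4,3X,5)),
       IFTHEN=(WHEN=GROUP,
               BEGIN=(55,4,CH,EQ,C'ABCD'),
               END=(38,2,CH,EQ,C'00'),
               PUSH=(5:ID=3))
 OUTFIL FNAMES=OUT01,INCLUDE=(5,3,ZD,EQ,1),BUILD=(1,4,8)
 OUTFIL FNAMES=OUT02,INCLUDE=(5,3,ZD,EQ,2),BUILD=(1,4,8)
* EXTRA OUTFIL REPORTING THE HIGHEST ID (THE NUMBER OF GROUPS
* SEEN), FOR THE CLEAN-UP OF UNUSED GENERATIONS AFTERWARDS
 OUTFIL FNAMES=COUNT,REMOVECC,NODETAIL,TRAILER1=(5,3)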

By this time you may be saying "goodness, that's not really how I imagined it would be". It's why I think more steps/less code is a much better way to do it. However, the above outline should get you close.
Aditya Prakash Joshi (New User)
Posted: Tue Aug 19, 2014 5:11 am

Hi All, firstly apologies for my late reply. Here is what I tried out using File-AID in batch mode. It works well for me.

Here is the code to do that:

JCL:

Code:

//ABCDEF01 JOB ,'NO BATCH AUTOMATION',CLASS=H,MSGCLASS=A,NOTIFY=ABCDEF 
//*                                                                     
//* SET THE PROCESSING AND NEXT BUSINESS DATES ON THE SET STATEMENT     
//*                                                                     
//         SET PRCDATE='20140819'                                       
//         SET NBDATE='20140820'                                       
//* IDCAMS PRE-DELETE STEP TO DELETE THE DYNAMICALLY CREATED CARDS     
//DELETE  EXEC PGM=IDCAMS                                               
//SYSPRINT  DD SYSOUT=*                                                 
//SYSIN DD *                                                           
 DELETE 'ABCDEF.JCL.DATECRD1'                                           
 DELETE 'ABCDEF.JCL.DATECRD2'                                           
/*                                                                     
//SUB1     EXEC  PGM=UTVSCTL,                                           
//     PARM=('PRCDATE=&PRCDATE.','NBDATE=&NBDATE.')                     
//CARDIN   DD DSN=ABCDEF.JCL.UTILITY(DTCARD1),DISP=SHR                 
//CARDOUT  DD DSN=ABCDEF.JCL.DATECRD1,                                 
//         DISP=(NEW,CATLG,DELETE),                                     
//         DCB=(LRECL=80,RECFM=FB),                                   
//         SPACE=(TRK,(1,1),RLSE)                                     
//SYSPRINT DD SYSOUT=*                                                 
//SUB2     EXEC  PGM=UTVSCTL,                                         
//     PARM=('PRCDATE=&PRCDATE.')                                     
//CARDIN   DD DSN=ABCDEF.JCL.UTILITY(DTCARD2),DISP=SHR                 
//CARDOUT  DD DSN=ABCDEF.JCL.DATECRD2,                                 
//         DISP=(NEW,CATLG,DELETE),                                   
//         DCB=(LRECL=80,RECFM=FB),                                   
//         SPACE=(TRK,(1,1),RLSE)                                     
//SYSPRINT DD SYSOUT=*                                                 
//* FILEAID STEP TO REPLACE THE PDATE AND NBD IN ALL THE GENERATIONS   
//* OF THE GDG - ABCDEF.S.U06B.STXNS.G*                                 
//* * DENOTES THE REGION  - "N" FOR INTG, "C" FOR SYST, "U" FOR UAT   
//FILEZZ1  EXEC  PGM=FILEAID,COND=EVEN                                 
//SYSPRINT DD    SYSOUT=*                                             
//SYSLIST  DD    SYSOUT=*                                             
//SYSTOTAL DD    SYSOUT=*                                             
//* THE NO OF DD** BELOW SHOULD BE EQUAL TO THE NO OF GENERATIONS     
//* WHICH ARE THERE ON THE GDG BASE. ADD OR DELETE AS APPROPRIATE     
//DD00     DD    DSN=ABCDEF.S.U06B.STXNS(0),DISP=SHR                   
//DD01     DD    DSN=ABCDEF.S.U06B.STXNS(-1),DISP=SHR                 
//DD02     DD    DSN=ABCDEF.S.U06B.STXNS(-2),DISP=SHR                 
//DD03     DD    DSN=ABCDEF.S.U06B.STXNS(-3),DISP=SHR                 
//SYSIN    DD    DSN=ABCDEF.JCL.DATECRD1,DISP=SHR                     
//*                                                                     
//* FILEAID STEP TO REPLACE THE PDATE IN THE CURRENT GENERATION         
//* OF THE GDG - ABCDEF.S.C02B.TRANS.CNTL(0)                             
//* * DENOTES THE REGION  - "N" FOR INTG, "C" FOR SYST, "U" FOR UAT     
//FILEZZ2  EXEC  PGM=FILEAID,COND=EVEN                                 
//SYSPRINT DD    SYSOUT=*                                               
//SYSLIST  DD    SYSOUT=*                                               
//SYSTOTAL DD    SYSOUT=*                                               
//DD00     DD    DSN=ABCDEF.S.C02B.TRANS.CNTL(0),DISP=SHR               
//SYSIN    DD    DSN=ABCDEF.JCL.DATECRD2,DISP=SHR                       
//*                                                                     


DTCARD1 - finds the string XYZABCDE in the file and changes the dates to those passed in PRCDATE and NBDATE:
Code:

************************************************************************
* THIS CARD CONTAINS THE PDATE AND NBDATE                               
************************************************************************
$$DD00 UPDATE STOP=(005,NE,X'00000000'),                               
            REPL=(52,EQ,C'XYZABCDE',35,C'&PRCDATE.'),                   
            REPL=(52,EQ,C'XYZABCDE',43,C'&NBDATE.')                     
$$DD01 UPDATE STOP=(005,NE,X'00000000'),                               
            REPL=(52,EQ,C'XYZABCDE',35,C'&PRCDATE.'),                   
            REPL=(52,EQ,C'XYZABCDE',43,C'&NBDATE.')                     
$$DD02 UPDATE STOP=(005,NE,X'00000000'),                               
            REPL=(52,EQ,C'XYZABCDE',35,C'&PRCDATE.'),                   
            REPL=(52,EQ,C'XYZABCDE',43,C'&NBDATE.')                     
$$DD03 UPDATE STOP=(005,NE,X'00000000'),                               
            REPL=(52,EQ,C'XYZABCDE',35,C'&PRCDATE.'),                   
            REPL=(52,EQ,C'XYZABCDE',43,C'&NBDATE.')                     


DTCARD2 - replaces the processing date at position 92 in the current generation of the control file:

Code:

************************************************************************
* THIS CARD CONTAINS THE PDATE                                         
************************************************************************
$$DD00 UPDATE REPL=(92,C'&PRCDATE.')                                   


The date cards are created dynamically each time, from the dates passed in the SET statements, by the UTVSCTL utility.

In step FILEZZ1, the dates are changed automatically for the generations specified on the DDnn statements. (Passing the GDG base name to File-AID does not work, as concatenated DDs are not supported in File-AID. Since this is not a regular thing and needs doing only when we skip the core batch, we did not bother about the number of generations: just look at the generations in 3.4 and change your job and cards accordingly.)
In step FILEZZ2, the dates are changed for the current generation.

Files are as below:

ABCDEF.S.U06B.STXNS(0), Record format . . . : VB (date starts at column 31 and string XYZABCDE at column 48)

Code:

----+----1----+----2----+----3----+----4----+----5----+----6----+----7--
***************************** Top of Data ******************************
                                          2014081120140812 XYZABCDE13330003         
  < Í %                     AP *  < * <  * Ç ð  æ @  %
  <  æ                     AP *  -  < * <  <  a@  æ @  %
  æ Ë                      AP  <  < * <  <  *  æ @  %
  <  *                     AP *  (  < * <  æ Ë   æ @  %
  < á @                     AP *  < * <  Ç  @  æ @  %
  < á @                     AP  < * <  < á @  æ @  %
  < á @                     AP *  < * <  < á @  æ @  %
  < á @                     AP *  &  < * <  ° ð  æ @  %



ABCDEF.S.C02B.TRANS.CNTL(0), Record format . . . : FB (dates need to start from column 92)

Code:


-3----+----4----+----5----+----6----+----7----+----8----+----9----+----0
***************************** Top of Data ******************************
                        ÑÊÅ            Y  g%  ÑÊÅ  @                         20140811
                        <  ß@           Y  ð  ß@                                20140811
                        <  mb @           Y  æ  mb @                        20140811
                        %           Y  <  %  *                                   20140811
                        ð  k             Y  @  k  *                                20140811
                        ìi*           Y  *  ìi*  @                                   20140811
ED.20140808.134146                 Y                                       20140811
ED.20140808.134223                 Y                                       20140811
ED.20140808.134234                 Z                                       20140811
                        ßÊ             N  È<  ßÊ  Î<                             20140811



Please make appropriate changes to the file positions when you code your cards; in the samples above I have tried to show the fields at their actual positions in the file. Hope this helps. Cheers.