Rec count for multiple flat files - PS


IBM Mainframe Forums -> JCL & VSAM
Skolusu

Senior Member


Joined: 07 Dec 2007
Posts: 2205
Location: San Jose

PostPosted: Fri May 06, 2011 10:28 pm

kitchu84,

The following DFSORT/ICETOOL JCL will give you the desired results. I assumed that the list file has the DSN name in the first 44 bytes. I also limited the job to 1,000 datasets, even though the maximum number of DD statements per job step is 3273. This job takes care of both FB and VB files.

The JCL is generated in step0200. Look at the output from SORTOUT of that step; it should contain the JCL needed to count the records from each file (a sketch of what that generated job stream looks like follows the JCL below). If the generated JCL looks good, change the following statement and resubmit the job.

Code:

//SORTOUT  DD SYSOUT=*


to

Code:

//SORTOUT  DD SYSOUT=(*,INTRDR)


Code:

//STEP0100 EXEC PGM=SORT   
//SYSOUT   DD SYSOUT=*     
//SORTIN   DD DSN=your 150 byte FB file with DSN names,DISP=SHR
//DDN      DD DSN=&&D,DISP=(,PASS),SPACE=(CYL,(5,5),RLSE)       
//TOOL     DD DSN=&&T,DISP=(,PASS),SPACE=(CYL,(5,5),RLSE)       
//CARDS    DD DSN=&&C,DISP=(,PASS),SPACE=(CYL,(5,5),RLSE)       
//SYSIN    DD *                                                 
  OPTION COPY,STOPAFT=1000                                     
  INREC OVERLAY=(151:SEQNUM,3,ZD,START=0)                       
                                                               
  OUTFIL FNAMES=DDN,                                           
  BUILD=(C'//IN',151,3,12:C'DD DISP=SHR,DSN=',1,44,80:X)       
                                                               
  OUTFIL FNAMES=CARDS,REMOVECC,NODETAIL,BUILD=(80X),           
  SECTIONS=(151,3,                                             
  HEADER3=('//C',151,3,'CNTL DD *'),                           
  TRAILER3=(03:'OUTFIL FNAMES=OUT,VTOF,',                       
               'REMOVECC,NODETAIL,BUILD=(80X),',/,             
            03:'TRAILER1=(''',1,44,'''',',2X,',/,               
            14:'COUNT=(M11,LENGTH=10))',/,'//*'))               
                                                               
  OUTFIL FNAMES=TOOL,REMOVECC,                                 
  BUILD=(3:C'COPY FROM(IN',151,3,                               
           C') TO(OUT) USING(C',151,3,                         
           C')',80:X),                                         
  HEADER1=('//TOOLIN   DD *'),                                 
  TRAILER1=('//*')                                             
                                                               
//*
//STEP0200 EXEC PGM=SORT                                 
//SYSOUT   DD SYSOUT=*                                   
//SYSIN    DD *                                           
   OPTION COPY                                           
//*                                                       
//SORTOUT  DD SYSOUT=*
//SORTIN   DD DATA,DLM=$$                                    
//TIDXXXA  JOB 'COPY',                                   
//             CLASS=A,                                      
//             MSGCLASS=H,                               
//             MSGLEVEL=(1,1),                           
//             NOTIFY=TID                   
//*                                                       
//*******************************************************
//* DELETE THE OUTPUT COUNT DATASET IF EXISTED          *
//*******************************************************
//STEP0050 EXEC PGM=IEFBR14                               
//FILE01   DD DSN=TIDXXX.FILECNT.OUT,                     
//            DISP=(MOD,DELETE,DELETE),                   
//            SPACE=(TRK,(1,0),RLSE)                     
//*                                                       
//*******************************************************
//* COUNT THE NUMBER OF RECORDS IN EACH FILE            *
//*******************************************************
//STEP0100 EXEC PGM=ICETOOL                               
//TOOLMSG  DD SYSOUT=*                                   
//DFSMSG   DD SYSOUT=*                                   
//OUT      DD DSN=TIDXXX.FILECNT.OUT,                     
//            DISP=(MOD,CATLG,DELETE),                   
//            UNIT=SYSDA,                                 
//            SPACE=(CYL,(10,10),RLSE)                   
//*                                                       
$$                                                       
//         DD DSN=&&D,DISP=(OLD,PASS)                     
//         DD DSN=&&T,DISP=(OLD,PASS)                     
//         DD DSN=&&C,DISP=(OLD,PASS)                     
//*
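
For illustration only (assuming two made-up datasets, MY.INPUT.FILE1 and MY.INPUT.FILE2, in the list file), the portion of the job stream that STEP0100 builds and STEP0200 appends after the instream job card and ICETOOL step should look roughly like this; the DSN literals inside the TRAILER1 operands are actually blank-padded to 44 bytes:

Code:

//IN000    DD DISP=SHR,DSN=MY.INPUT.FILE1
//IN001    DD DISP=SHR,DSN=MY.INPUT.FILE2
//TOOLIN   DD *
  COPY FROM(IN000) TO(OUT) USING(C000)
  COPY FROM(IN001) TO(OUT) USING(C001)
//*
//C000CNTL DD *
  OUTFIL FNAMES=OUT,VTOF,REMOVECC,NODETAIL,BUILD=(80X),
  TRAILER1=('MY.INPUT.FILE1',2X,
             COUNT=(M11,LENGTH=10))
//*
//C001CNTL DD *
  OUTFIL FNAMES=OUT,VTOF,REMOVECC,NODETAIL,BUILD=(80X),
  TRAILER1=('MY.INPUT.FILE2',2X,
             COUNT=(M11,LENGTH=10))
//*


Each CnnnCNTL DD supplies the control statements for one COPY operator, so OUT ends up with one 80-byte record per input dataset: the DSN followed by its record count.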
Skolusu

Senior Member


Joined: 07 Dec 2007
Posts: 2205
Location: San Jose

PostPosted: Fri May 06, 2011 10:30 pm

Sqlcode,

Maybe you should read the requirements listed here.

www.ibmmainframes.com/viewtopic.php?p=267313#267313
Craq Giegerich

Senior Member


Joined: 19 May 2007
Posts: 1512
Location: Virginia, USA

PostPosted: Fri May 06, 2011 10:38 pm

Write the generated JCL to a dataset and use the TSO SUBMIT command to submit the job for execution!
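
For example (just a sketch; TIDXXX.COUNT.JCL is a made-up dataset name), STEP0200's SORTOUT could be pointed at a cataloged sequential dataset instead of the internal reader:

Code:

//SORTOUT  DD DSN=TIDXXX.COUNT.JCL,DISP=(,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(1,1),RLSE),
//            RECFM=FB,LRECL=80


and the generated job can then be submitted with the TSO command:

Code:

SUBMIT 'TIDXXX.COUNT.JCL'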
kitchu84

New User


Joined: 02 Dec 2006
Posts: 33
Location: chennai

PostPosted: Thu May 12, 2011 10:24 am

Hi Skolusu,

Thanks for the solution. I am currently trying to correct the creation of the DD names in the dynamic JCL, as it was throwing these errors:

JCP0427E DD NAME 'CT000CNTL' MUST BE 8 CHARACTERS OR LESS
JCP0427E DD NAME 'CT001CNTL' MUST BE 8 CHARACTERS OR LESS
JCP0427E DD NAME 'CT002CNTL' MUST BE 8 CHARACTERS OR LESS
JCP0427E DD NAME 'CT003CNTL' MUST BE 8 CHARACTERS OR LESS
JCP0427E DD NAME 'CT004CNTL' MUST BE 8 CHARACTERS OR LESS

Please let me know if there is a way to handle more than 3273 files. Do I need to write another dynamic JCL for that? Please suggest.

Thanks,
enrico-sorichetti

Superior Member


Joined: 14 Mar 2007
Posts: 10872
Location: italy

PostPosted: Thu May 12, 2011 11:04 am

First of all, the error is from your dumb JCL checker, not from JCL itself.
Second, it looks like you mixed the two solutions.

Kolusu's solution builds the DD names and friends as CxxxCNTL, with a three-digit numbering sequence.

Sqlcode1's solution builds the DD names and friends as CTxxCNTL, with a two-digit numbering sequence.

You show names of the format CTxxxCNTL,
so it looks like you mixed the two solutions.

I did not test Sqlcode1's solution, but since I was curious I gave Kolusu's solution a quick and dirty run.

Kolusu's solution as given is correct (no patronizing intended, Kolusu :-) ),
just pointing out a possible TS misunderstanding.
Bill Woodger

Moderator Emeritus


Joined: 09 Mar 2011
Posts: 7309
Location: Inside the Matrix

PostPosted: Thu May 12, 2011 12:20 pm

While we are back on this, remember some of the things we brought up before.

When you run, you'd like all your files to be from the same production run, but you have no way to know this.

When the client gets his output, at least to start with, he's going to look at it. Then you're going to get queries. He's going to say "on this report from three weeks ago..." so make sure you can relate his copy of the report to one you can look at, and to all the files.

What about periodic files?

What about starting with the "main" parts of the system, so you are concentrating on the important ones first (in case he finds problems)?

How about spending some time running with the production data before you give the client the first report, so you can check for anything "obvious" before he gets to see it?

It's not just producing the report, it is everything that goes with it.

Do you have a file archiver? The day you have to re-run an old set of reports, it'll take hours to get everything back.
enrico-sorichetti

Superior Member


Joined: 14 Mar 2007
Posts: 10872
Location: italy

PostPosted: Thu May 12, 2011 12:28 pm

Quote:
Please let me know if there is a way to handle more than 3273 files.
I wonder who is ever going to read the report :-)

Anyway, the 3273 DD names is a JCL constraint.
You will have to analyze a bit and submit multiple JCLs,
remembering that each dataset to be counted implies two DDs; that's the reason for the STOPAFT=1000.
To squeeze everything out of the JCL you could have used STOPAFT=1500,
but maybe 1000 is easier to remember.

With the proper DFSORT knowledge it would be possible to build, in one pass, as many jobs as needed, each one counting fewer than <some number> of datasets.
Bill Woodger

Moderator Emeritus


Joined: 09 Mar 2011
Posts: 7309
Location: Inside the Matrix

PostPosted: Thu May 12, 2011 12:51 pm

kitchu84 wrote:
[...]
Please let me know if there is a way to handle more than 3273 files. [...]


I missed that. I agree with enrico. If the client gets the report, you definitely won't get the queries until three weeks later (or however long), or until he gets his Excel macro "working", at which point you will get hit with everything daily. Most of the queries you get won't mean much to start with.

Remember, you don't know which file is from which production run. It is just likely that they will be from the same run; you won't know for sure.

How are you getting the list of files which is input for all this? Is there anything you can do to group them together "logically", so you can do some automatic number checking before the client does? In the end, you can give him that report as well; it will save you looking for errors in his Excel macro.
kitchu84

New User


Joined: 02 Dec 2006
Posts: 33
Location: chennai

PostPosted: Thu May 12, 2011 10:03 pm

Hi All,

We will run this job every 30 minutes against the last 30 minutes' SAR unload, and we will populate the data in DB2 tables for each particular job that ran, with that particular job ID.

For example, if a job named ABCXXXXX ran with job ID JOB27909,
I will take unloads from SAR for the last 30 minutes (the time window will be controlled by a PARM parameter in a COBOL module) and then I will filter out the file names for all the jobs along with their job IDs.

This will tell me specifically which run of the job had those files and what the count was. All this information will be loaded into a table along with a timestamp. The data from the table will be pulled by Java code and shown on URLs, so that we can run reports to see, for a particular job on a particular date, what the file count was.

@enrico - Sorry I am not clear on this part:

"remembering that each dataset to be counted implies two dd, that' s the reason for the STOPAFT=1000,
to sqeeze everything out of jcl You could have used STOPAFT=1500"

Any pointers to handle more than 3273 files would be helpful.

Thanks all for your suggestions.

Thank you,
enrico-sorichetti

Superior Member


Joined: 14 Mar 2007
Posts: 10872
Location: italy

PostPosted: Thu May 12, 2011 10:47 pm

Quote:
Any pointers to handle more than 3273 files would be helpful.


Did you care to read my previous reply completely?

No way with a single job...

Depending on your skills there might be a REXX alternative!
Maybe it would be wiser to wait for Kolusu, so that he may suggest how to build multiple jobs in one pass.
kitchu84

New User


Joined: 02 Dec 2006
Posts: 33
Location: chennai

PostPosted: Fri May 13, 2011 3:40 am

Yes, enrico ... I understand we have limitations on the number of DDs ... I will also try from my side and look out for Skolusu's suggestions ...
Skolusu

Senior Member


Joined: 07 Dec 2007
Posts: 2205
Location: San Jose

PostPosted: Fri May 13, 2011 3:55 am

kitchu84 wrote:
I am currently trying to correct the creation of the DD names in the dynamic JCL, as it was throwing these errors:

JCP0427E DD NAME 'CT000CNTL' MUST BE 8 CHARACTERS OR LESS


Kitchu84,

As enrico pointed out, I was creating the JCL with CxxxCNTL, not CTxxxCNTL. You seem to have picked up Sqlcode's JCL, and I can't help you fix that.

kitchu84 wrote:
Please let me know if there is a way to handle more than 3273 files. Do I need to write another dynamic JCL for that? Please suggest.


Just because I said the DD limit is 3273 doesn't mean your shop has the same limit. The limit of 3273 is based on the number of single unit DD statements for a 64K TIOT (task input output table). This limit can be different depending on the installation-defined TIOT size. 32K is the default TIOT size. The limit for a 32K TIOT is 1635. (In a JES3 system, the installation might further reduce the limit.)


kitchu84 wrote:
Sorry I am not clear on this part:

enrico-sorichetti wrote:
remembering that each dataset to be counted implies two DDs; that's the reason for the STOPAFT=1000. To squeeze everything out of the JCL you could have used STOPAFT=1500


The dynamic JCL being generated uses one DD name for the input DSN and another (CxxxCNTL) for writing that file's count to the output file. So for every record in your list file, two DD names are used. If the limit is 3273, you would only be able to process 3273/2 = 1636 datasets. Enrico rounded that down to 1500, leaving some buffer for the other DD names.

However, you simply can't use STOPAFT=1500 with that job, because once you cross 999 entries the seqnum becomes 4 digits and the generated CnnnnCNTL DD names would no longer fit in 8 characters.

Here is a JCL which will read up to a maximum of 26,000 DSN names and generate dynamic jobs with 1,000 DSNs per job. Please don't come back and ask me how to generate JCL for more than 26,000.

I am showing the submission of 10 jobs; however, you can extend it to 26 jobs, which will process a total of 26,000 DSNs. To do that, allocate the output files JCOUNT01 through JCOUNT26 in step0200 and also add the corresponding control cards, one per job (sketched below):

OUTFIL FNAMES=JCOUNTnn,BUILD=(1,80),INCLUDE=(81,1,CH,EQ,C'A thru Z')
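
For instance, the 11th job (group letter 'K') and the 26th job (group letter 'Z') would need these additions, with the entries in between following the same pattern:

Code:

//JCOUNT11 DD SYSOUT=*
//*  ... JCOUNT12 THROUGH JCOUNT25 FOLLOW THE SAME PATTERN ...
//JCOUNT26 DD SYSOUT=*


and, in the SYSIN of STEP0200:

Code:

  OUTFIL FNAMES=JCOUNT11,BUILD=(1,80),INCLUDE=(81,1,CH,EQ,C'K')
* ... JCOUNT12 THROUGH JCOUNT25 MAP TO C'L' THROUGH C'Y' ...
  OUTFIL FNAMES=JCOUNT26,BUILD=(1,80),INCLUDE=(81,1,CH,EQ,C'Z')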

The JCL is generated in step0200. Look at the output from the JCOUNTnn DDs of that step; it should contain the JCL needed to count the records from each file. If the generated JCL looks good, change the following statements and resubmit the job.

Code:

//JCOUNTnn DD SYSOUT=*


to

Code:

//JCOUNTnn DD SYSOUT=(*,INTRDR)



Code:

//STEP0100 EXEC PGM=SORT                       
//SYSOUT   DD SYSOUT=*                         
//SORTIN   DD DSN=your 150 byte FB file with DSN names,DISP=SHR
//JOBC     DD DSN=&&J,DISP=(,PASS),SPACE=(CYL,(5,5),RLSE)           
//DDN      DD DSN=&&D,DISP=(,PASS),SPACE=(CYL,(5,5),RLSE)           
//TOOL     DD DSN=&&T,DISP=(,PASS),SPACE=(CYL,(10,10),RLSE)         
//CARDS    DD DSN=&&C,DISP=(,PASS),SPACE=(CYL,(10,10),RLSE)         
//SYSIN    DD *                                                     
  OPTION COPY,STOPAFT=26000                                         
  INREC IFTHEN=(WHEN=GROUP,RECORDS=1000,PUSH=(151:ID=2,SEQ=5)),     
  IFTHEN=(WHEN=INIT,OVERLAY=(153:153,5,ZD,SUB,+1,EDIT=(TTT),2X)),   
  IFTHEN=(WHEN=INIT,                                               
  FINDREP=(INOUT=(C'01',C'A',C'02',C'B',C'03',C'C',                 
                  C'04',C'D',C'05',C'E',C'06',C'F',                 
                  C'07',C'G',C'08',C'H',C'09',C'I',                 
                  C'10',C'J',C'11',C'K',C'12',C'L',                 
                  C'13',C'M',C'14',C'N',C'15',C'O',                 
                  C'16',C'P',C'17',C'Q',C'18',C'R',                 
                  C'19',C'S',C'20',C'T',C'21',C'U',                 
                  C'22',C'V',C'23',C'W',C'24',C'X',                 
                  C'25',C'Y',C'26',C'Z'),STARTPOS=151,DO=1))       
                                                                   
  OUTFIL FNAMES=DDN,                                                   
  BUILD=(C'//',151,6,12:C'DD DISP=SHR,DSN=',1,44,81:151,1)             
                                                                       
  OUTFIL FNAMES=CARDS,REMOVECC,NODETAIL,BUILD=(81X),                   
  SECTIONS=(151,4,                                                     
  HEADER3=('//',151,4,'CNTL DD *',81:151,1),                           
  TRAILER3=(03:'OUTFIL FNAMES=OUT,VTOF,',                               
               'REMOVECC,NODETAIL,BUILD=(80X),',81:151,1,/,             
            03:'TRAILER1=(''',1,44,'''',',2X,',81:151,1,/,             
            14:'COUNT=(M11,LENGTH=10))',81:151,1,/,'//*',81:151,1))     

  OUTFIL FNAMES=TOOL,                                                   
  IFTHEN=(WHEN=INIT,                                                   
  BUILD=(3:C'COPY FROM(',151,4,                                         
           C') TO(OUT) USING(',151,4,                                   
           C')',81:151,1)),                                             
  IFTHEN=(WHEN=(14,3,ZD,EQ,0),                                         
  BUILD=(C'//TOOLIN  DD *',81:81,1,/,1,81))                             
                                                                       
  OUTFIL FNAMES=JOBC,REMOVECC,NODETAIL,BUILD=(81X),               
  SECTIONS=(151,1,                                               
  HEADER3=('//COUNT',151,1,12:'JOB ''',                           
           'COUNT JOB-',151,1,'''',',',81:151,1,/,               
           '//',16:'CLASS=A,',81:151,1,/,                         
           '//',16:'MSGCLASS=H,',81:151,1,/,                     
           '//',16:'MSGLEVEL=(1,1),',81:151,1,/,                 
           '//',16:'NOTIFY=TID',81:151,1,/,                   
           '//*',81:151,1,/,                                     
           '//',58'*',81:151,1,/,                                 
           '//* DELETE THE OUTPUT COUNT DATASET IF EXISTED',59:X,
           '*',81:151,1,/,                                       
           '//',58'*',81:151,1,/,                                 
           '//STEP0050 EXEC PGM=IEFBR14',81:151,1,/,             
           '//FILE01   DD DSN=TIDXXX.FILECNT.OUT',151,1,
           ',',81:151,1,/,                                       
           '//            DISP=(MOD,DELETE,DELETE),',81:151,1,/, 
           '//            UNIT=SYSDA,',81:151,1,/,               
           '//            SPACE=(TRK,(1,0),RLSE)',81:151,1,/,     
           '//*',81:151,1,/,                                     
           '//',58'*',81:151,1,/,                                 
           '//* COUNT THE NUMBER OF RECORDS IN EACH FILE',59:X,   
           '*',81:151,1,/,                                       
           '//',58'*',81:151,1,/,                                 
           '//STEP0100 EXEC PGM=ICETOOL',81:151,1,/,             
           '//TOOLMSG  DD SYSOUT=*',81:151,1,/,                   
           '//DFSMSG   DD SYSOUT=*',81:151,1,/,                   
           '//OUT      DD DSN=TIDXXX.FILECNT.OUT',151,1,         
           ',',81:151,1,/,                                       
           '//            DISP=(MOD,CATLG,DELETE),',81:151,1,/,   
           '//            UNIT=SYSDA,',81:151,1,/,               
           '//            SPACE=(CYL,(20,10),RLSE)',81:151,1,/,   
           '//*',81:151,1))                                       
//*                                                               
//STEP0200 EXEC PGM=SORT                                       
//SYSOUT   DD SYSOUT=*                                         
//SORTIN   DD DSN=&&J,DISP=SHR                                 
//         DD DSN=&&D,DISP=SHR                                 
//         DD DSN=&&T,DISP=SHR                                 
//         DD DSN=&&C,DISP=SHR                                 
//JCOUNT01 DD SYSOUT=*                               
//JCOUNT02 DD SYSOUT=*                               
//JCOUNT03 DD SYSOUT=*                               
//JCOUNT04 DD SYSOUT=*                               
//JCOUNT05 DD SYSOUT=*                               
//JCOUNT06 DD SYSOUT=*                               
//JCOUNT07 DD SYSOUT=*                               
//JCOUNT08 DD SYSOUT=*                               
//JCOUNT09 DD SYSOUT=*                               
//JCOUNT10 DD SYSOUT=*                               
//SYSIN    DD *                                                 
  SORT FIELDS=COPY                                             
  OUTFIL FNAMES=JCOUNT01,BUILD=(1,80),INCLUDE=(81,1,CH,EQ,C'A')
  OUTFIL FNAMES=JCOUNT02,BUILD=(1,80),INCLUDE=(81,1,CH,EQ,C'B')
  OUTFIL FNAMES=JCOUNT03,BUILD=(1,80),INCLUDE=(81,1,CH,EQ,C'C')
  OUTFIL FNAMES=JCOUNT04,BUILD=(1,80),INCLUDE=(81,1,CH,EQ,C'D')
  OUTFIL FNAMES=JCOUNT05,BUILD=(1,80),INCLUDE=(81,1,CH,EQ,C'E')
  OUTFIL FNAMES=JCOUNT06,BUILD=(1,80),INCLUDE=(81,1,CH,EQ,C'F')
  OUTFIL FNAMES=JCOUNT07,BUILD=(1,80),INCLUDE=(81,1,CH,EQ,C'G')
  OUTFIL FNAMES=JCOUNT08,BUILD=(1,80),INCLUDE=(81,1,CH,EQ,C'H')
  OUTFIL FNAMES=JCOUNT09,BUILD=(1,80),INCLUDE=(81,1,CH,EQ,C'I')
  OUTFIL FNAMES=JCOUNT10,BUILD=(1,80),INCLUDE=(81,1,CH,EQ,C'J')
//*



If you're not familiar with DFSORT and DFSORT's ICETOOL, I'd suggest reading through "z/OS DFSORT: Getting Started". It's an excellent tutorial, with lots of examples, that will show you how to use DFSORT, DFSORT's ICETOOL and DFSORT Symbols. You can access it online, along with all of the other DFSORT books, from:

www.ibm.com/support/docview.wss?rs=114&uid=isg3T7000080
Bill Woodger

Moderator Emeritus


Joined: 09 Mar 2011
Posts: 7309
Location: Inside the Matrix

PostPosted: Fri May 13, 2011 4:39 am

I'm not sure I have followed this correctly.

A)
1) You run the DFSORT mega-file-counter about every 30 minutes
2) You check on SAR for all jobs completed in that time
3) You extract job info from SAR and a subset of dataset info from the mega-file-counter
4) You load everything into DB2 so that you know which file relates to which job that ran at a particular time

or

B)
1) You check on SAR for all jobs completed in the last 30 minutes
2) Create the extract file-of-files for the mega-file-counter from the completed jobs in SAR
3) Run the DFSORT mega-file-counter
4) You load everything into DB2 so that you know which file relates to which job that ran at a particular time

or

C)
Something else I've missed completely

If A), is the mega-file-counter going to run in less than 30 minutes? That is only 1,800 seconds, and it seems you might have more than 6,500 DDs to open/close and read. Plus more questions if A) is confirmed.

If B), why do you feel you need so many files? There will not be more than 3,000 files created in a 30-minute window, will there? Plus more questions if B) is confirmed.

If C), well, I missed it.
enrico-sorichetti

Superior Member


Joined: 14 Mar 2007
Posts: 10872
Location: italy

PostPosted: Fri May 13, 2011 9:12 am

Quote:
However, you simply can't use STOPAFT=1500 with that job, because once you cross 999 entries the seqnum becomes 4 digits and the generated CnnnnCNTL DD names would no longer fit in 8 characters.


:oops:
enrico-sorichetti

Superior Member


Joined: 14 Mar 2007
Posts: 10872
Location: italy

PostPosted: Fri May 13, 2011 9:22 am

Quote:
I will take unloads from SAR for the last 30 minutes (the time window will be controlled by a PARM parameter in a COBOL module) and then I will filter out the file names for all the jobs along with their job IDs.

This will tell me specifically which run of the job had those files and what the count was. All this information will be loaded into a table along with a timestamp. The data from the table will be pulled by Java code and shown on URLs, so that we can run reports to see, for a particular job on a particular date, what the file count was.


And since there will be quite a number of read-only datasets, you will keep
wasting resources counting something that did not change
(as you know, you cannot determine just by looking at the JCL whether a dataset is input or output).
And what about the datasets on tape? A heck of a lot of mount activity, every half an hour.
And what about the possible GDGs?

It would be wiser for the powers that be in your organization to review the whole process.
kitchu84

New User


Joined: 02 Dec 2006
Posts: 33
Location: chennai

PostPosted: Sat May 14, 2011 2:09 am

Hello Bill,

Yes, you are right ... it's option B ...

Also, we need to run the jobs automatically through CA7. Suppose we submit 5 dynamic JCLs, each of them creating a different file with counts; I then have the challenge of merging those files and using them in another job.
Say there are 5,000 file names. The main job creates 5 dynamic JCLs and submits them, each of which in turn creates one output file of file counts.
Now, since these dynamic JCLs are not defined in CA7, I need a way to make all of these jobs/output files a dependency of another, final job which merges the 5 output files and then uses the data. I cannot simply make the final job dependent on the main job, because the final job might abend
with a file-not-found error if the dynamic JCLs are still running.

Also, is there a possibility that it might take a previous version of the file
into consideration?

I am sorry if this isn't the right place to ask this query.


Please guide ...
Skolusu

Senior Member


Joined: 07 Dec 2007
Posts: 2205
Location: San Jose

PostPosted: Sat May 14, 2011 2:41 am

kitchu84 wrote:
I have a challenge to merge and then use it in another Job.


Well, I am not Bill, but since I was the one who provided you with the dynamic JCL solution, I am gonna answer it. You don't have a challenge; it is quite simple. Change my job to put the same job card (the same job name) on all of the dynamically created JCL, and no matter how many jobs you create they will simply be queued up one after another. They will run sequentially, and you can use just one output file, so you don't need another job to merge them. Before you ask: I am not going to help you with that. It is a simple change, and you should be able to do it now that you have the necessary framework.
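
As an illustration only (a sketch of the kind of change being described, not necessarily the exact one Skolusu has in mind; COUNTALL is a made-up job name), the HEADER3 of the JOBC OUTFIL could build one constant job name instead of appending the group letter, so that JES serializes the duplicate job names and runs the generated jobs one at a time:

Code:

* WAS (one job name per group: COUNTA, COUNTB, ...):
*  HEADER3=('//COUNT',151,1,12:'JOB ''','COUNT JOB-',151,1,'''',',',81:151,1,/,
* SKETCH (the same job name on every generated job):
   HEADER3=('//COUNTALL JOB ''COUNT'',',81:151,1,/,
* (the rest of the HEADER3 is unchanged)


The generated output DSNs (TIDXXX.FILECNT.OUT plus the group letter) and the IEFBR14 delete step would need the same kind of treatment: with a single DSN, DISP=MOD and the delete done only once up front, the serialized jobs simply append their counts to one file.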

kitchu84 wrote:
Also, is there a possibility that it might take previous version of the file into consideration?


You need to at least read the comments in the generated JCL. The seventh line of the generated JCL has this comment:

Code:

//**********************************************************
//* DELETE THE OUTPUT COUNT DATASET IF EXISTED             *
//**********************************************************
Bill Woodger

Moderator Emeritus


Joined: 09 Mar 2011
Posts: 7309
Location: Inside the Matrix

PostPosted: Sat May 14, 2011 7:06 am

Skolusu wrote:

[...]
Well I am not bill
[...]


I don't think the whole thing was directed at me.

[...]

kitchu84 wrote:

We will run this job every 30 minutes against the last 30 minutes' SAR unload, and we will populate the data in DB2 tables for each particular job that ran, with that particular job ID.

For example, if a job named ABCXXXXX ran with job ID JOB27909,
I will take unloads from SAR for the last 30 minutes (the time window will be controlled by a PARM parameter in a COBOL module) and then I will filter out the file names for all the jobs along with their job IDs.

This will tell me specifically which run of the job had those files and what the count was.

[...]


B)
1) You check on SAR for all jobs completed in the last 30 minutes
2) Create the extract file-of-files for the mega-file-counter from the completed jobs in SAR
3) Run the DFSORT mega-file-counter
4) You load everything into DB2 so that you know which file relates to which job that ran at a particular time

There is something I'm not getting.

The reason I want to know more exactly how you are doing it is that there are different problems with either route. Are you running the full mega-file-counter? If so, how many times a day? Or do you run it once and the SAR extract multiple times?

Some problems are the same, whichever route. Like, how do you deal with a re-run from a previous day? Are your timestamps "logical" or actual - "logical" being so that you can identify files from the same batch runs, assuming that you will be running over midnight? Etc.

Do you have something of the actual design that you can share with us?
kitchu84

New User


Joined: 02 Dec 2006
Posts: 33
Location: chennai

PostPosted: Sun May 15, 2011 12:03 pm

Hi Skolusu,

With due respect to you - the next part in my post was intended for you. I apologise for not mentioning your name specifically.

I am getting syntax errors in this line:
INREC IFTHEN=(WHEN=GROUP,RECORDS=1000,
*

WER268A INREC STATEMENT : SYNTAX ERROR
WER211B SYNCSMF CALLED BY SYNCSORT; RC=0000
WER449I SYNCSORT GLOBAL DSM SUBSYSTEM ACTIVE

I searched the forum and found a similar kind of issue: ibmmainframes.com/viewtopic.php?t=51832&postdays=0&postorder=asc&start=0

This type of error suggests that it is because WHEN=GROUP is not available in the sort release we are using :( ... Could you please suggest an alternative?

Hi Bill: we will run the file counter every time after we run the SAR unloads. The SAR unloads run every 30 minutes (for the entire 24 hours). Regarding reruns, the timestamps are actual; we are planning to load the data at that particular instant of time. So say a job runs at 12:30 pm: the SAR unload running at 1 pm will pick up that job's details and we will run the file counter for the job. If the job runs again at, say, 2:50 pm with a different job ID, the next SAR unload will pick up the details of that run and the file counter will count the records at that instant of time, which will be loaded with that particular timestamp.

Sorry if I am not clear. Please let me know ...
kitchu84

New User


Joined: 02 Dec 2006
Posts: 33
Location: chennai

PostPosted: Sun May 15, 2011 12:07 pm

Just to add, this is the release of sort that is invoked when I submit the job:

SYNCSORT FOR Z/OS 1.3.0.2R
enrico-sorichetti

Superior Member


Joined: 14 Mar 2007
Posts: 10872
Location: italy

PostPosted: Sun May 15, 2011 12:10 pm

Code:
WER268A INREC STATEMENT : SYNTAX ERROR
WER211B SYNCSMF CALLED BY SYNCSORT; RC=0000
WER449I SYNCSORT GLOBAL DSM SUBSYSTEM ACTIVE


The messages show that you are using SyncSort, not IBM DFSORT.

Kolusu is an IBM DFSORT developer; you cannot expect him to provide free consultancy on a competitor's product :-)

Topic moved to the JCL forum, where SyncSort issues are housed.
kitchu84

New User


Joined: 02 Dec 2006
Posts: 33
Location: chennai

PostPosted: Sun May 15, 2011 12:32 pm

:'( ... I will miss Skolusu's solutions ...
Bill Woodger

Moderator Emeritus


Joined: 09 Mar 2011
Posts: 7309
Location: Inside the Matrix

PostPosted: Sun May 15, 2011 1:22 pm

kitchu84 wrote:
[...]

Hi Bill: we will run the file counter every time after we run the SAR unloads. The SAR unloads run every 30 minutes (for the entire 24 hours). Regarding reruns, the timestamps are actual; we are planning to load the data at that particular instant of time. So say a job runs at 12:30 pm: the SAR unload running at 1 pm will pick up that job's details and we will run the file counter for the job. If the job runs again at, say, 2:50 pm with a different job ID, the next SAR unload will pick up the details of that run and the file counter will count the records at that instant of time, which will be loaded with that particular timestamp.

Sorry if I am not clear. Please let me know ...


Hi Kitchu84,

Sorry, but again, is this the full file counter that you are talking about? No, I'm not clear.

How long is the file counter going to take to run, every 30 minutes?

If the full run happens 48 times a day (theoretically), then for each dataset 47 of those counts are not needed (roughly speaking).

Are you using the SAR load time or the job's finish time for your selection? If the latter, is there any "latency" between a job finishing and appearing in SAR? If so, you will miss jobs occasionally.

What DISP are your production jobs running with for output files, generally speaking? I get to this sort of point, and I think yet again I must be missing something crucial.
enrico-sorichetti

Superior Member


Joined: 14 Mar 2007
Posts: 10872
Location: italy

PostPosted: Sun May 15, 2011 1:34 pm

Bill, it looks like we are just wasting time here, as is the TS's organization by counting things every half an hour.

They made up their mind/bed; let them sleep in it :-)
Bill Woodger

Moderator Emeritus


Joined: 09 Mar 2011
Posts: 7309
Location: Inside the Matrix

PostPosted: Sun May 15, 2011 2:01 pm

enrico-sorichetti wrote:
Bill, it looks like we are just wasting time here, as is the TS's organization by counting things every half an hour.

They made up their mind/bed; let them sleep in it :-)


enrico, I get that feeling as well. Extracting job/file information from one source, but not using that to trigger the counts? So, mismatched timings for sure. It won't run inside 24 hours, won't get all the jobs updated, will lock production jobs, doesn't know about business days, mixes re-runs with current data. Etc. And it uses the wrong sort package...