rexx77
New User
Joined: 14 Apr 2008 Posts: 78 Location: Mysore
Hi,
I had a requirement to find specific sets of strings and extract the matching records, which I did using a simple sort card.
Now the scope has expanded to include old datasets: I have to find and extract the records matching the given strings from those as well. I can put all 15 or 20 old datasets in SORTIN and execute the SORT to extract the matching records, but I also need to know which file each record was extracted from. Is there any way I can write the properties of the dataset to the spool, or to some dataset, when a particular string is found in that file?
Appreciate any help on this. Thanks.
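For reference, the kind of sort card described above can be sketched like this (dataset names, record layout, and search string are all placeholders; SS does a substring scan in DFSORT/SYNCSORT):

```jcl
//EXTRACT  EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//*  Placeholder input datasets concatenated under SORTIN
//SORTIN   DD DSN=MY.OLD.FILE1,DISP=SHR
//         DD DSN=MY.OLD.FILE2,DISP=SHR
//SORTOUT  DD DSN=MY.MATCHED.RECS,DISP=(NEW,CATLG,DELETE),
//            SPACE=(CYL,(5,5),RLSE)
//SYSIN    DD *
* COPY ONLY THE RECORDS THAT CONTAIN THE SEARCH STRING
* ANYWHERE IN POSITIONS 1-80
  SORT FIELDS=COPY
  INCLUDE COND=(1,80,SS,EQ,C'MYSTRING')
/*
```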
Terry Heinze
JCL Moderator
Joined: 14 Jul 2008 Posts: 1249 Location: Richfield, MN, USA
If you change from SORT to PGM=ISRSUPC, that utility will tell you which PDS the string was found in: CONCAT#(1), CONCAT#(2), CONCAT#(3), ... CONCAT#(n).
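A sketch of the batch SuperC search step being suggested (dataset names and the search string are placeholders): the inputs are concatenated under NEWDD, the search string goes in a SRCHFOR statement, and the OUTDD report identifies each hit by the CONCAT# of the dataset it came from.

```jcl
//SEARCH   EXEC PGM=ISRSUPC,
//            PARM=(SRCHCMP,'ANYC')
//*  Placeholder datasets to be searched, concatenated under NEWDD
//NEWDD    DD DSN=MY.OLD.FILE1,DISP=SHR
//         DD DSN=MY.OLD.FILE2,DISP=SHR
//OUTDD    DD SYSOUT=*
//SYSIN    DD *
  SRCHFOR 'MYSTRING'
/*
```

The 'ANYC' process option makes the search case-insensitive; drop it for an exact-case search.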
Nic Clouston
Global Moderator
Joined: 10 May 2007 Posts: 2455 Location: Hampshire, UK
Why not just execute the sort for each dataset instead of concatenating them?
rexx77
New User
Joined: 14 Apr 2008 Posts: 78 Location: Mysore
@Nic - I want to avoid the time it takes to submit a separate job to process each file one by one. That will be my last resort if nothing else works out.
If anyone else has any other ideas, please do share.
Rohit Umarjikar
Global Moderator
Joined: 21 Sep 2010 Posts: 3048 Location: NYC,USA
For 15-20 files, it wouldn't take more than a couple of hours to prepare your report if you do them one by one, as Nic also advised.
However, if the number of datasets is larger, you need a program in place to create the control cards dynamically and then execute them, which means spending more time than doing each one at a time.
In short: write a COBOL program to create the job and use INTRDR to submit the JCL.
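The INTRDR approach can be sketched without even writing a COBOL program: once the JCL has been generated into a dataset, any copy utility can route it to the internal reader. A minimal example with IEBGENER (the input dataset name is a placeholder):

```jcl
//SUBMIT   EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//*  Placeholder dataset holding the generated job stream
//SYSUT1   DD DSN=MY.GENERATED.JCL,DISP=SHR
//*  SYSOUT routed to the internal reader submits the JCL
//SYSUT2   DD SYSOUT=(*,INTRDR)
//SYSIN    DD DUMMY
```

A COBOL program could equally write its generated JCL records straight to a DD allocated as SYSOUT=(*,INTRDR).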
RahulG31
Active User
Joined: 20 Dec 2014 Posts: 446 Location: USA
I am a little surprised here. What is stopping you from using ISRSUPC, as mentioned by Terry? That utility is built for exactly this purpose. Why would you want to do this with SORT?
As output from ISRSUPC, you get the file numbers, the file names, and the records containing the string. What more do you need?
If you are concerned about formatting, you can first use ISRSUPC and then format the output with SORT according to your needs.
Nic Clouston
Global Moderator
Joined: 10 May 2007 Posts: 2455 Location: Hampshire, UK
Why not one job with 15-20 steps? It would take less than 10 minutes to set up. The ISPF editor is good for this sort of thing: create the job card (copy it from another job), create step 1, repeat step 1 n times, edit the dataset names in steps 2 to n. Submit. Done.
rexx77
New User
Joined: 14 Apr 2008 Posts: 78 Location: Mysore
@RahulG31 - I have other rules that will not be possible with the ISRSUPC utility, hence I felt SORT would be the better option.
@Nic - the file count of 15 to 20 I gave was a rough estimate; basically I need to run this against a GDG base, which may contain more files than that. Anyhow, I plan to do it in batches, say one job for every 20 datasets.
Thanks folks for all the ideas. Appreciate it.
Rohit Umarjikar
Global Moderator
Joined: 21 Sep 2010 Posts: 3048 Location: NYC,USA
Quote:
the file count of 15 to 20 I gave was a rough estimate; basically I need to run this against a GDG base, which may contain more files than that.
Do you not know how many generations you retain? We roll ours off after 30 generations. Or you can add a step that copies each day's data to a new GDG; then you don't have to worry about any of this procedure, because all you have to do is look at that new generation.