Finding a File Containing a Particular Text String


IBM Mainframe Forums -> DFSORT/ICETOOL
rexx77 (New User)
Posted: Tue Dec 22, 2015 2:19 am

Hi,

I had a requirement to find specific sets of strings and extract the matching records, and I did this with a simple SORT card.

Now the scope has expanded to include old datasets: I have to find and extract the matching records for the given strings from those as well. I can put all 15 or 20 old datasets in the SORTIN concatenation and run the SORT to extract the matching records, but I also need to know which file each record was extracted from. Is there any way I can write the properties of the dataset to the spool, or to some dataset, when a particular string is found in that file?
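For reference, a minimal sketch of the kind of SORT card I mean; the dataset names, search positions, and the string itself are placeholders, not the real requirement:

Code:
//STEP01   EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=MY.OLD.DATASET1,DISP=SHR
//         DD DSN=MY.OLD.DATASET2,DISP=SHR
//SORTOUT  DD DSN=MY.MATCHED.RECORDS,
//            DISP=(NEW,CATLG,DELETE),
//            SPACE=(CYL,(5,5),RLSE)
//SYSIN    DD *
* COPY ONLY RECORDS WHOSE BYTES 1-80 CONTAIN THE SEARCH STRING
  OPTION COPY
  INCLUDE COND=(1,80,SS,EQ,C'MYSTRING')
/*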

Appreciate any help on this. Thanks.
Terry Heinze (JCL Moderator)
Posted: Tue Dec 22, 2015 2:35 am

If you change from SORT to PGM=ISRSUPC, that utility will tell you which PDS the string was found in: CONCAT#(1), CONCAT#(2), CONCAT#(3), ... CONCAT#(n).
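A minimal sketch of such an ISRSUPC (SuperC) search job, with placeholder dataset names and search string; the OUTDD listing identifies each input dataset along with the lines that matched:

Code:
//SEARCH   EXEC PGM=ISRSUPC,
//            PARM=(SRCHCMP,'ANYC')
//* NEWDD HOLDS THE DATASET(S) TO BE SEARCHED
//NEWDD    DD DSN=MY.OLD.DATASET1,DISP=SHR
//         DD DSN=MY.OLD.DATASET2,DISP=SHR
//OUTDD    DD SYSOUT=*
//SYSIN    DD *
  SRCHFOR 'MYSTRING'
/*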
Nic Clouston (Global Moderator)
Posted: Tue Dec 22, 2015 3:32 am

Why not just execute the sort for each dataset instead of concatenating them?
rexx77 (New User)
Posted: Tue Dec 22, 2015 9:53 pm

@Nic - I want to avoid the time it takes to submit a separate job and process the files one by one. That will be my last resort if nothing else works out.

If anyone else has any other ideas, please do share.
Rohit Umarjikar (Global Moderator)
Posted: Tue Dec 22, 2015 10:44 pm

For 15-20 files it shouldn't take more than a couple of hours to prepare your report if you do them one by one, as Nic also advised. However, if there are many more datasets, then you need a program in place to create the control cards dynamically and then execute them, which can take longer than just doing each one at a time. In short: write a COBOL program that generates the job, and submit the JCL through INTRDR.
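For illustration, once the JCL has been generated, a common way to hand it to JES is the internal reader. A sketch using IEBGENER with placeholder names (a COBOL program could equally write its generated JCL directly to a DD allocated to SYSOUT=(*,INTRDR)):

Code:
//SUBMIT   EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//* SYSUT1 HOLDS THE GENERATED JCL TO BE SUBMITTED
//SYSUT1   DD DSN=MY.GENERATED.JCL,DISP=SHR
//* THE INTRDR SYSOUT WRITER PASSES IT TO JES AS A NEW JOB
//SYSUT2   DD SYSOUT=(*,INTRDR)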
RahulG31 (Active User)
Posted: Tue Dec 22, 2015 11:19 pm

I am a little surprised here. What is stopping you from using ISRSUPC, as Terry mentioned? That utility exists for exactly this purpose. Why would you want to do this with SORT?

As output from ISRSUPC you get the file numbers, the file names, and the records containing the string. What more do you need?

If you are concerned about formatting, you can first search with ISRSUPC and then format the result with SORT according to your needs.
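A sketch of that two-step shape, with placeholder names: the ISRSUPC listing is passed to a SORT step, where INCLUDE/OUTREC statements could reformat it as needed (OPTION COPY is just a stand-in here):

Code:
//SEARCH   EXEC PGM=ISRSUPC,PARM=(SRCHCMP,'ANYC')
//NEWDD    DD DSN=MY.OLD.DATASET1,DISP=SHR
//OUTDD    DD DSN=&&HITS,DISP=(NEW,PASS),
//            SPACE=(CYL,(5,5),RLSE)
//SYSIN    DD *
  SRCHFOR 'MYSTRING'
/*
//FORMAT   EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=&&HITS,DISP=(OLD,DELETE)
//SORTOUT  DD SYSOUT=*
//SYSIN    DD *
  OPTION COPY
/*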
Nic Clouston (Global Moderator)
Posted: Wed Dec 23, 2015 5:18 am

Why not one job with 15-20 steps? It would take less than 10 minutes to set up. The ISPF editor is good for this sort of thing: create the job card (copy it from another job), create step 1, repeat step 1 n times, edit the dataset names in steps 2 through n. Submit. Done.
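A sketch of that shape, showing two of the n steps with placeholder names; because each step reads exactly one dataset, anything that step extracts is known to have come from that dataset:

Code:
//FINDJOB  JOB (ACCT),'FIND STRING',CLASS=A,MSGCLASS=X
//STEP01   EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=MY.OLD.DATASET1,DISP=SHR
//SORTOUT  DD SYSOUT=*
//SYSIN    DD *
  OPTION COPY
  INCLUDE COND=(1,80,SS,EQ,C'MYSTRING')
/*
//STEP02   EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=MY.OLD.DATASET2,DISP=SHR
//SORTOUT  DD SYSOUT=*
//SYSIN    DD *
  OPTION COPY
  INCLUDE COND=(1,80,SS,EQ,C'MYSTRING')
/*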
rexx77 (New User)
Posted: Wed Dec 23, 2015 10:40 pm

@RahulG31 - I have other rules that will not be possible with the ISRSUPC utility, hence I felt SORT would be the better option.

@Nic - the file count of 15 to 20 I gave was a rough estimate; basically I need to run it against a GDG base, which may contain more files than that. Anyhow, I plan to do it in batches, say, one job for every 20 datasets.

Thanks, folks, for all the ideas. Appreciate it.
Rohit Umarjikar (Global Moderator)
Posted: Wed Dec 23, 2015 10:50 pm

Quote:
the file count of 15 to 20 I gave was a rough estimate; basically I need to run it against a GDG base, which may contain more files than that.

Do you not know how many generations you retain? We roll off after 30 generations. Alternatively, you can add one step that copies each day's data to a new GDG; that way you don't have to worry about this whole procedure, because all you have to do is look into that one new generation.
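For illustration, such a copy step might look like this; MY.SEARCH.GDG is a placeholder, and its GDG base is assumed to be defined already:

Code:
//COPYDAY  EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=MY.DAILY.FILE,DISP=SHR
//* LIKE= COPIES THE DCB/SPACE ATTRIBUTES OF THE INPUT DATASET
//SYSUT2   DD DSN=MY.SEARCH.GDG(+1),
//            DISP=(NEW,CATLG,DELETE),
//            LIKE=MY.DAILY.FILE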