shankarm
Active User
Joined: 17 May 2010 Posts: 175 Location: India
Hi,
I am looking for a batch process that will get me the number of records and the LRECL of a set of files.
I have approximately 6,000 files. The files can be on tape or DASD, and VB or FB. My requirement is to find the LRECL and record count of each. Is it possible in ICETOOL? Please advise.
Bill Woodger
Moderator Emeritus
Joined: 09 Mar 2011 Posts: 7309 Location: Inside the Matrix
shankarm,
Unless you have changed jobs recently, you have SyncSort. SyncTool is probably aliased to ICETOOL at your site. If you get ICE messages rather than SIT messages from an ICETOOL step, let me know and I'll move this back to DFSORT. For now it is taking a hike to JCL...
shankarm
We do have SyncSort. Is it possible in SyncSort?
Bill Woodger
You have told us how you have decided to service a requirement, but what is the actual requirement? The LRECL of a VB dataset just tells you how big the largest record can be; no record on the file need be that size.
Just counting records on the files gives you what? What is going to be done with the answers?
ICETOOL/SYNCTOOL has a COUNT operator, so counting records in itself is no problem. There is, however, no magic which counts 6,000 datasets for you. You need JCL and control cards for all 6,000.
If you can explain a bit more, we can make some suggestions.
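For a single dataset, a minimal ICETOOL/SYNCTOOL COUNT step looks like the sketch below (dataset name and DD names are illustrative); the record count appears in the TOOLMSG output:

```jcl
//COUNT01  EXEC PGM=ICETOOL
//TOOLMSG  DD SYSOUT=*
//DFSMSG   DD SYSOUT=*
//IN       DD DISP=SHR,DSN=YOUR.DATASET.NAME
//TOOLIN   DD *
  COUNT FROM(IN)
/*
```

Depending on how your site has set up the aliases, the program name may be SYNCTOOL rather than ICETOOL.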
shankarm
We need the number of records in these files for an estimation (mainframe migration project).
I understand that we have the COUNT operator. Do we need separate JCL for these? 6,000 JCLs?
Akatsukami
Global Moderator
Joined: 03 Oct 2009 Posts: 1788 Location: Bloomington, IL
shankarm wrote:
"We need the number of records in these files for an estimation (mainframe migration project). I understand that we have the COUNT operator. Do we need separate JCL for these? 6,000 JCLs?"
A standard confusion among software engineers.
You'll need 6,000 steps, not 6,000 jobs. A job can have up to 255 steps in it (although I wouldn't try to shoehorn the maximum number of steps into every job), and a PS data set, or a member of a PDS, containing JCL can have multiple jobs in it.
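As a sketch of that layout (job names, step names, and JOB card parameters are illustrative), one member can hold several jobs, each staying under the 255-step limit:

```jcl
//COUNTJ01 JOB (ACCT),'RECORD COUNTS 1',CLASS=A,MSGCLASS=X
//S0001    EXEC PGM=ICETOOL
//* ... up to 255 COUNT steps ...
//COUNTJ02 JOB (ACCT),'RECORD COUNTS 2',CLASS=A,MSGCLASS=X
//S0001    EXEC PGM=ICETOOL
//* ... next batch of steps ...
```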
shankarm
Got it. I will start creating the JCL.
Just a thought: so SyncSort does not allow us to use multiple COUNT operators in one step?
Example:
COUNT(SORTIN1)
COUNT(SORTIN2)
COUNT(SORTIN3)... etc.,
where SORTINn is the DD name. Do we have a way? I didn't see it in the syntax, but I just wanted to confirm with you guys in case I missed it.
enrico-sorichetti
Superior Member
Joined: 14 Mar 2007 Posts: 10873 Location: italy
Why not process the DCOLLECT and LISTCAT data?
For an estimate, the used-space percentage should be a fair approximation.
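A related on-platform shortcut, shown here as a sketch rather than a substitute for DCOLLECT/LISTCAT: the TSO/E REXX LISTDSI function returns the LRECL, RECFM, and used-space figures for a cataloged dataset without reading it. The DD name and layout below are illustrative; note that LISTDSI will not help with tape datasets and may trigger recalls of migrated ones.

```rexx
/* REXX sketch: report LRECL/RECFM/used space for each dataset  */
/* named in DD DSNLIST, one fully-qualified name per record.    */
"EXECIO * DISKR DSNLIST (STEM dsn. FINIS"
do i = 1 to dsn.0
  name = strip(dsn.i)
  lrc = listdsi("'"name"'")       /* fills SYSLRECL, SYSUSED... */
  if lrc <= 4 then
    say left(name, 44) syslrecl sysrecfm sysused sysunits
  else
    say left(name, 44) "LISTDSI rc="lrc sysreason
end
```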
Akatsukami
shankarm wrote:
"Got it. I will start creating the JCL."
If you have, or can easily get, a list of the data sets, it might be quicker and easier to write a programette -- Rexx, COBOL, or even another Syncsort job -- to generate the JCL with those 6,000 Synctool steps.
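A Rexx programette along those lines might look like the sketch below. It assumes the dataset list is in DD DSNLIST (one name per record) and writes the generated steps to DD JCLOUT; all names are illustrative, and the output still needs JOB cards added, with at most 255 steps per job:

```rexx
/* REXX sketch: generate one ICETOOL/SYNCTOOL COUNT step per    */
/* dataset named in DD DSNLIST. Names are illustrative.         */
"EXECIO * DISKR DSNLIST (STEM dsn. FINIS"
do i = 1 to dsn.0
  name = strip(dsn.i)
  queue "//CNT"right(i, 4, '0')" EXEC PGM=ICETOOL"
  queue "//TOOLMSG  DD SYSOUT=*"
  queue "//DFSMSG   DD SYSOUT=*"
  queue "//IN       DD DISP=SHR,DSN="name
  queue "//TOOLIN   DD *"
  queue "  COUNT FROM(IN)"
  queue "/*"
end
"EXECIO" queued() "DISKW JCLOUT (FINIS"
```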
Bill Woodger
Yes, it can be done like you outline with SyncTool. SyncSort provides no documentation for SyncTool, so you have to show initiative.
I don't know what the maximum number of COUNT operators you can have in a single step is, but if nothing else it is going to be limited by the size of your TIOT. I would imagine you can get 1,500 DDs into a single job, so you could do it in four shots. Almost.
The thing is, if you cram everything together, you're going to have to do something to tie DSN/LRECL and COUNT together.
If you do them individually, 6,000 steps, it'll be easier to grasp any one of them - but again you'll have a problem with 6,000 separate jobs to look at, particularly if someone wants some printed results :-)
If you do 4 * 1,500 DSNs, you can look to "automate" the collation of the details by extracting from the spool output and "parsing".
shankarm
Great, this is what I wanted to hear. I will try the options.
Quote:
"Yes, it can be done like you outline with SyncTool."
Pete Wilson
Active Member
Joined: 31 Dec 2009 Posts: 580 Location: London
Are these all DASD-based files? If so, are some migrated (archived)? Are some tape-based?
This needs some consideration, because it could impact DFHSM (or whatever archiving tool you have). It could also impact your tape subsystem.
A job with ~1,500 steps, depending on the file sizes and the media they're on, could be VERY long-running, and may require a TIME= parameter on the job card, otherwise it may abend with S322.
It would be worth sorting the list of datasets by certain criteria and setting up jobs accordingly, e.g.:
- Have one or more jobs specifically for tape files.
- Have one or more jobs for migrated (archived) files, sorted in turn into date ranges. For example, start with the most recent files so they're processed before they get migrated, and work back to the oldest files. This will help streamline the recalls.
- Possibly split the jobs for DASD-based files by size, with a job for large files and one for smaller files.