I have an FB file with LRECL 200 containing, say, 100 records. I have to split it into many files.
I/P file:
col1
51
51
51
52
52
53
54
54
55
o/p file1:
51
51
51
o/p file2:
52
52
o/p file3:
53
o/p file4:
54
54
o/p file5:
55
There will not always be 5 output files; it depends on the number of distinct records in the i/p file.
Could someone help me with this?
Of the 200 bytes, which ones constitute the key? The first 2 bytes?
By the way, the number of distinct records in a file is equal to the number of records in the file, so I assume you mean distinct keys.
Yes, the first 2 bytes are the key. What I meant by distinct records is distinct with respect to the first 2 bytes only.
I am sorry for the confusion. In my case, as explained in my first post, there should be 5 files written, as we have 5 distinct col1 values.
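Just to restate the requirement outside of JCL terms, here is a minimal sketch of the grouping logic (Python is used only for illustration, not as mainframe code; the sample records are the ones from the first post, with the first 2 bytes as the key):

```python
from collections import OrderedDict

def split_by_key(records, key_len=2):
    """Group fixed-length records by their leading key bytes.

    Returns an ordered mapping key -> list of records, one entry per
    distinct key -- so every "output file" has at least one record.
    """
    groups = OrderedDict()
    for rec in records:
        groups.setdefault(rec[:key_len], []).append(rec)
    return groups

# Sample data from the first post (only col1 shown, key = first 2 bytes).
recs = ["51", "51", "51", "52", "52", "53", "54", "54", "55"]
groups = split_by_key(recs)
# 5 distinct keys -> 5 output files, with 3, 2, 1, 2 and 1 records.
```

The point is simply that the number of output files equals the number of distinct keys, which is only known after reading the input.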
Joined: 15 Feb 2005 Posts: 7129 Location: San Jose, CA
If you can't submit a job dynamically, then you can't allocate only the ddnames you need dynamically, so the best you can do is have DD statements for OUT00 to OUT99 hardcoded. You can use a DFSORT job like this, but you will end up with empty data sets (e.g. if there's no record with a key of 08, OUT08 will be empty).
It does, but I just got to know that the key is defined as 9(09) and is not restricted to only 2 bytes.
Also, we should not have any empty datasets as outputs, as they are FTP'd to UNIX and they don't want empty files!! I guess this can be handled through IDCAMS/DFSORT.
So, with this new requirement, I am not sure how to proceed.
I suppose this should be handled outside mainframes!
Please advise.
Joined: 18 Jul 2007 Posts: 2146 Location: At my coffee table
[humor]
Dang PC bigots.....
MF ASM can do dang near anything you think your glorious small box can do, and with reliability and security unheard of on your infernal platform.....
[/humor]
9(09)
So the range can be 000000000 to 999999999?
To do it with that range and to not have empty data sets, you would have to be able to dynamically create a job and submit it to the internal reader.
I could probably show you a way to create the 100 data sets, some of which could be empty. But it's quitting time here, so if you want me to show you that tomorrow, let me know. However, I don't know how you'd get rid of the empty data sets in a way that would work for you.
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
Quote:
I am not sure how to get this done in mainframes!!
To repeat:
Quote:
What might you do "outside mainframes" that cannot be accomplished on the mainframe?
Or do you simply mean that you don't know how to do this regardless of the environment? If you know an acceptable way to accomplish this "outside mainframes" post that solution and possibly someone can post a mainframe alternative.
If your organization prefers scheduled jobs rather than submitting via the internal reader, you could create the specific JCL (I'd use a PROCedure) needed to write only the files that have data. The JCL-creation job and the data-creation jobs would be defined as predecessor/successor jobs.
Madhu,
Assuming RECFM=FB and LRECL=200 (as stated earlier) and the key is in positions 1-9, here's a DFSORT job that will split the input file into output files. However, there may be empty files.
OUT00 would have the 000000001 records, OUT01 would have the 000000002 records and OUT02 would have the 999999999 records. DFSORT would not write to any of the other OUTnn files.
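The OUTnn assignment described above can be sketched like this (again an illustration in Python, not the DFSORT job itself; the 100-file limit corresponds to the hardcoded OUT00-OUT99 DD statements):

```python
def assign_ddnames(keys, max_files=100):
    """Map each distinct key, in sorted order, to a ddname OUTnn:
    OUT00 gets the lowest key's records, OUT01 the next, and so on.
    Any OUTnn beyond the number of distinct keys is left unwritten,
    which is exactly where the empty data sets come from."""
    distinct = sorted(set(keys))
    if len(distinct) > max_files:
        raise ValueError("more distinct keys than OUTnn DD statements")
    return {k: "OUT%02d" % i for i, k in enumerate(distinct)}

# Keys from the example above: 000000001 -> OUT00, 000000002 -> OUT01,
# 999999999 -> OUT02; OUT03-OUT99 would stay empty.
mapping = assign_ddnames(["000000002", "999999999", "000000001"])
```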
Madhu,
Unless you really want this to be accomplished via ICETOOL or DFSORT, why can't you use dynamic allocation via COBOL? You can even add some logic in the COBOL code to check for empty files.
For BPXWDYN there is an IBM link.
For PUTENV I have an external link, but I am not sure if I am allowed to post it here; in any case, there is plenty of material and sample code available on the internet.
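For what it's worth, the dynamic-allocation idea itself can be sketched like this (Python used only to show the logic, not the COBOL/BPXWDYN calls; the OUT<key> file naming is my own assumption, and a real FB file would be read as fixed 200-byte slices rather than text lines):

```python
import os

def split_file_by_key(in_path, out_dir, key_len=9):
    """Write one output file per distinct key, creating each file only
    when its key is first seen -- so empty output files never appear,
    no matter how large the key range (e.g. 9(09)) is."""
    handles = {}
    try:
        with open(in_path, "r") as f:
            for rec in f:
                key = rec[:key_len]
                if key not in handles:
                    # First record with this key: allocate the file now.
                    handles[key] = open(os.path.join(out_dir, "OUT" + key), "w")
                handles[key].write(rec)
    finally:
        for h in handles.values():
            h.close()
    return sorted(handles)  # keys actually written
```

This is the same "allocate on first use" behavior the COBOL program would get from BPXWDYN, which is why no empty datasets would need to be cleaned up afterwards.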