Of the 200 bytes, which bytes constitute the key? The first 2 bytes?
By the way, the number of distinct records in a file is equal to the number of records in the file.
Yes, the first 2 bytes are the key. What I meant by distinct records is distinct with respect to the first 2 bytes only.
I am sorry for the confusion. In my case, as explained in my first post, there should be 5 files written, since we have 5 distinct col1 values.
If you can't submit a job dynamically, then you can't allocate only the ddnames you need dynamically, so the best you can do is hardcode DD statements for OUT00 through OUT99. You can use a DFSORT job like this, but you will end up with empty data sets (e.g. if there's no record with a key of 08, OUT08 will be empty).
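The hardcoded-DD approach could look roughly like this. This is a sketch, not a complete job: the data set names and attributes are placeholders, and it assumes the 2-byte character key is in positions 1-2 of the record:

```
//SPLIT    EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=your.input.file,DISP=SHR
//OUT00    DD DSN=your.output.out00,DISP=(NEW,CATLG),...
//OUT01    DD DSN=your.output.out01,DISP=(NEW,CATLG),...
//*  ... OUT02 through OUT99 coded the same way ...
//SYSIN    DD *
  OPTION COPY
  OUTFIL FNAMES=OUT00,INCLUDE=(1,2,CH,EQ,C'00')
  OUTFIL FNAMES=OUT01,INCLUDE=(1,2,CH,EQ,C'01')
* one OUTFIL statement per possible key value, up to OUT99
/*
```

Each OUTFIL writes only the records whose key matches its INCLUDE condition, which is why ddnames for key values that never occur end up as empty data sets.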
It does, but I just learned that the key is defined as 9(09) and is not restricted to only 2 bytes.
Also, we should not have any empty data sets as outputs, since they are FTPd to UNIX and the users there don't want empty files! I suppose this could be handled through IDCAMS/DFSORT.
So, with this new requirement, I am not sure how to proceed.
I suppose this should be handled outside mainframes!
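If the empty-file cleanup does end up "outside mainframes", one minimal sketch is to delete zero-length files in the landing directory on the UNIX side after the FTP. The directory and file names below are hypothetical, purely for illustration:

```shell
#!/bin/sh
# Hypothetical landing directory for the FTPd files.
DIR=/tmp/ftp_landing_demo

# Simulate the transfer: one file with data, one empty file.
mkdir -p "$DIR"
printf 'some data\n' > "$DIR/OUT01.dat"
: > "$DIR/OUT03.dat"            # zero-length file, as from an empty data set

# Remove any zero-length files so consumers never see empty outputs.
find "$DIR" -type f -size 0 -delete
```

This sidesteps the problem of conditionally deleting empty data sets on the host, at the cost of transferring files that are immediately discarded.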
To handle that key range without producing empty data sets, you would have to be able to dynamically create a job and submit it to the internal reader.
I could probably show you a way to create the 100 data sets, some of which could be empty. But it's quitting time here, so if you want me to show you that tomorrow, let me know. However, I don't know how you'd get rid of the empty data sets in a way that would work for you.
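For what it's worth, the "submit to the internal reader" step itself is usually just a copy step; here's a minimal sketch, assuming the dynamically generated JCL has already been written to a data set (the DSN is a placeholder):

```
//SUBMIT   EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=your.generated.jcl,DISP=SHR
//SYSUT2   DD SYSOUT=(A,INTRDR)
//SYSIN    DD DUMMY
```

IEBGENER copies the generated JCL to the INTRDR SYSOUT writer, which submits it as a new job. The hard part remains generating JCL that allocates only the ddnames that will actually receive records.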
I am not sure how to get this done on mainframes!
What might you do "outside mainframes" that cannot be accomplished on the mainframe?
Or do you simply mean that you don't know how to do this regardless of the environment? If you know an acceptable way to accomplish this "outside mainframes", post that solution and possibly someone can post a mainframe alternative.
If your organization prefers scheduled jobs rather than submission via the internal reader, you could create the specific JCL (I'd use a PROCedure) needed to write only the files that have data. The JCL-creation job and the data-creation jobs would be defined as predecessor/successor jobs in the scheduler.