In the above file, the IMS records (i.e., the group of segments from SEG01 to SEG04) have to be written to the output file whenever there is a code '46' in segment SEG04 at position 10. So my output would look like below:
Output:-
Code:
----+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----
********************************* Top of Data *********************************
SEG01 ABCDE ....
SEG02 12345 ...
SEG04 46 ...
SEG04 50 ...
SEG01 AAAAA ....
SEG02 11111 ...
SEG04 46 ...
The order of the IMS records (i.e., from SEG01 to SEG04) shouldn't be changed while writing to the output file.
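Just to pin down the requirement, here is the selection logic sketched in Python. This is an illustration only, not DFSORT syntax: it assumes the segment name sits in columns 1-5 and the code in columns 10-11, as described above.

```python
def filter_groups(lines):
    """Keep each IMS record group (SEG01 up to the next SEG01) only if
    any SEG04 segment in it carries code '46' at position 10 (1-based)."""
    groups, current = [], []
    for line in lines:
        # A SEG01 record starts a new group
        if line.startswith("SEG01") and current:
            groups.append(current)
            current = []
        current.append(line)
    if current:
        groups.append(current)
    # A group qualifies when any of its SEG04 segments has '46' at position 10
    keep = [g for g in groups
            if any(s.startswith("SEG04") and s[9:11] == "46" for s in g)]
    # Flatten, preserving the original segment order within each group
    return [s for g in keep for s in g]
```

Note that the qualifying group is written out whole, so a "SEG04 50" record in the same group comes along with it.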
I have written the below code in ICETOOL to implement the above requirement.
Joined: 06 Jun 2008 Posts: 8696 Location: Dubuque, Iowa, USA
1. 17,791,669 records of 1,999 bytes each is over 35 billion bytes, or about twelve 3390 model 3 volumes. Have you discussed this with your site support group? There may not be that many volumes' worth of space available for you to put this file on.
2. VOL=(,,,95) is pretty much useless -- SMS managed data sets are limited to 59 volumes, not 95, and if the data set is not SMS managed the volume count is taken from the UNIT parameter for non-specific disk requests, not the VOL parameter.
3. If you have not talked to your site support group, DO SO NOW! They should have been your FIRST contact and this forum should be your last choice.
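The sizing arithmetic in point 1 checks out; a quick back-of-the-envelope verification (the 3390-3 geometry figures below are nominal, and effective capacity with real block sizes is somewhat lower):

```python
# Sanity-check the sizing estimate in point 1 (figures are approximate)
records = 17_791_669
lrecl = 1_999
total_bytes = records * lrecl                      # ~35.6 billion bytes

# 3390-3: 3,339 cylinders x 15 tracks/cyl x 56,664 bytes/track (theoretical)
bytes_per_3390_3 = 3_339 * 15 * 56_664             # ~2.84 GB per volume
volumes_needed = total_bytes / bytes_per_3390_3    # ~12.5 volumes
```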
First of all, I don't understand why you have
SEG04 50 ...
in the output file?
Secondly, it looks like you have opted for a far more complicated approach than it needs to be...
Tell us your PTF level using the thread below from Frank:
ibmmainframes.com/viewtopic.php?t=33389
Joined: 15 Feb 2005 Posts: 7129 Location: San Jose, CA
Quote:
2. VOL=(,,,95) is pretty much useless -- SMS managed data sets are limited to 59 volumes, not 95, and if the data set is not SMS managed the volume count is taken from the UNIT parameter for non-specific disk requests, not the VOL parameter.
Actually, it's more useless than that. DFSORT does NOT support multivolume work data sets. It will only use the first volume. But you can have up to 255 single volume work data sets.
Quote:
First of all, I don't understand why you have
SEG04 50 ...
in the output file?
As one of the SEG04 segments contains code '46' at position 10, the whole group of IMS records (i.e., the segments from SEG01 to SEG04) is written to the output file (including "SEG04 50", as it is also part of the group).
And here are the PTF details, which you have asked for:
Code:
ICE201I H RECORD TYPE
In the meanwhile, I will try splitting the file into two... I can perform the above task using SAS as well, but I want to learn DFSORT, hence trying to solve it through DFSORT.
Joined: 28 Jul 2006 Posts: 1702 Location: Australia
Hi,
Code:
ICE046A
ICE046A SORT CAPACITY EXCEEDED - RECORD COUNT: n
Explanation: Critical. DFSORT was not able to complete processing with
the intermediate storage available (Hiperspace or disk work data sets).
For work data sets with secondary allocation allowed, DFSORT overrides
system B37 abends and continues processing; this message is issued only
when no more space is available in Hiperspace or on any allocated work
data set.
Note: DFSORT uses only the first volume of multi-volume work data sets.
The count n is either an approximation of the number of records or is the
total number of records that DFSORT was able to read in before it used all
of the available intermediate storage.
rajesh1183,
You are changing your VB input file to FB. Is there a reason for that, or have you not noticed it?
Your job's sort cards need to be rewritten for efficiency, but as far as your sort-capacity error message is concerned...
My 2 cents: try removing all of the SORTWK** DDs and adding the lines below to your sort job. If that doesn't work, try increasing to 16/32 and see if that helps.
Quote:
You are changing your VB input file to FB. Is there a reason for that, or have you not noticed it?
You still did not answer this question, but I am going to assume that you want to retain the LRECL and RECFM of the input file and just want to omit the set of records satisfying your conditions.
Your sample input data and the sort card you have shown don't match. I don't see any "ZPO01" records in your sample input. If your input file is RECFM=VB, LRECL=32756, BLKSIZE=32760, then by using PUSH at position 2000 you are losing/overwriting some of the data. Also, by using PUSH at the end/middle of the record, you are defeating the very purpose of a VB file.
Do you really care about only the first 1999 bytes of this input file? If that is the case, you may be able to use the GROUP function and won't have to use JOINKEYS.
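For readers new to it: DFSORT's GROUP function (WHEN=GROUP with BEGIN and PUSH=(...,ID=n)) stamps every record with an identifier of the group it belongs to. A rough Python equivalent of that tagging step, assuming (as in this thread) that a SEG01 record begins each group:

```python
def tag_groups(lines, begin_prefix="SEG01"):
    """Mimic DFSORT WHEN=GROUP,BEGIN=...,PUSH=(...,ID=8): bump a group
    ID at each group-starting record and attach it to every record."""
    gid = 0
    tagged = []
    for line in lines:
        if line.startswith(begin_prefix):
            gid += 1
        tagged.append((line, f"{gid:08d}"))  # ID=8 -> 8-digit zero-padded ID
    return tagged
```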
Lastly, try the job below to see if it works... Since your input is RECFM=VB with LRECL=32756, you are playing very close to DFSORT's boundary conditions, but you may benefit from JOINKEYS.
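The JOINKEYS idea can likewise be sketched in Python rather than DFSORT control statements: treat the group-tagged file as F1, derive F2 as the set of group IDs whose SEG04 carries '46' at position 10, and an inner join on the ID keeps whole qualifying groups in their original order. The record layout assumed here is illustrative, not taken from the actual job.

```python
def joinkeys_select(tagged):
    """tagged: list of (record, group_id) pairs, as produced by a first
    pass that pushed a group ID onto every record.
    F2 is the set of group IDs containing a SEG04 with '46' at
    position 10 (1-based); the inner join on group_id keeps whole
    qualifying groups, in order."""
    f2 = {gid for rec, gid in tagged
          if rec.startswith("SEG04") and rec[9:11] == "46"}
    return [rec for rec, gid in tagged if gid in f2]
```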