How can I remove all duplicates from a file? I know SORT with SUM FIELDS=NONE can be used to remove duplicates, but suppose a file contains 5 records with the same key: SORT with SUM FIELDS=NONE removes 4 of them and keeps one. I don't want any of those five records at all.
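One common way to drop every record whose key is duplicated (rather than keeping one copy) is ICETOOL's SELECT operator with NODUPS, which writes out only records whose key occurs exactly once. A minimal sketch, assuming the key is in positions 1-5 as character data; the dataset names and key layout here are placeholders, not from the original post:

```jcl
//REMDUPS  EXEC PGM=ICETOOL
//TOOLMSG  DD SYSOUT=*
//DFSMSG   DD SYSOUT=*
//IN       DD DSN=your.input.file,DISP=SHR
//OUT      DD DSN=your.output.file,DISP=(NEW,CATLG,DELETE),
//            SPACE=(CYL,(1,1),RLSE)
//TOOLIN   DD *
* NODUPS keeps only records whose ON key appears exactly once,
* so all copies of a duplicated key are removed from OUT.
  SELECT FROM(IN) TO(OUT) ON(1,5,CH) NODUPS
/*
```

With the five-duplicate example above, none of the five records would reach OUT; a record with a unique key would be copied through unchanged.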
ICE143I 0 BLOCKSET COPY TECHNIQUE SELECTED
ICE250I 0 VISIT http://www.ibm.com/storage/dfsort FOR DFSORT PAPERS, EXAMPLES AN
ICE000I 1 - CONTROL STATEMENTS FOR 5694-A01, Z/OS DFSORT V1R10 - 08:40 ON WED MA
SORT FIELDS=COPY
ICE201I F RECORD TYPE IS F - DATA STARTS IN POSITION 1
This is a screenshot of the spool. Hope it helps now.
********************************* Top of Data **********************************
15
99
******************************** Bottom of Data ********************************
Is there a reason as to why you have the DCB parameters hard coded in your job? SORT automatically calculates the DCB from the input file or the control cards (INREC/OUTREC).
Skolusu wrote:
expat,
Is there a reason as to why you have the DCB parameters hard coded in your job? SORT automatically calculates DCB from the input file or the control cards (inrec/outrec)
Just force of habit.
I do not get too many opportunities to use much of the DFSORT functionality and just code it up like I would most jobs.
Thanks for the updated code. After seeing your code and a quick search of my archive library, it appears I did actually have an example of the SELECT statement that included the USING parameter.
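For anyone following along, SELECT's USING(xxxx) parameter points ICETOOL at an xxxxCNTL DD containing additional DFSORT control statements (for example, an INCLUDE or OMIT applied before duplicate selection). A sketch of that shape, with the dataset names, key positions, and the OMIT condition all assumed for illustration:

```jcl
//TOOL     EXEC PGM=ICETOOL
//TOOLMSG  DD SYSOUT=*
//DFSMSG   DD SYSOUT=*
//IN       DD DSN=your.input.file,DISP=SHR
//OUT      DD DSN=your.output.file,DISP=(NEW,CATLG,DELETE),
//            SPACE=(CYL,(1,1),RLSE)
//TOOLIN   DD *
* USING(CTL1) tells ICETOOL to read extra control statements
* from the CTL1CNTL DD below.
  SELECT FROM(IN) TO(OUT) ON(1,5,CH) NODUPS USING(CTL1)
/*
//CTL1CNTL DD *
* Hypothetical pre-filter: drop records with a blank key
* before the NODUPS selection is applied.
  OMIT COND=(1,5,CH,EQ,C'     ')
/*
```

The control statements in CTL1CNTL are applied to the input before SELECT decides which keys are unique, which is the usual reason for combining USING with SELECT.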