Krishnapriya
New User
Joined: 05 Mar 2007 Posts: 1 Location: kerala
We need to sort the file with
SORT FIELDS=(2,13,CH,A,617,26,CH,D) and get only one record for each 2,13,CH,A key in the output.
i.e., if the file has 3 rows with positions 2,13 and 617,26 as
'XXXXXXXXXXXXX' 'time1',
'XXXXXXXXXXXXX' 'time1',
'XXXXXXXXXXXXX' 'time2',
the output should contain only
'XXXXXXXXXXXXX' 'time2'.
We need this in one step as we cannot afford multiple sort steps with huge files.
As of now we use two steps for this sort, with the sort cards below.

Step 1:
SORT FIELDS=(2,13,CH,A,617,26,CH,D)
OUTFIL OUTREC=(1:1,751)
SUM FIELDS=NONE

Step 2:
SORT FIELDS=(2,13,CH,A)
SUM FIELDS=NONE

I need to achieve this in a single sort step that uses a single sort card rather than the two different sort cards above.
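To make the required selection logic concrete, here is a minimal sketch of the same semantics outside of SORT (a Python illustration only, not a mainframe solution): order records by key ascending and timestamp descending, then keep the first record seen for each key.

```python
def keep_latest(records):
    """records: list of (key, timestamp) tuples.

    Equivalent intent to sorting key-ascending / timestamp-descending
    and then keeping only the first record per key.
    """
    # Two stable sorts: timestamp descending first, then key ascending,
    # so within each key the highest timestamp ends up first.
    by_time_desc = sorted(records, key=lambda r: r[1], reverse=True)
    ordered = sorted(by_time_desc, key=lambda r: r[0])

    out, seen = [], set()
    for key, ts in ordered:
        if key not in seen:  # keep only the first record per key
            seen.add(key)
            out.append((key, ts))
    return out

rows = [('XXXXXXXXXXXXX', 'time1'),
        ('XXXXXXXXXXXXX', 'time1'),
        ('XXXXXXXXXXXXX', 'time2')]
print(keep_latest(rows))  # [('XXXXXXXXXXXXX', 'time2')]
```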
Arun Raj
Moderator
Joined: 17 Oct 2006 Posts: 2481 Location: @my desk
Krishnapriya,
Welcome to the forums. I guess you want to do this using the SORT utility installed in your shop - DFSORT, SyncSort, etc. Please be aware that SyncSort-related questions are discussed in the JCL forum and DFSORT-related queries in the DFSORT part of this forum. If you mention which product you have, it will help the moderators move this topic to the appropriate forum.
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
Quote:
We need this in one step as we cannot afford multiple sort steps with huge files.
The number of steps is not the performance issue - the number of passes of the data is. There are many posted solutions that use a single step but still make multiple passes of the data, which is what you actually need to avoid if possible.
What percent of the records are discarded as duplicates in the first pass you posted? It may be that the second pass, over the "dups removed" data, is not so large. . .
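If the shop runs DFSORT, one approach often posted for this kind of requirement is ICETOOL's SELECT operator with FIRST, which does the sort and the per-key selection in one job step. The sketch below is hedged: dataset names are placeholders, and if I recall the documented behavior correctly, SELECT accepts a SORT statement in the USING data set as long as the ON fields lead the SORT FIELDS - verify this against the DFSORT Application Programming Guide for your release (SyncSort would need its own equivalent).

```
//STEP1    EXEC PGM=ICETOOL
//TOOLMSG  DD SYSOUT=*
//DFSMSG   DD SYSOUT=*
//IN       DD DSN=your.input.file,DISP=SHR
//OUT      DD DSN=your.output.file,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(10,10))
//TOOLIN   DD *
  SELECT FROM(IN) TO(OUT) ON(2,13,CH) FIRST USING(CTL1)
/*
//CTL1CNTL DD *
  SORT FIELDS=(2,13,CH,A,617,26,CH,D)
/*
```

With the records ordered by key ascending and timestamp descending, FIRST keeps the record with the highest timestamp for each 2,13 key.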