padma_prakasam
New User
Joined: 06 Oct 2005 Posts: 31
Hi,
I have a file containing some records with duplicates. I need to write JCL that writes the unique records to one output file and the duplicate records to another output file. This has to be done through JCL alone. How do I code the JCL for this?
Thanks,
padma.
crm
New User

Joined: 14 Nov 2005 Posts: 25
Hi Padma,
You can do this by using SUM FIELDS=NONE. Here is the JCL (the three input files are concatenated under SORTIN, sorted, and records with duplicate keys are deleted):
//JOB1     JOB MAC,123,CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=SORT
//SYSOUT   DD SYSOUT=A
//SORTIN   DD DSN=FILE1,DISP=SHR
//         DD DSN=FILE2,DISP=SHR
//         DD DSN=FILE3,DISP=SHR
//SORTOUT  DD DSN=IBM.MFS,DISP=(NEW,CATLG,DELETE),UNIT=SYSDA,
//         SPACE=(CYL,(2,1),RLSE),
//         DCB=(RECFM=FB,LRECL=80,BLKSIZE=800)
//SYSIN    DD *
  SORT FIELDS=(1,80,CH,A)
  SUM FIELDS=NONE
/*
//
Let me give an example.
Let FILE1 have: ram, sam, mac
Let FILE2 have: paul, ram
Let FILE3 have: sam, paul, mac
Then the output file will contain each of ram, sam, mac and paul exactly once (in sorted order, since the records are sorted before the duplicates are deleted).
If there are any mistakes in my answer, let me know.
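For anyone who wants to trace the logic off-mainframe, here is a minimal Python sketch of what the merge-and-dedup job does, using the example records from the post above (the file contents are just illustrative lists):

```python
# Python sketch of the merge-and-dedup job: concatenate FILE1-FILE3,
# sort the records, then keep only the first of each run of equal
# records (the effect of SUM FIELDS=NONE).
file1 = ["ram", "sam", "mac"]
file2 = ["paul", "ram"]
file3 = ["sam", "paul", "mac"]

records = sorted(file1 + file2 + file3)

outfile = []
for rec in records:
    if not outfile or outfile[-1] != rec:
        outfile.append(rec)

print(outfile)  # ['mac', 'paul', 'ram', 'sam'] - each name once, sorted
```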
vicky10001
Active User
Joined: 13 Jul 2005 Posts: 136
Hi,
Please use this JCL. If anything is wrong, please let me know.
//STEP1    EXEC PGM=SORT
//SORTIN   DD DSN=input.dataset,DISP=SHR
//SORTOUT  DD DSN=output.dataset,
//         DCB=(BLKSIZE=0,RECFM=FB),
//         SPACE=(CYL,(50,25),RLSE),UNIT=SYSDA,
//         DISP=(NEW,CATLG,DELETE)
//SYSOUT   DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSIN    DD *
  SORT FIELDS=(9,4,BI,A)
  SUM FIELDS=NONE
/*
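As a cross-check of those control statements, here is a small Python sketch of sorting on a 4-byte key at position 9 and deleting duplicate-key records; the record contents are made up for illustration:

```python
# Python sketch of SORT FIELDS=(9,4,BI,A) + SUM FIELDS=NONE:
# sort fixed-length records on the 4 bytes starting at position 9
# (1-based), then keep only the first record of each key.
records = [
    "00000001K001 SMITH",
    "00000002K002 JONES",
    "00000003K001 BROWN",  # duplicate key K001
]

def sort_key(rec):
    return rec[8:12]  # DFSORT position 9, length 4 -> rec[8:12]

deduped = []
for rec in sorted(records, key=sort_key):
    if not deduped or sort_key(deduped[-1]) != sort_key(rec):
        deduped.append(rec)

print([sort_key(r) for r in deduped])  # ['K001', 'K002']
```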
padma_prakasam
New User
Joined: 06 Oct 2005 Posts: 31
Hi,
My requirement is to split a file having duplicates such that one output file contains the unique records and the other contains the duplicate records.
reshma
New User
Joined: 10 Sep 2005 Posts: 9 Location: software engineer
Padma,
vicky has sent you a correct solution for your requirement. Go through it carefully.
padma_prakasam
New User
Joined: 06 Oct 2005 Posts: 31
Hi,
The JCL which Mr. vicky sent will retrieve the unique records from the input file. That's fine. Along with that, I need one more output file containing the duplicate records that were omitted from the first output file.
Suppose the input file contents are like this:
A,B,C,D,A,E,F,B (A and B appear twice)
O/P 1 should be A,B,C,D,E,F
O/P 2 should be A,B
stly
New User
Joined: 25 Jul 2005 Posts: 93
Hi Padma, use SUM FIELDS=NONE,XSUM.
The JCL must then contain a DD statement named SORTXSUM; the duplicate records deleted by SUM are written to that data set.
Please correct me if I am wrong.
vicky10001
Active User
Joined: 13 Jul 2005 Posts: 136
Please use this JCL. The SORTXSUM DD receives the duplicate records that SUM deletes (&HQUAL, &SRTSZ, &SRTWK1 and &SRTWK2 are symbolic parameters):
//SORTXSUM DD DSN=&HQUAL..O4XS4787,
//         DISP=(NEW,CATLG,DELETE),
//         DCB=(LRECL=19,BLKSIZE=0,RECFM=FB),
//         SPACE=(CYL,(50,25),RLSE),UNIT=SYSDA
//SORTWK01 DD SPACE=&SRTSZ.,UNIT=&SRTWK1
//SORTWK02 DD SPACE=&SRTSZ.,UNIT=&SRTWK2
//SORTWK03 DD SPACE=&SRTSZ.,UNIT=&SRTWK1
//SORTWK04 DD SPACE=&SRTSZ.,UNIT=&SRTWK2
//SYSIN    DD *
  SORT FIELDS=(9,4,BI,A)
  SUM FIELDS=NONE,XSUM
/*
If I am wrong, please let me know.
Frank Yaeger
DFSORT Developer

Joined: 15 Feb 2005 Posts: 7129 Location: San Jose, CA
Padma,
Here's a DFSORT/ICETOOL job that will do what you asked for:
Code:
//S1 EXEC PGM=ICETOOL
//TOOLMSG DD SYSOUT=*
//DFSMSG DD SYSOUT=*
//IN DD *
A
B
C
D
A
E
F
B
/*
//FIRST DD DSN=... output file1
//REST DD DSN=... output file2
//TOOLIN DD *
SELECT FROM(IN) TO(FIRST) ON(1,1,CH) FIRST DISCARD(REST)
/*
FIRST will have:
A
B
C
D
E
F
REST will have:
A
B
If you're not familiar with DFSORT and DFSORT's ICETOOL, I'd suggest reading through "z/OS DFSORT: Getting Started". It's an excellent tutorial, with lots of examples, that will show you how to use DFSORT, DFSORT's ICETOOL and DFSORT Symbols. You can access it online, along with all of the other DFSORT books.
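The SELECT/DISCARD logic can be traced by hand with a small Python sketch (single-byte records as in the job above; the in-line comments note where the sketch simplifies what ICETOOL does):

```python
# Python sketch of SELECT ... ON(1,1,CH) FIRST DISCARD(REST):
# the first record seen for each key goes to FIRST, every later
# record with the same key goes to REST. (This sketch processes the
# records in input order; SELECT works on the records sorted by the
# ON field, which gives the same result for this input.)
records = ["A", "B", "C", "D", "A", "E", "F", "B"]

seen = set()
first, rest = [], []
for rec in records:
    key = rec[0:1]  # ON(1,1,CH): the 1-byte key at position 1
    if key in seen:
        rest.append(rec)
    else:
        seen.add(key)
        first.append(rec)

print(first)  # ['A', 'B', 'C', 'D', 'E', 'F']
print(rest)   # ['A', 'B']
```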