jz1b0c
Active User
Joined: 25 Jan 2004 Posts: 160 Location: Toronto, Canada
Hi All,
My requirement is:
File A - daily transaction file (VB, sequential)
File M - master file (VB, sequential)
Compare field: (1,13,CH).
Every day I have to merge File A into File M. If a record from the input (File A) is already present in File M, then it should not be included in the output (File M). Basically, File M (the master file) should not have any duplicates.
Sorting the master file with SUM FIELDS=NONE is not advised: there should be no operations on File M, and duplicates should be avoided only at the time of the merge.
Can anyone suggest some sort/merge features to do this? I believe REPRO is a good candidate here; any syntax?
shivashunmugam Muthu
Active User
Joined: 22 Jul 2005 Posts: 114 Location: Chennai
Hi,
I don't think you can use utilities for this. If you use REPRO, it will just append the records at the end; how would you eliminate the duplicates?
My suggestion is to write COBOL compare logic for these two files (with the compare key you mentioned).
Best Regards,
Shiva
Frank Yaeger
DFSORT Developer
Joined: 15 Feb 2005 Posts: 7129 Location: San Jose, CA
Quote: |
Sorting the master file with SUM FIELDS=NONE is not advised. |
If the records in the two input files are already in sorted order, you can use MERGE and SUM FIELDS=NONE.
jz1b0c
Active User
Joined: 25 Jan 2004 Posts: 160 Location: Toronto, Canada
Hi Frank,
MERGE with SUM FIELDS=NONE does not eliminate duplicates being written to the master file; it only filters the duplicates within the input (transaction) file.
Please share an example with us if you have a working one.
Frank Yaeger
DFSORT Developer
Joined: 15 Feb 2005 Posts: 7129 Location: San Jose, CA
Quote: |
MERGE with SUM FIELDS=NONE does not eliminate duplicates being written to the master file; it only filters the duplicates within the input (transaction) file. Please share an example with us if you have a working one. |
Well, I had a feeling I didn't understand your requirement, and I guess the feeling was right. Please show an example of the input records in each file and the expected output records, and explain the "rules" in terms of the example. Also, what is the LRECL of each input file?
Rupesh.Kothari
Member of the Month
Joined: 27 Apr 2005 Posts: 463
Hi,
Try the following code. It eliminates the duplicates in the master file.
Code: |
//SORTTST  JOB (ACCT#),'SORT',NOTIFY=&SYSUID,CLASS=T,MSGCLASS=X
//STEP01   EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SORTIN   DD DSN=M,DISP=SHR
//         DD DSN=A,DISP=SHR
//SORTOUT  DD DSN=M,DISP=SHR
//SYSIN    DD *
  SORT FIELDS=(1,13,CH,A)
  SUM FIELDS=NONE
/*
|
Regards,
Rupesh Kothari
somasundaran_k
Active User
Joined: 03 Jun 2003 Posts: 134
Masade,
Some thoughts: I think you can use DFSORT's ICETOOL for this.
1. Since you mentioned that you do not want to perform any operation other than the merge on the master file, copy the master file to a temporary file. Something like:
COPY FROM(IN1) TO(T1)
2. Assuming the transaction file may have duplicates, remove the duplicates.
3. Using SPLICE, compare the temporary master file and the de-duplicated transaction file, and get the unmatched records from the transaction file.
4. Merge/append the unmatched records to the master file.
Check this DFSORT trick, which may be helpful (but it uses FB files):
www-1.ibm.com/servers/storage/support/software/sort/mvs/tricks/srtmst02.html#t05
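The first two of these steps can be sketched in ICETOOL terms. This is a sketch only: the DD names MASTER, TRANS, T1 and T2 are placeholders, and the SPLICE step for points 3 and 4 would need to be worked out from the linked trick.
Code: |
//S1       EXEC PGM=ICETOOL
//TOOLMSG  DD SYSOUT=*
//DFSMSG   DD SYSOUT=*
//MASTER   DD DSN=... master file
//TRANS    DD DSN=... transaction file
//T1       DD DSN=&&T1,DISP=(,PASS),UNIT=SYSDA
//T2       DD DSN=&&T2,DISP=(,PASS),UNIT=SYSDA
//TOOLIN   DD *
* Step 1: copy the master file to a temporary file
  COPY FROM(MASTER) TO(T1)
* Step 2: keep only the first record for each key, removing
* duplicates within the transaction file
  SELECT FROM(TRANS) TO(T2) ON(1,13,CH) FIRST
/*
|
Steps 3 and 4 would then SPLICE T1 and T2 on the same key and append the unmatched transaction records to the master, as in the linked trick.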
Regds,
-Som
jz1b0c
Active User
Joined: 25 Jan 2004 Posts: 160 Location: Toronto, Canada
Thanks guys for your replies.
Here is the requirement by example.
Daily file:
cust1 date1
cust2 date1
cust3 date1
...
cust30 date1
Master file:
cust1 date2
cust3 date3
cust9 daten
cust32 datey
...
cust50 datex
Now, cust1 and cust3 are already present in the master file, so when I merge, these records (cust1 and cust3) should not be written to the master (since the master file already has those records).
My daily file will have more than a million records, and the master file is a monthly consolidated one with around 40 million records.
One way is to first merge both files into a temporary file, then apply SUM FIELDS=NONE and recreate the master file. But I cannot do that, since I am not authorized to delete the master file and am not supposed to do so. Writing a COBOL program to append to the master file is ruled out as well.
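To make the expected result concrete: under these rules, the merged master for the sample above would keep the master's versions of the duplicate keys and pick up only the new daily keys, along the lines of:
Code: |
cust1 date2   <- kept from master; daily cust1 date1 dropped
cust2 date1   <- new key, added from the daily file
cust3 date3   <- kept from master; daily cust3 date1 dropped
...
|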
thanooz
New User
Joined: 28 Jun 2005 Posts: 99
Hi Masade,
You can do one thing: first sort those two files on the customer field. Then use a COBOL program that opens the master file in I-O mode with ACCESS IS RANDOM and the customer id as the key; that works only if the master is a VSAM KSDS.
If anything is wrong, correct me.
Thanks,
thanooz.
Frank Yaeger
DFSORT Developer
Joined: 15 Feb 2005 Posts: 7129 Location: San Jose, CA
Assuming that the records in each file are already sorted as shown in your example, you can use MERGE, EQUALS and SUM FIELDS=NONE like this:
Code: |
//S1 EXEC PGM=ICEMAN
//SYSOUT DD SYSOUT=*
//SORTIN01 DD DSN=... master file
//SORTIN02 DD DSN=... transaction file
//SORTOUT DD DSN=... output file
//SYSIN DD *
OPTION EQUALS
MERGE FIELDS=(...)
SUM FIELDS=NONE
/*
|
The key here is to use SORTIN01 for the master file and SORTIN02 for the transaction file along with the EQUALS option. That way, the first record of each set of duplicates will be kept and that will be the record from the master file.
If the records are not already sorted, you can use this job instead:
Code: |
//S2 EXEC PGM=ICEMAN
//SYSOUT DD SYSOUT=*
//SORTIN DD DSN=... master file
// DD DSN=... transaction file
//SORTOUT DD DSN=... output file
//SYSIN DD *
OPTION EQUALS
SORT FIELDS=(...)
SUM FIELDS=NONE
/*
|
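For reference, with the 13-byte key mentioned at the top of the thread (assuming those positions are already stated the way DFSORT expects for VB records, i.e. including the 4-byte RDW), the control statements for the MERGE variant would presumably read:
Code: |
  OPTION EQUALS
  MERGE FIELDS=(1,13,CH,A)
  SUM FIELDS=NONE
/*
|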