Nagendran
New User
Joined: 24 Jan 2007 Posts: 89 Location: USA
Hi,
I am using the DB2 LOAD utility to load records into a DB2 database. While loading, I want to discard any duplicate records before they are inserted.
Note: there is no primary key on the table.
For example, say each record is 50 bytes long; I have to enforce a constraint across all 50 bytes to check for duplicates.
Can anybody please help me?
Regards,
Nagu
expat
Global Moderator
Joined: 14 Mar 2007 Posts: 8796 Location: Welsh Wales
Why not sort the input file first, and use the capabilities of the sort product to remove the duplicate records?
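A minimal sketch of that suggestion as a DFSORT step, assuming 50-byte fixed-length records as described in the original post (dataset names are hypothetical). Sorting on the entire record and specifying SUM FIELDS=NONE keeps one copy of each record and drops the rest:

```jcl
//DEDUP   EXEC PGM=SORT
//SYSOUT  DD SYSOUT=*
//SORTIN  DD DSN=MY.INPUT.RECS,DISP=SHR
//SORTOUT DD DSN=MY.DEDUPED.RECS,DISP=(NEW,CATLG,DELETE),
//           UNIT=SYSDA,SPACE=(CYL,(5,5)),
//           DCB=(RECFM=FB,LRECL=50)
//SYSIN   DD *
  SORT FIELDS=(1,50,CH,A)     ENTIRE 50-BYTE RECORD IS THE KEY
  SUM FIELDS=NONE             DISCARD RECORDS WITH DUPLICATE KEYS
/*
```

The SORTOUT dataset can then be used as the input to the LOAD step.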
Nagendran
New User
Joined: 24 Jan 2007 Posts: 89 Location: USA
That's not my problem.
The problem for me is that when I try to insert a record which is already in the DB, it should be discarded.
stodolas
Active Member
Joined: 13 Jun 2007 Posts: 631 Location: Wisconsin
Write a program to check for existence in the table first, or define a primary key across all the fields in the table. I don't believe the LOAD utility can perform any logic.
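The second suggestion could be sketched in SQL as a unique index spanning every column (the schema, table, and column names below are hypothetical); with such an index in place, rows that duplicate an existing key are rejected at insert time, and LOAD can write the offending rows to a discard dataset during index enforcement:

```sql
-- Hypothetical table: the unique index covers every column,
-- so two rows count as duplicates only when all fields match.
CREATE UNIQUE INDEX MYSCHEMA.XDEDUP
    ON MYSCHEMA.MYTABLE (COL1, COL2, COL3);
```

Note that a unique index across many columns carries its own storage and maintenance cost on large tables.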
SharathG
New User
Joined: 23 Jan 2007 Posts: 12 Location: India
Hi,
The best option to load a table from a sequential dataset is to use LOAD with the REPLACE option. It will replace all of the existing data in the table with the data from the PS dataset. Here's the syntax:

LOAD DATA LOG NO
  REPLACE
  NOCOPYPEND
  DISCARDS 1
  STATISTICS TABLE(ALL) INDEX(ALL)
  INDDN SYSREC00
  INTO TABLE <table_name>
  (columns)

You can add this to the customized load control card and use it while loading.
ashwinreddy
Active User
Joined: 16 Sep 2004 Posts: 106 Location: Hyderabad
Hi,
We use the REPLACE option to replace all of the data in a tablespace, not to eliminate duplicates.
As far as I know, we can't avoid duplicates with the LOAD utility alone.
As other members suggested, write a program to avoid the duplicates, or a JCL job to avoid them.
My knowledge may be limited; let's see what other members say.
Cheers
Ashwin
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19243 Location: Inside the Matrix
Hello,
From 2 of your posts:

Quote:
The problem for me is that when I try to insert a record which is already in the DB, it should be discarded.

Quote:
Note: there is no primary key in the DB.

With no primary key, how would you identify a duplicate?
If you mean entire-record duplicates, I suppose you could unload the table, combine the new and the old records by sorting on the entire record length while discarding duplicates, and then load the output file back into the table.
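The unload-and-merge approach above could be sketched as a single sort step (dataset names are hypothetical, and 50-byte fixed-length records are assumed as in the original question): concatenate the unloaded old records and the new input under SORTIN, sort on the full record, and drop the duplicates before reloading with LOAD REPLACE:

```jcl
//MERGE   EXEC PGM=SORT
//SYSOUT  DD SYSOUT=*
//SORTIN  DD DSN=MY.UNLOADED.OLD,DISP=SHR
//        DD DSN=MY.NEW.RECS,DISP=SHR
//SORTOUT DD DSN=MY.LOAD.INPUT,DISP=(NEW,CATLG,DELETE),
//           UNIT=SYSDA,SPACE=(CYL,(5,5)),
//           DCB=(RECFM=FB,LRECL=50)
//SYSIN   DD *
  SORT FIELDS=(1,50,CH,A)     ENTIRE RECORD IS THE SORT KEY
  SUM FIELDS=NONE             KEEP ONE COPY OF EACH RECORD
/*
```

Because both the old and new records flow through the same dedup, this also removes any duplicates already present in the table.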
stodolas
Active Member
Joined: 13 Jun 2007 Posts: 631 Location: Wisconsin
Table access without a primary key will be fine on small tables, but once the table grows beyond a few thousand records you are going to see poor performance without one.
There are very few good reasons for not defining a primary key.
Nagendran
New User
Joined: 24 Jan 2007 Posts: 89 Location: USA
I got the solution from your replies.
Thanks to all.
Suryanarayana.tadala
New User
Joined: 03 Nov 2005 Posts: 43 Location: St.Louis
Nagendran,
could you please elaborate on the solution that you used?
Nagendran
New User
Joined: 24 Jan 2007 Posts: 89 Location: USA
Hi,
1) I wrote a query in QMF to identify the duplicate records, then moved those duplicate records to a file.
2) Then I wrote a program to delete all the duplicate records from the DB.
3) Finally, using the LOAD utility, I loaded the records from the file into the DB.
Thanks,
Nagu
Suryanarayana.tadala
New User
Joined: 03 Nov 2005 Posts: 43 Location: St.Louis
Thanks. I was under the impression that you had some utility which does all this!