Hello - we have a DB2 system that comprises about 100 DB2 tables in systems test. Many of the tables have RI associated with them. Overall it holds about 5000 rows, so it is not very big. I have been asked to load a new set of data. Before doing this I decided to back up the existing 100 tables to QSAM files and then reload them using LOAD REPLACE. I expected no issues, but alas, it failed with both RI issues and duplicate key issues. To get around the RI issues I loaded with ENFORCE NO, mainly because I didn't know the parent/child relationships, and then took the tables out of check pending via CHECK DATA. However, I still have lots of duplicate key issues. How can it be that a simple copy/reload gives me such issues with duplicate keys? Is it possible that over the last few years some of the tables have been altered from 'allow duplicates' to 'not allow duplicates'? It's one of those tasks that I thought would be simple suddenly becoming a headache. Any thoughts?
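You can check both of those theories against the DB2 catalog before reloading anything. A sketch of the kind of queries I mean (the schema name, table name, and key columns are placeholders you would substitute for your own): SYSINDEXES tells you which indexes now reject duplicates, and SYSRELS gives you the parent/child RI pairs you said you didn't know.

```sql
-- Which indexes enforce uniqueness? UNIQUERULE = 'D' means
-- duplicates allowed; other values (U, P, C, ...) mean unique.
SELECT NAME, TBNAME, UNIQUERULE
FROM SYSIBM.SYSINDEXES
WHERE TBCREATOR = 'MYSCHEMA';

-- Parent/child RI relationships: CREATOR.TBNAME is the child,
-- REFTBCREATOR.REFTBNAME is the parent it references.
SELECT CREATOR, TBNAME, REFTBCREATOR, REFTBNAME, RELNAME
FROM SYSIBM.SYSRELS
WHERE CREATOR = 'MYSCHEMA';

-- Find the offending duplicates in one suspect table
-- (key columns are placeholders).
SELECT KEYCOL1, KEYCOL2, COUNT(*)
FROM MYSCHEMA.MYTABLE
GROUP BY KEYCOL1, KEYCOL2
HAVING COUNT(*) > 1;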
Also, are you sure you ran LOAD REPLACE and not LOAD RESUME? I am assuming you are reloading the same unloaded data and not data from another source; if it is an all-new set of data, then the data quality needs to be checked against the RI constraints.
Joined: 28 Aug 2007 Posts: 1691 Location: Tiruppur, India
Is it possible that you used SHRLEVEL CHANGE for the UNLOAD? If the unload was done while updates, deletes, and inserts were being applied to the table, it might produce duplicate rows, missing rows, or incorrect values.
SHRLEVEL
Specifies whether other processes can access or update the table space or partitions while the data is being unloaded. UNLOAD ignores the SHRLEVEL specification when the source object is an image copy data set. The default value is SHRLEVEL CHANGE ISOLATION CS.

CHANGE
Specifies that rows can be read, inserted, updated, and deleted from the table space or partition while the data is being unloaded.

ISOLATION
Specifies the isolation level with SHRLEVEL CHANGE.

CS
Indicates that the UNLOAD utility is to read rows in cursor stability mode. With CS, the UNLOAD utility assumes CURRENTDATA(NO).

UR
Indicates that uncommitted rows, if they exist, are to be unloaded. The unload operation is performed with minimal interference from the other DB2 operations that are applied to the objects from which the data is being unloaded.

SKIP LOCKED DATA
Specifies that the UNLOAD utility is to skip rows on which incompatible locks are held by other transactions. This option applies to row level or page level locks.

REFERENCE
Specifies that during the unload operation, rows of the tables can be read, but cannot be inserted, updated, or deleted by other DB2 threads. When you specify SHRLEVEL REFERENCE, the UNLOAD utility drains writers on the table space from which the data is to be unloaded. When data is unloaded from multiple partitions, the drain lock is obtained for all of the selected partitions in the UTILINIT phase.
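If a concurrent-update unload is the cause, taking the backup with SHRLEVEL REFERENCE gives a consistent image. A rough sketch of the control statement (the database, table space, and table names are made-up placeholders; adjust to your shop's naming):

```
UNLOAD TABLESPACE MYDB.MYTS
  FROM TABLE MYSCHEMA.MYTABLE
  SHRLEVEL REFERENCE
```

With SHRLEVEL REFERENCE, readers can continue but writers are drained for the duration, so the unloaded rows cannot contain in-flight duplicates.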
Joined: 14 Oct 2005 Posts: 1206 Location: Bangalore,India
Just another thought...
I hope that you don't have identity columns defined on these tables. If identity columns are part of these tables, I suggest using the IDENTITYOVERRIDE parameter with ENFORCE NO, followed by CHECK DATA, because I have seen shops use an IDENTITY column as the primary key and later use it as a foreign key in child tables.
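For what it is worth, a sketch of what that LOAD step and the follow-up CHECK DATA might look like (the INDDN, database, table space, and table names here are placeholders, not your actual objects):

```
LOAD DATA INDDN SYSREC LOG NO REPLACE
  ENFORCE NO
  INTO TABLE MYSCHEMA.MYTABLE
    IDENTITYOVERRIDE

CHECK DATA TABLESPACE MYDB.MYTS
  SCOPE PENDING
```

IDENTITYOVERRIDE makes LOAD keep the identity values from the unloaded data instead of generating new ones, which matters when those values are used as foreign keys in child tables; CHECK DATA then clears the CHECK-pending state that ENFORCE NO leaves behind.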