A particular DB2 table has around 20 million records. This table needs to be updated with around 10 million records from a flat file. Some of the records in the file will be updates to existing table rows (as opposed to inserts); in the case of an update, the file record must overwrite the existing table data. There are 2 ways to accomplish this -
1. Unload the table data to another flat file. Perform a file-matching logic to separate updates from inserts. Create one file containing the original unaffected table records, the updated records, and the new inserts. Then LOAD REPLACE (or LOAD with RESUME NO) the table.
2. Use File-AID with the SQL INSERT option to directly update/insert into the table.
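The matching logic in method 1 would essentially be a two-way merge of the sorted unload file against the sorted input file, keeping counts of updates and inserts along the way. A rough sketch of what I have in mind (the key/data tuple layout is just a placeholder, not the actual record format):

```python
def match_merge(master_rows, file_rows):
    """Merge sorted unload rows with sorted file rows.

    Both inputs are lists of (key, data) tuples sorted by key.
    Returns (merged_rows, update_count, insert_count); merged_rows is
    what would be written out and reloaded with LOAD REPLACE.
    """
    merged, updates, inserts = [], 0, 0
    i = j = 0
    while i < len(master_rows) and j < len(file_rows):
        mkey, mdata = master_rows[i]
        fkey, fdata = file_rows[j]
        if mkey == fkey:        # file record updates an existing row
            merged.append((fkey, fdata))
            updates += 1
            i += 1
            j += 1
        elif mkey < fkey:       # unaffected table row, keep as-is
            merged.append((mkey, mdata))
            i += 1
        else:                   # file record with no match: an insert
            merged.append((fkey, fdata))
            inserts += 1
            j += 1
    merged.extend(master_rows[i:])   # remaining unaffected rows
    inserts += len(file_rows) - j    # remaining file rows are inserts
    merged.extend(file_rows[j:])
    return merged, updates, inserts
```

This way the update and insert counts fall out of the match step itself, before the load is even run.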
I also have to keep a count of how many records got updated and how many were inserted, and report it after the load is done.
Given the above scenario, which method should I use? Which method will be faster? If I use method 2, can I capture the record counts for updates and inserts and report them? Is there any other method that is more suitable and that I am ignoring?