Magna
New User
Joined: 30 Oct 2006 Posts: 24
We read an input file and, for each key field read in, delete the matching record from the table.
This increases the run time, and since it hits the table multiple times the performance is badly affected.
Is there any way to avoid this situation?
Can we do this using some utility?
Or can we tune the program simply by using a time-based commit instead of a commit frequency based on the number of records processed?
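The two commit strategies mentioned above could be sketched roughly as follows. This is only an illustration, not DB2/COBOL code: it uses Python's sqlite3 module as a stand-in database, and the table and column names (ACCOUNTS, ACCT_KEY) are made up for the example.

```python
import sqlite3
import time

def delete_keys(conn, keys, commit_every_n=1000, commit_every_secs=None):
    """Delete one row per key, committing either every N deletes
    (count-based) or every T seconds (time-based)."""
    cur = conn.cursor()
    last_commit = time.monotonic()
    for i, key in enumerate(keys, start=1):
        cur.execute("DELETE FROM ACCOUNTS WHERE ACCT_KEY = ?", (key,))
        if commit_every_secs is not None:
            # time-based commit: commit once the interval has elapsed
            if time.monotonic() - last_commit >= commit_every_secs:
                conn.commit()
                last_commit = time.monotonic()
        elif i % commit_every_n == 0:
            # count-based commit: commit every N records processed
            conn.commit()
    conn.commit()  # final commit for any remaining uncommitted deletes
```

Either way, every key still costs one indexed DELETE against the table, which is why the commit interval alone rarely changes total elapsed time much.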
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
Why does the table need to be "hit" multiple times for each delete record?
The timing of the commit will not have much (if any) tuning impact. It could affect the time needed to roll back or to perform a restart/recovery.
Let's call your file of keys the "transaction file" and your database table the "master file". How many rows are there in the master file? How many records are in the transaction file? If the number of delete transactions is a significant percentage of the number of rows in the master file, it may perform better if the table were unloaded, the files matched sequentially (dropping the keys to be deleted), and the table reloaded.
How long does the process run now? Are there multiple keys (indexes) on this table? If there are multiple keys per row, additional overhead is required to remove them.
Deleting many rows one row at a time is often a performance issue.
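The sequential match described above can be sketched in a few lines. This is a minimal illustration, assuming both the unloaded master file and the transaction (delete-key) file are sorted on the same key; the record layout (one key per record) is hypothetical.

```python
def match_drop(master_keys, delete_keys):
    """Yield master keys that have no matching delete transaction.
    Both inputs must be sorted ascending on the key."""
    deletes = iter(delete_keys)
    d = next(deletes, None)
    for m in master_keys:
        # advance the transaction file past keys below the current master key
        while d is not None and d < m:
            d = next(deletes, None)
        if d is None or m != d:
            yield m  # no matching delete transaction: keep this row
```

The surviving keys would then be written to the reload file. Each file is read exactly once, so the cost is two sequential passes plus the unload and reload, instead of one indexed delete per transaction.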
Magna
New User
Joined: 30 Oct 2006 Posts: 24
Well, we have more than 3 million records and it's a nightly update to the table... so unloading it every night and reloading would be a problem...
We had already thought of this, but then since we didn't get a negative reply for this idea we dropped that idea...
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
Please re-phrase this
Quote:
but then since we didn't get a negative reply for this idea we dropped that idea...
It is not at all clear.
You also did not answer the questions above - for us to provide any good suggestions, you need to provide the details.
With the little you've posted, I would imagine that the unload/load will not be a problem. For your site 3 million rows may be a large table - for many sites it is rather small. Something that could make a difference is how many keys (indexes) the table has. The more keys, the slower individual deletes run...
If you have not done so already, it would be a good idea to create a full-size test version of the table and see how long it takes to load.