sameer
New User
Joined: 10 Mar 2005 Posts: 41 Location: United States Of America
I am running a DB2 program that deletes records from a database table, taking its input from a flat file.
I pass the key from the flat file and then issue a DELETE on the table like this:
DELETE FROM <TABLE> WHERE KEY_FIELD = :HOST-KEY
I have around 12,000 records in the input file and the program runs dead slow: almost a second for each delete operation.
While this job is running, I am not able to browse/query the table (SPUFI); I get -911 (deadlock).
I fear the program is locking the ENTIRE table (a table lock). I tried isolation level CS while binding, but no luck.
Any suggestions to improve the performance?
Some more info:
Input file -> delete operation -> write the record to an output file as DELETED.
Thanks,
Sameer
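For reference, the loop being described boils down to something like the sketch below. This is only an illustration, not the actual program: the file, table and field names (KEYIN, MYTABLE, KEY_FIELD, HOST-KEY) are placeholders, the write of the deleted record to the output file is omitted, and the commit counter uses the every-50-deletes frequency discussed later in the thread.

Code:
       IDENTIFICATION DIVISION.
       PROGRAM-ID. DELSKTCH.
      * Sketch of the read-key / delete / commit loop described
      * above. All names are placeholders, not the real ones.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT KEY-FILE ASSIGN TO KEYIN.
       DATA DIVISION.
       FILE SECTION.
       FD  KEY-FILE.
       01  KEY-REC                  PIC X(20).
       WORKING-STORAGE SECTION.
           EXEC SQL INCLUDE SQLCA END-EXEC.
       01  HOST-KEY                 PIC X(20).
       01  WS-EOF-SW                PIC X          VALUE 'N'.
       01  WS-UNCOMMITTED           PIC S9(4) COMP VALUE 0.
       PROCEDURE DIVISION.
           OPEN INPUT KEY-FILE
           PERFORM UNTIL WS-EOF-SW = 'Y'
               READ KEY-FILE INTO HOST-KEY
                   AT END
                       MOVE 'Y' TO WS-EOF-SW
                   NOT AT END
                       PERFORM 1000-DELETE-ONE-KEY
               END-READ
           END-PERFORM
      *    Commit whatever is left in the final unit of work
           EXEC SQL COMMIT END-EXEC
           CLOSE KEY-FILE
           GOBACK.

       1000-DELETE-ONE-KEY.
           EXEC SQL
               DELETE FROM MYTABLE
                WHERE KEY_FIELD = :HOST-KEY
           END-EXEC
           IF SQLCODE = 0
               ADD 1 TO WS-UNCOMMITTED
      *        Commit every 50 deletes so a single unit of work
      *        does not hold its locks for the whole run
               IF WS-UNCOMMITTED >= 50
                   EXEC SQL COMMIT END-EXEC
                   MOVE 0 TO WS-UNCOMMITTED
               END-IF
           END-IF.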
acevedo
Active User
Joined: 11 May 2005 Posts: 344 Location: Spain
What about COMMIT?
sameer
New User
Joined: 10 Mar 2005 Posts: 41 Location: United States Of America
I COMMIT after every 1000 records.
acevedo
Active User
Joined: 11 May 2005 Posts: 344 Location: Spain
In my opinion, that is too many operations without being committed; I'd try, for example, 50.
What about KEY_FIELD? Is it actually part of an index, or the unique index?
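One way to answer the index question from SPUFI is to query the DB2 catalog for the indexes defined on the table and the columns they contain. This is only a sketch: substitute the real table name and creator for 'YOUR_TABLE' and 'YOUR_SCHEMA', then check whether KEY_FIELD shows up as the leading column (COLSEQ = 1) of some index.

Code:
-- List every index on the table and the columns it contains,
-- in key order, together with the uniqueness rule.
SELECT I.NAME       AS INDEX_NAME
     , I.UNIQUERULE AS UNIQUE_RULE
     , K.COLNAME    AS COLUMN_NAME
     , K.COLSEQ     AS POSITION_IN_KEY
  FROM SYSIBM.SYSINDEXES I
     , SYSIBM.SYSKEYS    K
 WHERE K.IXNAME    = I.NAME
   AND K.IXCREATOR = I.CREATOR
   AND I.TBNAME    = 'YOUR_TABLE'
   AND I.TBCREATOR = 'YOUR_SCHEMA'
 ORDER BY I.NAME, K.COLSEQ;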
Arun Raj
Moderator
Joined: 17 Oct 2006 Posts: 2481 Location: @my desk
Quote:
In my opinion, that is too many operations without being committed; I'd try, for example, 50.
acevedo,
Won't this worsen the situation further?
ntmartins
New User
Joined: 03 Mar 2008 Posts: 11 Location: Lisboa, Portugal
If you have the keys in a flat file, I believe you can use a DB2 utility to do the delete.
Another solution is to commit every n records. It could be 50, 100 or 500.
But after running that job you should REORG the table.
acevedo
Active User
Joined: 11 May 2005 Posts: 344 Location: Spain
arcvns wrote:
Quote:
In my opinion, that is too many operations without being committed; I'd try, for example, 50.
acevedo,
Won't this worsen the situation further?
As far as I know, no.
Arun Raj
Moderator
Joined: 17 Oct 2006 Posts: 2481 Location: @my desk
AFAIK, the overall processing time is inversely proportional to commit frequency.
Arun Raj
Moderator
Joined: 17 Oct 2006 Posts: 2481 Location: @my desk
Quote:
AFAIK, the overall processing time is inversely proportional to commit frequency.
My bad... I meant "directly". My own experience is that too many commits lead to much longer processing time.
sameer
New User
Joined: 10 Mar 2005 Posts: 41 Location: United States Of America
I have had no luck even after changing the COMMIT frequency to 50.
Bind statements:
DSN SYSTEM(DB2T)
BIND PLAN(RKMMPG02) OWNER(PZ6597A) QUALIFIER(VMA1)+
NODEFER(PREPARE) VALIDATE(RUN) ISOLATION(UR)+
CACHESIZE(4096) CURRENTDATA(NO) DEGREE(1)+
SQLRULES(DB2) ACQUIRE(USE) RELEASE(DEALLOCATE)+
EXPLAIN(NO) REOPT(NONE) KEEPDYNAMIC(NO)+
IMMEDWRITE(NO) DBPROTOCOL(DRDA) ENCODING(37)+
DISCONNECT(EXPLICIT)+
MEMBER(RKMMPG02)+
LIBRARY('PZ6597A.CHGMAN.CUS3.#037764.DBR')+
ENABLE(*)+
ACTION(REPLACE) RETAIN
END
EXIT CODE(&LASTCC)
END
Can anyone suggest changes?
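For comparison, note that the bind above specifies ISOLATION(UR), while the first post says isolation level CS was tried. A CS variant of the same BIND would look like the sketch below (plan, owner and library names copied from the statements above; this is an illustration to discuss with your DBA, not a guaranteed fix). RELEASE(COMMIT) is shown instead of the posted RELEASE(DEALLOCATE), since DEALLOCATE keeps the higher-level (table space) locks the plan acquires until the plan is deallocated, rather than freeing them at each commit point.

Code:
DSN SYSTEM(DB2T)
BIND PLAN(RKMMPG02) OWNER(PZ6597A) QUALIFIER(VMA1)+
NODEFER(PREPARE) VALIDATE(RUN) ISOLATION(CS)+
CACHESIZE(4096) CURRENTDATA(NO) DEGREE(1)+
SQLRULES(DB2) ACQUIRE(USE) RELEASE(COMMIT)+
EXPLAIN(NO) REOPT(NONE) KEEPDYNAMIC(NO)+
IMMEDWRITE(NO) DBPROTOCOL(DRDA) ENCODING(37)+
DISCONNECT(EXPLICIT)+
MEMBER(RKMMPG02)+
LIBRARY('PZ6597A.CHGMAN.CUS3.#037764.DBR')+
ENABLE(*)+
ACTION(REPLACE) RETAIN
END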
Arun Raj
Moderator
Joined: 17 Oct 2006 Posts: 2481 Location: @my desk
acevedo wrote:
What about KEY_FIELD? Is it actually part of an index, or the unique index?
What about this?
ntmartins
New User
Joined: 03 Mar 2008 Posts: 11 Location: Lisboa, Portugal
I think your input file should be in the same order as the index.
I had that kind of problem, and the cause was a previous program making a huge number of inserts. The solution was to REORG the table before execution and then bind the program.
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19243 Location: Inside the Matrix
Hello,
I believe you need to find out why "one" delete takes a second.
One of my processes uses an external file of "keys" and deletes between 1 and 20 rows per key (the last run, yesterday, deleted 120k+ rows), and it always runs in less than a minute.
Changing the frequency of the commits will probably not make much difference in the time it takes to process an individual delete. It might reduce the frequency of the -911s, but it won't have much impact on the time it takes to delete.
sameer
New User
Joined: 10 Mar 2005 Posts: 41 Location: United States Of America
One more thing I have observed: while the program is running, we are not able to access the database at all, even for browse.
1) It looks like the program acquires a lock on the entire table until it processes the last record.
2) Although the program has logic to commit every 50 records, it has been observed that it is committing every second. (I ran it with 30 records and they were committed.)
Where should we check the commit frequency, other than in the program?
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19243 Location: Inside the Matrix
Hello,
I believe you need to change your focus. The frequency of the COMMITs is not your performance problem.
Quote:
It looks like the program acquires a lock on the entire table until it processes the last record.
Not a good practice... Every time I have encountered an entire-table lock, it has been due to poor design.
Quote:
Although the program has logic to commit every 50 records, it has been observed that it is committing every second.
Earlier, you posted that it takes 1 second to accomplish 1 delete. I believe your first order of business needs to be defining what is really going on.
Right now it appears that something is running undesirably, but little or nothing is known about where the resources are being used.
acevedo
Active User
Joined: 11 May 2005 Posts: 344 Location: Spain
arcvns wrote:
Quote:
AFAIK, the overall processing time is inversely proportional to commit frequency.
My bad... I meant "directly". My own experience is that too many commits lead to much longer processing time.
Yes, but the OP talked about -911, and we know nothing about the other transactions/jobs accessing the DB2 table. In that case committing will avoid the -911.
I've read 1 second per delete... yes, that's a lot.
Maybe, just maybe, another approach is to unload the table and (using COBOL, SORT or SYNCSORT) produce a file containing only the rows that must be kept, then LOAD with REPLACE. Again, I know nothing about other access to the table.
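If the keys to be deleted were first loaded into a work table, the "rows that must be kept" could also be pulled with SQL instead of with COBOL or a sort product. That work table is purely an assumption layered on top of the suggestion above (which proposes filtering the unloaded file with COBOL, SORT or SYNCSORT), and the names MYTABLE and DELETE_KEYS are placeholders.

Code:
-- Select the rows to keep: every row whose key does NOT appear
-- in the (hypothetical) DELETE_KEYS work table of flat-file keys.
SELECT T.*
  FROM MYTABLE T
 WHERE NOT EXISTS
       (SELECT 1
          FROM DELETE_KEYS D
         WHERE D.KEY_FIELD = T.KEY_FIELD);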
dharmendra_kp
New User
Joined: 10 Mar 2005 Posts: 33 Location: Lewiston
I presume the input file does not contain the exact key. First the program generates the key from the input file and then it deletes; in the process of generating the key it may be going into a loop. It also appears there is no index on the key field. You may also need to check whether the table has gone into CHECK-pending status. Otherwise there is no reason why it should take so much time.
umasankarmf
New User
Joined: 08 Sep 2008 Posts: 43 Location: india
Hi Dharmendra,
In my opinion, you are absolutely correct.