
DB2 SQL Performance issue


IBM Mainframe Forums -> DB2
sameer

New User


Joined: 10 Mar 2005
Posts: 41
Location: United States Of America

PostPosted: Fri Dec 05, 2008 5:54 pm

I am running a DB2 program that deletes records from a database table, taking input from a flat file.

I pass the key (from the flat file) and then issue a DELETE command on the table like:

Delete FROM <TABLE> WHERE KEY_FIELD = :HOST-KEY

I have around 12000 records in the input file and the program is running dead slow, almost a second for each delete operation.

While this job is running, I am not able to browse/query (SPUFI) the table. I am getting -911 (deadlock).

I fear this program is locking the ENTIRE table (a table lock). I have tried isolation level CS while binding, but no luck.

Any suggestions to improve the performance?

Some more info:
Input File -> Delete Operation -> write record to output file as DELETED.

Thanks
Sameer
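
For reference, a minimal COBOL sketch of the kind of delete loop described above, with a periodic COMMIT. All table, file, and field names here (MYTABLE, KEYIN, DELOUT, WS-COMMIT-FREQ) are illustrative assumptions, not the poster's actual code:

Code:
       IDENTIFICATION DIVISION.
       PROGRAM-ID. DELROWS.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT KEY-FILE     ASSIGN TO KEYIN.
           SELECT DELETED-FILE ASSIGN TO DELOUT.
       DATA DIVISION.
       FILE SECTION.
       FD  KEY-FILE.
       01  KEY-REC                 PIC X(20).
       FD  DELETED-FILE.
       01  DELETED-REC             PIC X(20).
       WORKING-STORAGE SECTION.
       01  WS-EOF                  PIC X          VALUE 'N'.
       01  WS-DEL-COUNT            PIC S9(4) COMP VALUE 0.
       01  WS-COMMIT-FREQ          PIC S9(4) COMP VALUE 50.
           EXEC SQL INCLUDE SQLCA END-EXEC.
       01  HOST-KEY                PIC X(20).
       PROCEDURE DIVISION.
           OPEN INPUT KEY-FILE
           OPEN OUTPUT DELETED-FILE
           PERFORM UNTIL WS-EOF = 'Y'
               READ KEY-FILE
                   AT END MOVE 'Y' TO WS-EOF
                   NOT AT END PERFORM DELETE-ONE-ROW
               END-READ
           END-PERFORM
      *    Commit whatever is left of the final batch
           EXEC SQL COMMIT END-EXEC
           CLOSE KEY-FILE DELETED-FILE
           GOBACK.
       DELETE-ONE-ROW.
           MOVE KEY-REC TO HOST-KEY
           EXEC SQL
               DELETE FROM MYTABLE
                WHERE KEY_FIELD = :HOST-KEY
           END-EXEC
           IF SQLCODE = 0
               WRITE DELETED-REC FROM KEY-REC
               ADD 1 TO WS-DEL-COUNT
      *        Commit every WS-COMMIT-FREQ deletes to release locks
               IF WS-DEL-COUNT >= WS-COMMIT-FREQ
                   EXEC SQL COMMIT END-EXEC
                   MOVE 0 TO WS-DEL-COUNT
               END-IF
           END-IF.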
acevedo

Active User


Joined: 11 May 2005
Posts: 344
Location: Spain

PostPosted: Fri Dec 05, 2008 6:32 pm

what about COMMIT?
sameer

New User


Joined: 10 Mar 2005
Posts: 41
Location: United States Of America

PostPosted: Fri Dec 05, 2008 6:35 pm

COMMIT after 1000 records.
acevedo

Active User


Joined: 11 May 2005
Posts: 344
Location: Spain

PostPosted: Fri Dec 05, 2008 6:44 pm

In my opinion... too many operations without being committed. I'd try with, for example, 50.


What about KEY_FIELD? Is it really part of an index, or the unique index?
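
One way to answer that question is to query the DB2 catalog. A sketch, assuming the table is VMA1.MYTABLE (names illustrative); UNIQUERULE 'U' or 'P' indicates a unique or primary-key index:

Code:
-- List each index on the table and its key columns (run in SPUFI)
SELECT I.NAME, I.UNIQUERULE, K.COLNAME, K.COLSEQ
  FROM SYSIBM.SYSINDEXES I,
       SYSIBM.SYSKEYS    K
 WHERE K.IXNAME    = I.NAME
   AND K.IXCREATOR = I.CREATOR
   AND I.TBNAME    = 'MYTABLE'
   AND I.TBCREATOR = 'VMA1'
 ORDER BY I.NAME, K.COLSEQ;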
Arun Raj

Moderator


Joined: 17 Oct 2006
Posts: 2481
Location: @my desk

PostPosted: Fri Dec 05, 2008 6:48 pm

Quote:
In my opinion... too many operations without being committed. I'd try with, for example, 50
acevedo,

Won't this worsen the situation further?
ntmartins

New User


Joined: 03 Mar 2008
Posts: 11
Location: Lisboa, Portugal

PostPosted: Fri Dec 05, 2008 7:01 pm

If you have the keys in a flat file, I believe you can use a DB2 utility to do the delete; see the sketch below.

Another solution is to commit every n records. It could be 50, 100 or 500.

But after running that job you should REORG the table.
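
One utility-based sketch of that idea: LOAD the key file into a one-column work table, then issue a single searched DELETE instead of 12000 individual ones. The work table VMA1.DELETE_KEYS and the 20-byte key layout are assumptions for illustration; note that LOAD REPLACE empties the whole table space, so the work table needs a table space of its own.

Code:
-- 1) Load the flat file of keys into the work table
LOAD DATA INDDN(KEYIN) REPLACE
  INTO TABLE VMA1.DELETE_KEYS
  (KEY_FIELD POSITION(1:20) CHAR(20))

-- 2) Delete all matching rows in one statement (SPUFI or DSNTEP2)
DELETE FROM VMA1.MYTABLE
 WHERE KEY_FIELD IN (SELECT KEY_FIELD FROM VMA1.DELETE_KEYS);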
acevedo

Active User


Joined: 11 May 2005
Posts: 344
Location: Spain

PostPosted: Fri Dec 05, 2008 7:17 pm

arcvns wrote:
Quote:
In my opinion... too many operations without being committed. I'd try with, for example, 50
acevedo,

Won't this worsen the situation further?


As far as I know, no.
Arun Raj

Moderator


Joined: 17 Oct 2006
Posts: 2481
Location: @my desk

PostPosted: Fri Dec 05, 2008 8:25 pm

AFAIK, the overall processing time is inversely proportional to commit frequency.
Arun Raj

Moderator


Joined: 17 Oct 2006
Posts: 2481
Location: @my desk

PostPosted: Fri Dec 05, 2008 8:43 pm

Quote:
AFAIK, the overall processing time is inversely proportional to commit frequency.
My bad... I meant "directly". My own experience tells me that too many commits lead to too much processing time.
sameer

New User


Joined: 10 Mar 2005
Posts: 41
Location: United States Of America

PostPosted: Fri Dec 05, 2008 8:53 pm

I have had no luck even after changing the COMMIT frequency to 50.

Bind Statements

DSN SYSTEM(DB2T)
BIND PLAN(RKMMPG02) OWNER(PZ6597A) QUALIFIER(VMA1)+
NODEFER(PREPARE) VALIDATE(RUN) ISOLATION(UR)+
CACHESIZE(4096) CURRENTDATA(NO) DEGREE(1)+
SQLRULES(DB2) ACQUIRE(USE) RELEASE(DEALLOCATE)+
EXPLAIN(NO) REOPT(NONE) KEEPDYNAMIC(NO)+
IMMEDWRITE(NO) DBPROTOCOL(DRDA) ENCODING(37)+
DISCONNECT(EXPLICIT)+
MEMBER(RKMMPG02)+
LIBRARY('PZ6597A.CHGMAN.CUS3.#037764.DBR')+
ENABLE(*)+
ACTION(REPLACE) RETAIN
END
EXIT CODE(&LASTCC)
END

Can anyone suggest changes?
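
Two of those options are worth a second look, and may explain the lock-out more than the commit frequency does. RELEASE(DEALLOCATE) keeps table space level locks until the plan is deallocated, i.e. until the job step ends, so the intermediate COMMITs never free them for SPUFI users. And ISOLATION(UR) only affects reads; the DELETEs still take exclusive locks. A hedged sketch of the changed bind, with RELEASE(COMMIT), ISOLATION(CS), and EXPLAIN(YES) turned on so the access path lands in PLAN_TABLE; everything else is left as posted:

Code:
DSN SYSTEM(DB2T)
BIND PLAN(RKMMPG02) OWNER(PZ6597A) QUALIFIER(VMA1)+
NODEFER(PREPARE) VALIDATE(RUN) ISOLATION(CS)+
CACHESIZE(4096) CURRENTDATA(NO) DEGREE(1)+
SQLRULES(DB2) ACQUIRE(USE) RELEASE(COMMIT)+
EXPLAIN(YES) REOPT(NONE) KEEPDYNAMIC(NO)+
IMMEDWRITE(NO) DBPROTOCOL(DRDA) ENCODING(37)+
DISCONNECT(EXPLICIT)+
MEMBER(RKMMPG02)+
LIBRARY('PZ6597A.CHGMAN.CUS3.#037764.DBR')+
ENABLE(*)+
ACTION(REPLACE) RETAIN
END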
Arun Raj

Moderator


Joined: 17 Oct 2006
Posts: 2481
Location: @my desk

PostPosted: Fri Dec 05, 2008 9:00 pm

acevedo wrote:
What about KEY_FIELD? Is it really part of an index, or the unique index?
What about this?
ntmartins

New User


Joined: 03 Mar 2008
Posts: 11
Location: Lisboa, Portugal

PostPosted: Fri Dec 05, 2008 9:52 pm

I think your input file should be in the same order as the index.

I had that kind of problem, and the cause was a previous program making a huge number of inserts. The solution was to REORG the table before execution and rebind the program.
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Fri Dec 05, 2008 10:08 pm

Hello,

I believe you need to find out why "one" delete takes a second.

One of my processes uses an external file of "keys" and deletes between 1 and 20 rows per key (the last run yesterday deleted 120k+ rows) and always runs in less than a minute.

Changing the frequency of the commits will probably not make much difference in the time it takes to process an individual delete. It might reduce the frequency of the -911, but it won't have much impact on the time it takes to delete.
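
A sketch of one way to pin that down: EXPLAIN the delete and check the access path. ACCESSTYPE 'R' (table space scan) would explain one second per row; 'I' with MATCHCOLS > 0 means a matching index scan. This assumes a PLAN_TABLE exists under your authorization ID; the table and key value are illustrative:

Code:
-- Describe the access path for the delete (literal key for the test)
EXPLAIN PLAN SET QUERYNO = 1 FOR
  DELETE FROM VMA1.MYTABLE WHERE KEY_FIELD = 'SOMEKEY';

-- 'R' = table space scan, 'I' = index access via ACCESSNAME
SELECT QUERYNO, ACCESSTYPE, MATCHCOLS, ACCESSNAME, TNAME
  FROM PLAN_TABLE
 WHERE QUERYNO = 1;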
sameer

New User


Joined: 10 Mar 2005
Posts: 41
Location: United States Of America

PostPosted: Mon Dec 08, 2008 4:56 pm

One more thing I have observed: while the program is running, we are not able to access the database at all, not even for browse.

1) It looks like the program is acquiring a lock on the entire table until it processes the last record.

2) Though the program has logic to commit every 50 records, it has been observed that it is committing every second. (I ran it with only 30 records, and they were committed.)

Where should we check the commit frequency, other than in the program?
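
One way to see exactly what the job is holding while it runs is the DISPLAY DATABASE command with the LOCKS keyword; the database and table space names below are illustrative. A gross X lock at the table space level in the output would confirm that the whole table is locked; lock escalation (the LOCKMAX/NUMLKTS thresholds) and the bind's RELEASE(DEALLOCATE) are two common reasons.

Code:
-DISPLAY DATABASE(VMADB01) SPACENAM(VMATS01) LOCKS LIMIT(*)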
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Mon Dec 08, 2008 9:31 pm

Hello,

I believe you need to change your focus. The frequency of the COMMITs is not your performance problem.

Quote:
It looks like the program is acquiring a lock on the entire table until it processes the last record
Not a good practice... Every time I have encountered an entire-table lock, it has been due to poor design.

Quote:
Though the program has logic to commit every 50 records, it has been observed that it is committing every second
Earlier, you posted that it takes 1 second to accomplish 1 delete. I believe your first order of business needs to be determining what is really going on.

Right now it appears that something is running undesirably, but little or nothing is known about where the resources are being used.
acevedo

Active User


Joined: 11 May 2005
Posts: 344
Location: Spain

PostPosted: Tue Dec 09, 2008 1:09 pm

arcvns wrote:
Quote:
AFAIK, the overall processing time is inversely proportional to commit frequency.
My bad... I meant "directly". My own experience tells me that too many commits lead to too much processing time.



Yes, but the OP talked about -911, and we know nothing about other transactions/jobs accessing the DB2 table. In that case committing will avoid the -911.

I've read 1 second per delete... yep, that's a lot.

Maybe, just maybe, another approach is to unload the table and (using COBOL, SORT or SYNCSORT) build a file with only those rows that must be kept... then LOAD with REPLACE. Again, I know nothing about other access to the table.
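
A sketch of that unload-filter-reload approach with the DB2 utilities; the filtering itself happens outside DB2 (for example, a SORT JOINKEYS step that keeps only unload records whose key is not in the delete file). The database, table space, and table names are illustrative, and as noted it assumes nothing else touches the table during the run:

Code:
-- 1) Unload the current table contents to SYSREC
UNLOAD TABLESPACE VMADB01.VMATS01
  FROM TABLE VMA1.MYTABLE

-- 2) (outside DB2) filter the unload file, dropping rows to delete

-- 3) Reload only the surviving rows, replacing the table contents
LOAD DATA INDDN(SYSREC) REPLACE LOG NO
  INTO TABLE VMA1.MYTABLE

LOG NO avoids logging every reloaded row but leaves the table space in COPY-pending until an image copy is taken.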
dharmendra_kp

New User


Joined: 10 Mar 2005
Posts: 33
Location: Lewiston

PostPosted: Tue Dec 09, 2008 5:12 pm

I presume the input file does not contain the exact key. First the program generates the key from the input file, and then it deletes; in the process of generating the key, it may be going into a loop. It also appears there is no index on the key field. You may also need to check whether the table space has gone into check-pending status. Otherwise there is no reason why it should take so much time.
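
On the check-pending point: the DISPLAY DATABASE command with the RESTRICT keyword lists table spaces in a restricted state such as CHKP; the database name here is illustrative:

Code:
-DISPLAY DATABASE(VMADB01) SPACENAM(*) RESTRICT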
umasankarmf

New User


Joined: 08 Sep 2008
Posts: 43
Location: india

PostPosted: Tue Dec 09, 2008 5:37 pm

Hi dharmendra,
In my view, you are absolutely correct.