DB2 Performance issue


IBM Mainframe Forums -> DB2
mlp

New User


Joined: 23 Sep 2005
Posts: 91

PostPosted: Tue Oct 26, 2010 12:27 pm

I have a DB2 table with millions of rows; a few online applications and many batch applications use this table for their CRUD operations. Naturally, the performance of the online and batch applications has degraded over a period of time. The table also hosts redundant rows that are used as history data and have a very long retention period. These rows are rarely used, and because the retention period is so long, the number of rows keeps growing day by day.

I think rule-based partitioning of the tablespace could improve performance.

Please correct me if I am wrong, and please suggest any other measures you can think of.
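
To illustrate what I mean by rule-based partitioning - MYTAB, TRAN_DATE and MYDB.MYTS are made-up names, the exact DDL depends on the DB2 version, and the tablespace would have to be defined with a matching number of partitions - something along these lines:

Code:
CREATE TABLE MYTAB
      (ACCT_ID    INTEGER      NOT NULL,
       TRAN_DATE  DATE         NOT NULL,
       TRAN_DATA  VARCHAR(200))
  IN MYDB.MYTS
  PARTITION BY (TRAN_DATE)
     (PARTITION 1 ENDING AT ('2008-12-31'),
      PARTITION 2 ENDING AT ('2009-12-31'),
      PARTITION 3 ENDING AT ('2010-12-31'),
      PARTITION 4 ENDING AT ('9999-12-31'));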
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Tue Oct 26, 2010 7:22 pm

Hello,

Quote:
Naturally, the performance of the online and batch applications has degraded over a period of time
Why would someone believe it is natural for performance to degrade over time?

As more is learned about the production environment, performance should improve, not get worse.

If performance worsens, it is because of bad design (and improper high-volume testing), really bad code being added to a once solid environment, or scheduling conflicts that should actually be managed, not just allowed to drag the system down.

Quote:
These rows are rarely used.
If there are lots of rows that are rarely used, they should not impact performance - unless there are lots of SQL queries that have to look at them only to ignore them. . .

Quote:
I think rule-based partitioning of the tablespace could improve performance.
Why? Suggest there is a bigger payoff in determining which tables have proper keys defined to support the longer-running queries. . .
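
For example, EXPLAIN one of the long-running queries and look at the access path that was chosen - the query, table and column names below are made up, and a PLAN_TABLE has to exist under your id:

Code:
EXPLAIN PLAN SET QUERYNO = 1 FOR
  SELECT ACCT_ID, TRAN_DATA
    FROM MYTAB
   WHERE ACCT_ID  = 12345
     AND TRAN_DATE > CURRENT DATE - 30 DAYS;

-- ACCESSTYPE 'R' means a tablespace scan; 'I' means index access via ACCESSNAME
SELECT QUERYNO, TNAME, ACCESSTYPE, ACCESSNAME, MATCHCOLS, PREFETCH
  FROM PLAN_TABLE
 WHERE QUERYNO = 1
 ORDER BY QBLOCKNO, PLANNO;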
dbzTHEdinosauer

Global Moderator


Joined: 20 Oct 2006
Posts: 6966
Location: porcelain throne

PostPosted: Tue Oct 26, 2010 8:00 pm

What is the ratio of selects requiring the history to those that do not?

Normally, history is offloaded to another table, especially if the history is not required in the core SQL.
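
As a rough sketch of that kind of offload - names and retention rule made up, and in practice you would delete in commit-sized chunks rather than in one shot:

Code:
INSERT INTO MYTAB_HIST
  SELECT *
    FROM MYTAB
   WHERE TRAN_DATE < CURRENT DATE - 2 YEARS;

DELETE FROM MYTAB
 WHERE TRAN_DATE < CURRENT DATE - 2 YEARS;

COMMIT;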
mlp

New User


Joined: 23 Sep 2005
Posts: 91

PostPosted: Wed Oct 27, 2010 8:49 am

I agree that history should be maintained in a separate table. But that's the way it is, and as far as the number of selects performed on the history part is concerned, that would be 2-3 out of 100 attempts.

Also, the table has good indexes; the columns that are regularly used in WHERE clauses are part of those indexes. When I say "naturally the performance of the online and batch applications has degraded over a period of time", I mean that over time the number of users (applications) of this table has increased, and the sheer load put on this table is the cause of the degradation in performance. That's what I think. Correct me if I am wrong.

I think partitioning would divide the contiguous tablespace into multiple parts, and by that I might be able to improve the availability of pages. Please let me know if this will improve availability.
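
For what it is worth, this is the kind of catalog query I would use to confirm the indexes really are in good shape - the creator/table names are placeholders, and the numbers are only meaningful if RUNSTATS is reasonably current:

Code:
-- low CLUSTERRATIOF or a growing NLEVELS would point at REORG / index review
SELECT NAME, CLUSTERING, CLUSTERRATIOF, FULLKEYCARDF, NLEVELS, STATSTIME
  FROM SYSIBM.SYSINDEXES
 WHERE TBCREATOR = 'MYSCHEMA'
   AND TBNAME    = 'MYTAB';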
GuyC

Senior Member


Joined: 11 Aug 2009
Posts: 1281
Location: Belgium

PostPosted: Wed Oct 27, 2010 1:26 pm

It all depends on where the performance degradation comes from:
- more I/O (qualifying rows are spread out over more pages)
- more getpages (more rows qualifying after index filtering, more levels (NLEVELS) in the indexes)
- more rows returned to the program

More I/O can possibly be "solved" by partitioning and/or clustering, or by larger bufferpools.
Index filtering can possibly be helped by partition filtering.
NLEVELS can be brought down by partitioned indexes (if supported by partition filtering).
More rows qualifying after index and partition filtering can be addressed by an index/partition review and/or a program review (more stage 1 predicates - see the sketch below).
More rows returned can only be solved by program changes.

Partitioning might also be needed to support parallelism, if you are going in that direction to improve performance (i.e. elapsed time).
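
As a made-up example of what I mean by more stage 1 predicates, turning a stage 2 predicate into a stage 1, indexable one:

Code:
-- stage 2 / non-indexable: a function applied to the column defeats index matching
SELECT ACCT_ID, TRAN_DATA
  FROM MYTAB
 WHERE YEAR(TRAN_DATE) = 2010;

-- stage 1 / indexable: compare the column itself against a range
SELECT ACCT_ID, TRAN_DATA
  FROM MYTAB
 WHERE TRAN_DATE BETWEEN '2010-01-01' AND '2010-12-31';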
Nimesh.Srivastava

New User


Joined: 30 Nov 2006
Posts: 78
Location: SINGAPORE

PostPosted: Thu Oct 28, 2010 8:11 am

mlp,
What is the concurrency of the batch and online jobs - are they mutually exclusive, i.e. when batch jobs are executing there wouldn't be any online jobs hitting the table, and vice versa?
Next, do the online jobs only require LIVE transactional data, or do they too need to access the historical redundant data?