INSERT in DB2 code is taking a lot of CPU time


IBM Mainframe Forums -> DB2
priyankassharma

New User


Joined: 29 May 2007
Posts: 16
Location: mumbai

PostPosted: Mon Aug 06, 2007 11:28 pm

Hi!

One of my programs is doing an INSERT into a table, and the INSERT has been taking a lot of time. The table has a foreign key referencing another table in a different database.

The two tables are in different databases; could that be the reason the INSERT is taking so much time?

I wonder if the issue is related to the access path.

Please assist.

Thanks,
Priyanka
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Mon Aug 06, 2007 11:49 pm

Hello,

What is considered a "lot of time"? Is this elapsed time or CPU time (your subject mentions CPU time, but the text does not)? If you observe excessive CPU time, how are you monitoring it?

Access path should not come into play when INSERTing a row - the access path is for retrieving rows. The validation of the foreign key should be direct, not requiring lots of work by the optimizer.

If you post the table definitions and the insert code we may be able to spot something.

Does this insert always run unacceptably long or does it vary throughout the day? If you ran your job at 2 in the morning, does your job run "better"?
priyankassharma

New User


Joined: 29 May 2007
Posts: 16
Location: mumbai

PostPosted: Tue Aug 07, 2007 7:47 am

Hi Dick,

Thanks for the updates.

I got CPU statistics from the APPTUNE tool that we use to monitor the performance of SQL queries.

The INSERT query shows high CPU consumption almost every time it runs. The job runs quite a few times a day, as it is dataset-triggered.

There are other INSERTs as well, but this is the one taking most of the time.

Actually, I asked about the access path because the INSERT is done after fetching data from another table that resides in a different database than this one.

I will post the structure of the table as well.

When you said that the foreign key validation should be direct, does that mean the tables should reside in the same database? Please confirm.

Thanks again.
Priyanka
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Tue Aug 07, 2007 8:20 am

Hello,

Quote:
does that mean that the tables should reside in the same database
Not necessarily. What I meant was that the foreign key should not need much work from the optimizer to process - that validation should require few resources.

Does your measurement tool have the flexibility to differentiate which processes are using which resources? I suspect that if there are complex fetches for the data needed before the INSERT is issued, that is where the time is being spent.

Do you have a production size test environment (or might one be built for testing this)? It might be interesting to run this "insert" process with the actual INSERT bypassed and see how much the time-used changes.
priyankassharma

New User


Joined: 29 May 2007
Posts: 16
Location: mumbai

PostPosted: Tue Aug 07, 2007 4:28 pm

Thanks for the updates!

Please find below the table definitions:
Dependent Table/Column Constrnt R Referenced Table
---1----v----2----v----3----v----4----v----5----v----6----v
MUS.T5814 SE010008 R MUS.T8134
FK_SAID FK_SAID
FK_DATA_TYPE FK_DATA_TYPE
FK_INPUT_PROC_DT INPUT_PROCESS_DT
FK_INPUT_PROC_TM INPUT_PROCESS_TM

The average RECLENGTH is 4056 and the tablespace has a PCTFREE of 10. We also REORG the table every week, where previously we did it monthly.
Do you think that changing PCTFREE could produce some positive results? If yes, what would be an ideal PCTFREE value where the average RECLENGTH is 4056 and the average number of records inserted is over a million?
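A minimal sketch of the PCTFREE arithmetic, assuming 4056 bytes of usable row space in a 4 KB DB2 page (that page-capacity figure is an assumption, not something stated in this thread):

```python
# Rough rows-per-page arithmetic for a DB2 4K tablespace.
MAX_ROW_SPACE = 4056  # assumed usable row space in a 4 KB DB2 page

def rows_per_page(avg_reclength: int, pctfree: int) -> int:
    """Rows that fit in a page after PCTFREE is reserved at LOAD/REORG."""
    usable = MAX_ROW_SPACE * (100 - pctfree) // 100
    # DB2 always stores at least one row per data page.
    return max(1, usable // avg_reclength)

# With a 4056-byte average record, exactly one row fits per page
# whether PCTFREE is 0 or 10, so tuning PCTFREE cannot help here.
print(rows_per_page(4056, 0))    # 1
print(rows_per_page(4056, 10))   # 1
# PCTFREE only starts to matter once rows are small enough to share
# a page, e.g. if the row averaged ~432 bytes:
print(rows_per_page(432, 10))    # 8
```

In other words, at this row size PCTFREE is effectively moot for inserts; the bigger lever is shrinking the row.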

Thanks,
Priyanka
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Tue Aug 07, 2007 6:05 pm

Hello,

Please re-read my prior reply. . . and answer the last 2 items.

Thanks.
priyankassharma

New User


Joined: 29 May 2007
Posts: 16
Location: mumbai

PostPosted: Tue Aug 07, 2007 10:17 pm

My apologies - I just missed answering your questions.

- The tool just checks how much CPU was consumed by each SQL query. The fetching is based on the referential integrity I sent you in the table definitions earlier.

- We have a test environment where I can test with production data. I will have to create the test tables under the same scenario, i.e. in different databases.

Thanks,
Priyanka
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Wed Aug 08, 2007 5:21 am

Hello,

It may help if you post the actual query(ies) that are performing poorly.

When you INSERT into this table of 4K bytes, how many columns are being filled with data? If the INSERT needs to do many, many data movements and some of them are numeric conversions (say from zoned decimal to packed), that will surely increase the CPU time. If the INSERTs that do not use much CPU time move only a few character fields, they would use less CPU time.
priyankassharma

New User


Joined: 29 May 2007
Posts: 16
Location: mumbai

PostPosted: Fri Aug 10, 2007 11:59 pm

Hi,

The actual query is :
EXEC SQL
    INSERT INTO EMP_TRAN_DATA
           (FK_FAID
           ,FK_DATA_TYPE
           ,FK_INPUT_PROC_DT
           ,FK_INPUT_PROC_TM
           ,SEQ_NO
           ,VARIABLE_DATA_STRI)
    VALUES (:EMP-TRAN-DATA.FK-FAID
           ,:EMP-TRAN-DATA.FK-DATA-TYPE
           ,:EMP-TRAN-DATA.FK-INPUT-PROC-DT
           ,:EMP-TRAN-DATA.FK-INPUT-PROC-TM
           ,:SEQ-NO
           ,:VARIABLE-DATA-STRI)
END-EXEC.

I have checked the code; there aren't any moves from decimal to packed values. The VARCHAR is declared as 4024, but the values in the table are at most 400 characters for this string.

Please suggest.

Thanks,
Priyanka
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Sat Aug 11, 2007 12:20 am

Hello Priyanka,

Does the INSERT name all of the columns in the table? What keys are defined for this table? When an INSERT is processed, there is extra overhead for adding the keys.

Have you been able to try this yet?

Quote:
- We have a test environment where I can test with production data. I will have to create the test tables under the same scenario, i.e. in different databases.


I am still interested in seeing the statistics from a run that actually inserts rows versus one that does not issue the INSERT.

Quote:
It might be interesting to run this "insert" process with the actual INSERT bypassed and see how much the time-used changes.


It might also be interesting to run a 3rd variation where the variable string is not mentioned. That would give us 3 sets of numbers to look at.

We would want all 3 tests/stats in the Test environment so that the "base" would be consistent.
priyankassharma

New User


Joined: 29 May 2007
Posts: 16
Location: mumbai

PostPosted: Sat Aug 11, 2007 3:33 pm

Hi,

Yes, the INSERT mentioned above contains all the columns from the table.

I didn't get a chance to run the job in test: there is not just one INSERT in the code but multiple, on different tables, so creating the setup will take some time.

I have noticed that the code does not have very complex data movement; moreover, the application has been running for the past 10 years and the problem occurred suddenly.

Would provide you the

Thanks,
Priyanka
priyankassharma

New User


Joined: 29 May 2007
Posts: 16
Location: mumbai

PostPosted: Mon Aug 13, 2007 6:46 pm

Could you please help me understand the meaning of the report results below, which I generated from the tool for the respective INSERT query:

This INSERT statement was executed 695864 times during the reported

The statement's average IN-SQL elapsed time was 00:00.00040 and the
average IN-SQL CPU time per statement was 00:00.00018.

The number of GETPAGEs required for this statement averages 5.91,
which is excellent (less than 10 ). 5.82 pages, or 98.5 %, are
retrieved from the buffer pool without incurring the cost of an I/O.
This buffer pool hit ratio is excellent (greater than 95 %).

This statement spends 44.4 % of its time in CPU, which is the single
largest component of its elapsed time.


This statement also spends 27.5 % of its time in OTHER WAIT time.
It waits 27.5 % of its elapsed time for Log Write I/O


Thanks,
Priyanka
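Multiplying those per-statement averages out gives a feel for the totals implied by the report (the reporting interval itself is not stated, so these are totals for whatever that interval was):

```python
# Totals implied by the per-statement averages in the report above.
executions = 695_864
avg_elapsed_s = 0.00040  # IN-SQL elapsed time per execution
avg_cpu_s = 0.00018      # IN-SQL CPU time per execution
log_wait_share = 0.275   # fraction of elapsed time waiting on Log Write I/O

total_elapsed = executions * avg_elapsed_s       # ~278.3 seconds
total_cpu = executions * avg_cpu_s               # ~125.3 seconds
total_log_wait = total_elapsed * log_wait_share  # ~76.5 seconds

print(f"{total_elapsed:.1f}s elapsed, {total_cpu:.1f}s CPU, "
      f"{total_log_wait:.1f}s log-write wait")
```

So each execution is cheap; whether ~125 CPU seconds in total is a problem depends entirely on how long the reporting interval was.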
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Mon Aug 13, 2007 6:59 pm

Hello,

For us to be helpful, you need to read the replies and provide the info requested. Please re-read my reply from Aug 10 and provide all of the requested info.

Additionally, when you post new info, be complete.
Quote:
This INSERT statement was executed 695864 times during the reported
What was the report interval - it can make a very big difference.
Quote:
This statement spends 44.4 % of its time in CPU
This is not necessarily bad. It just means that the transaction did very few i/o's. If the time was not spent in cpu, it would require much more i/o.

Many places are coming to the incorrect conclusion that their processes require "too much" CPU or I/O - it might be nice to use less, but if you do lots of adds/deletes, have lots of secondary index definitions, and support unlimited query volumes, things will take longer. . .
dbzTHEdinosauer

Global Moderator


Joined: 20 Oct 2006
Posts: 6966
Location: porcelain throne

PostPosted: Mon Aug 13, 2007 8:59 pm

you mentioned that you have a 4056-byte row size - thus 1 row per page - yet you use only 400 or so characters of the varchar column defined at 4024 - your rows are larger than necessary and too much space is unused. Do you anticipate a defined future need for a 4024 varchar?

are you using compression? have you thought of reducing the size of the varchar to something less in order to allow more than one row per page?
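A rough sketch of that space argument (the 4056-byte page capacity is an assumption for a 4 KB page, and the 432-byte row is an illustrative size for a trimmed varchar):

```python
# Pages needed for ~1 million rows at two average row sizes.
MAX_ROW_SPACE = 4056  # assumed usable row space in a 4 KB DB2 page

def pages_needed(rows: int, avg_reclength: int) -> int:
    per_page = max(1, MAX_ROW_SPACE // avg_reclength)
    return -(-rows // per_page)  # ceiling division

rows = 1_000_000
print(pages_needed(rows, 4056))  # 1000000 -- one row per page today
print(pages_needed(rows, 432))   # 111112 -- ~9 rows/page if trimmed
```

Fewer pages means fewer GETPAGEs, less buffer pool pressure, and less I/O for the same data.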

you also mentioned Log Write I/O. maybe the log can be tuned? (based on your selections for varchar size, I would not be surprised if a non-optimal log size has also been selected). Also:
Craig Mullins, DB2 Developer's Guide wrote:
Ensure that your log data sets are on different volumes and on separate channels. Avoid placing more than one active log data set on the same disk volume. Otherwise, the whole reason for having dual active logs is negated and the overall performance of DB2 will be impaired significantly during the log offloading process.



I doubt the referential integrity is causing your problems. All that activity should be against indexes, which should be in memory (buffer pools).
priyankassharma

New User


Joined: 29 May 2007
Posts: 16
Location: mumbai

PostPosted: Mon Aug 13, 2007 10:05 pm

Thanks for all the help.

The primary keys on the table are the following:
FK_FAID
FK_DATA_TYPE
FK_INPUT_PROC_DT
FK_INPUT_PROC_TM
SEQ_NO

- I have sent the above statistics from the tool.
- There seem to be some issues running the queries in the test region; that's why I couldn't provide any outputs.

- I still have not understood the part regarding Log Write I/O. Could you please elaborate on this a little more?

Thanks,
Priyanka
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Mon Aug 13, 2007 10:14 pm

Hello,

Yes, you sent some statistics. What you sent is not complete enough to be of use.
Quote:
This INSERT statement was executed 695864 times during the reported

What was the report interval - it can make a very big difference.
Nearly 700k inserts for one week is nothing - for one minute 700k is a major impact . . . .
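To put hypothetical numbers on that (the two intervals here are illustrations, not what the tool reported):

```python
# The same execution count read against two hypothetical intervals.
executions = 695_864

per_week = executions / (7 * 24 * 3600)  # ~1.15 inserts/second
per_minute = executions / 60             # ~11,598 inserts/second

print(f"{per_week:.2f}/s over a week, {per_minute:,.0f}/s over a minute")
```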

There is only one primary key for a table.
Quote:
The primary keys on the table are following:
FK_FAID
FK_DATA_TYPE
FK_INPUT_PROC_DT
FK_INPUT_PROC_TM
SEQ_NO
Do you mean that all of these fields make up the primary key? Are there any other keys defined for the table?
priyankassharma

New User


Joined: 29 May 2007
Posts: 16
Location: mumbai

PostPosted: Mon Aug 13, 2007 11:20 pm

Hi,

Yes, all of those columns together make up the primary key.

I myself have not fully understood the results generated by the tool. The INSERT happens quite a few times a day, depending on the number of times data comes in again with updates.

There are numerous INSERTs happening in the code, on various tables, depending on the kind of data received.

Thanks for all the help.

- I would also like your suggestion on another aspect: PCTFREE, which is 10 now. Could you please confirm whether it is really required in this scenario?

Thanks,
Priyanka
priyankassharma

New User


Joined: 29 May 2007
Posts: 16
Location: mumbai

PostPosted: Mon Aug 13, 2007 11:37 pm

Hi,

Is there some way I can find out the current field values for the log, and can I alter them?

Thanks