TS70363
New User
Joined: 07 Apr 2010 Posts: 94 Location: Bangalore, India
Hi..
Through my COBOL application program, I am calling a native stored procedure using a host variable. The program has various other SQLs as well.
Code:
EXEC SQL
     CALL :HV-PROC-NAME
END-EXEC
1. I have to find out the schema/qualifier name and prefix it to the stored procedure name before executing the CALL statement. Can I avoid doing so? If yes, what changes are required?
2. It is a batch process, and the stored procedure CALL statement is consuming very high CPU. We have the IBM APA tool to monitor the batches, and I noticed that the CALL takes 35-40% of the overall CPU of the batch. BMC Apptune (a DB2 performance monitoring tool) also shows as many intermediate binds as the number of records processed. The SP has 35 input parameters and 3 output parameters.
Can anyone suggest the reason behind this behaviour?
Is it because the call is through a host variable prefixed with the qualifier?
Is it due to the high number of input/output parameters?
Where is the intermediate bind occurring?
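For reference, the call is currently built roughly like this (a sketch only: WS-SCHEMA-NAME, WS-PROC-NAME and their values are made-up names; only HV-PROC-NAME is from the real program):
Code:
* Illustrative sketch - the schema and procedure names are invented;
* in the real program they come from reference data
    MOVE 'SCHEMA01' TO WS-SCHEMA-NAME
    MOVE 'PROC01'   TO WS-PROC-NAME
    STRING WS-SCHEMA-NAME DELIMITED BY SPACE
           '.'            DELIMITED BY SIZE
           WS-PROC-NAME   DELIMITED BY SPACE
      INTO HV-PROC-NAME
    END-STRING
    EXEC SQL
         CALL :HV-PROC-NAME
    END-EXEC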
Robert Sample
Global Moderator
Joined: 06 Jun 2008 Posts: 8700 Location: Dubuque, Iowa, USA
Quote:
The stored procedure CALL statement is consuming a very high CPU.
This is pretty much a meaningless statement. Quantify things when you post. If "very high CPU" is 35 to 40 CPU seconds out of 100 total, you don't really have much room to improve things. If it is 700 to 800 out of 2000, that is different.
Quote:
Is it because the call is through a host variable prefixed with the qualifier?
Is it due to the high number of input/output parameters?
What does "it" refer to here -- the 35 to 40% CPU usage or the intermediate binds? You are not clear which you're asking about.
What do your site support group and your DBA(s) say?
TS70363
New User
Joined: 07 Apr 2010 Posts: 94 Location: Bangalore, India
Hi,
Records read: 100000
Records processed: 100000
CPU for the batch: 7.9 min
Number of COBOL DB2 modules involved in dynamic calling - 25+
The module containing the stored procedure call is the 10th module to be executed in the series.
Is the poor performance of the CALL statement due to the call being made through a host variable prefixed with the qualifier?
Is the poor performance due to the high number of input/output parameters?
I had a word with the DBAs, but they are unable to figure it out. We can only see that the statement performing poorly is CALL :hv-proc-name.
Rohit Umarjikar
Global Moderator
Joined: 21 Sep 2010 Posts: 3076 Location: NYC,USA
Why do you have a native SP called through batch? Why do you not convert it to a batch sub-module? Do you know stored procedures execute in a different address space? How would the DBAs or the site approve such an architecture, and with what advantages?
TS70363
New User
Joined: 07 Apr 2010 Posts: 94 Location: Bangalore, India
Quote:
Why do you have a native SP called through batch?
This has been the design for years, as this module is used by both batch and online.
Quote:
Why do you not convert it to a batch sub-module?
Yes, I can. But changing it to a sub-module with the SQLs inside does not answer my problem.
I am more interested in knowing:
1. How do I get rid of the qualifier from the stored procedure name before invoking it?
2. Does the poor performance of the CALL statement have any relation to the way it is invoked, i.e. through a host variable prefixed with a qualifier?
3. Will having many parameters in the SP degrade performance?
4. Will an SP that is invoked through a host variable be rebound on every call?
Rohit Umarjikar
Global Moderator
Joined: 21 Sep 2010 Posts: 3076 Location: NYC,USA
Quote:
This has been the design for years, as this module is used by both batch and online.
That doesn't mean it is the right thing to do forever. What stops you from staging the module as a sub-program and testing it yourself, rather than speculating about the culprit? It might take just an hour to test and could answer all your basic questions, and I wonder why the DBAs could not answer them.
TS70363
New User
Joined: 07 Apr 2010 Posts: 94 Location: Bangalore, India
Hi Rohit,
I understand your point.
Quote:
Why do you not convert it to a batch sub-module?
I am ready to make the change. Please confirm: you are asking me to move the SQLs from the stored procedure into my COBOL program and test the performance?
I have 12 such stored procedures, whose names are picked up from a lookup DB2 table (a kind of reference data table) based on the key passed from the COBOL code. The SP name is then prefixed with the schema name and used in the CALL statement. Each SP has at least 3 SQLs.
Moving all 36+ SQLs into the program will require a significant amount of time, and I will have to justify the time I take to move them to the COBOL sub-program. Hence I am looking for answers that I can take to my DBAs and Project Manager.
I can confirm the results after I have made the change, but I will be really happy and grateful if you could give me the information I am looking for.
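Roughly, the lookup works like this (a sketch; REF_PROC_LOOKUP, PROC_KEY, PROC_NAME and the host variables are invented names, not our real reference table):
Code:
* Sketch only - table, column and host-variable names are invented
    EXEC SQL
         SELECT PROC_NAME
           INTO :WS-PROC-NAME
           FROM REF_PROC_LOOKUP
          WHERE PROC_KEY = :WS-PROC-KEY
    END-EXEC
* the schema name is then prefixed and the result moved to
* HV-PROC-NAME before the EXEC SQL CALL :HV-PROC-NAME shown earlier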
Robert Sample
Global Moderator
Joined: 06 Jun 2008 Posts: 8700 Location: Dubuque, Iowa, USA
Quote:
Is the poor performance of the CALL statement due to the call being made through a host variable prefixed with the qualifier?
Is the poor performance due to the high number of input/output parameters?
The answer to both of your questions is ... maybe. Prefixing the host variable with qualifier won't cause poor performance, but it could be an indicator of complex logic in the program that COULD cause poor performance. A high number of input (and / or output) parameters is not, per se, going to cause poor performance. HOWEVER, a high number of input / output parameters may indicate some very complex logic in the program, which in turn COULD lead to high CPU usage and poor performance.
7.9 minutes of CPU time is 474 seconds, or about .00474 CPU seconds per record. For a COBOL / VSAM batch program, this would be abysmally poor performance -- but for DB2, depending upon how the code is written, it might merely be poor performance (or even acceptable performance). What do other batch DB2 programs AT YOUR SITE use for CPU time per record?
Rohit Umarjikar
Global Moderator
Joined: 21 Sep 2010 Posts: 3076 Location: NYC,USA
You don't have to make the change for all 12. Find one SP that you think is a candidate for the poor performance, save its calling parameters somewhere, then put the same input/output parms and SQL in a COBOL DB2 sub-program and mock the same situation. I don't think this would take much time, and even if it takes some, it is well justified, because you then get answers to all your questions and can propose making those SPs batch or CICS sub-programs, which is the solution going forward.
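A minimal sketch of that staging test, assuming invented program, table and field names (the real SP has 35 inputs and 3 outputs):
Code:
       IDENTIFICATION DIVISION.
       PROGRAM-ID. SPSTAGE1.
      * Invented skeleton: same parameter list as the SP, same SQL inside
       DATA DIVISION.
       WORKING-STORAGE SECTION.
           EXEC SQL INCLUDE SQLCA END-EXEC.
       LINKAGE SECTION.
       01  LS-IN-PARMS.
           05  LS-IN-PARM-01        PIC X(10).
      *    ... remaining 34 input parameters ...
       01  LS-OUT-PARMS.
           05  LS-OUT-PARM-01       PIC S9(9) COMP.
      *    ... remaining 2 output parameters ...
       PROCEDURE DIVISION USING LS-IN-PARMS LS-OUT-PARMS.
      * one of the SQLs copied from the stored procedure body
           EXEC SQL
                SELECT COL1
                  INTO :LS-OUT-PARM-01
                  FROM SOME_TABLE
                 WHERE KEY_COL = :LS-IN-PARM-01
           END-EXEC
           GOBACK.
The batch program would then call it statically, e.g. CALL 'SPSTAGE1' USING WS-IN-PARMS WS-OUT-PARMS, in place of the EXEC SQL CALL.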
GuyC
Senior Member
Joined: 11 Aug 2009 Posts: 1281 Location: Belgium
Rohit Umarjikar wrote:
Why do you have a native SP called through batch? Why do you not convert it to a batch sub-module? Do you know stored procedures execute in a different address space? How would the DBAs or the site approve such an architecture, and with what advantages?
Native SPs can be processed on zIIP and zAAP processors.
Rohit Umarjikar
Global Moderator
Joined: 21 Sep 2010 Posts: 3076 Location: NYC,USA
See "Performance" (Part 4) of DB2 9 for z/OS Stored Procedures: Through the CALL and Beyond. I am not saying that calling an SP is not possible or is a crime, but why do it when we can do better with sub-programs? If the indexes are proper and the joins are good, the SQL should execute quickly with no issues, hence I strongly suggest the TS test by staging the same queries in a sub-program. At my shop, SPs are really meant to be called from a distributed environment, but there is no such restriction for others.
TS70363
New User
Joined: 07 Apr 2010 Posts: 94 Location: Bangalore, India
Thanks all for your valuable suggestions.
Quote:
What do other batch DB2 programs AT YOUR SITE use for CPU time per record?
We measure in MSUs, and it is a minimum of 5000 records per MSU.
Quote:
You don't have to make the change for all 12. Find one SP that you think is a candidate for the poor performance, save its calling parameters somewhere, then put the same input/output parms and SQL in a COBOL DB2 sub-program and mock the same situation. I don't think this would take much time, and even if it takes some, it is well justified.
I made the change and moved the SQLs from one of the SPs into the program. I then did a RUNSTATS and rebind, and it proved fruitful: I got a run rate of 7 CPU minutes for 200000 records.
But the questions still remain unanswered. Why does the stored procedure perform poorly in batch compared to embedded SQLs?
Robert Sample
Global Moderator
Joined: 06 Jun 2008 Posts: 8700 Location: Dubuque, Iowa, USA
From the DB2 for z/OS V10.0.0 Managing DB2 Performance manual:
Quote:
You can improve the performance of stored procedures and user-defined functions by following certain recommendations.
Procedure
To improve the performance of stored procedures and user-defined functions, use any of the following recommendations:
•Update the ASUTIME column of the SYSIBM.SYSROUTINES catalog table to set processor limits for each stored procedure or function. The limits that you specify enable DB2 to cancel procedures or functions that loop.
•Limit the number of times that a stored procedure can terminate abnormally by specifying one of the following options:
◦The MAX ABEND COUNT field on installation panel DSNTIPX. The limit that you specify applies to all stored procedures and prevents a problem procedure from overwhelming the system with abend dump processing.
◦The STOP AFTER FAILURES option on the ALTER or CREATE PROCEDURE statement. The limit that you specify overrides the system limit that is specified in the MAX ABEND COUNT field to specify limits for specific stored procedures.
•Maximize the number of procedures or functions that can run concurrently in a WLM-established stored procedure address space.
•Group your stored procedures in WLM application environments. For more information, see Defining application environments.
•Use indicator variables in your programs and pass the indicator variables as parameters. When output parameters occupy a large amount of storage, passing the entire storage areas to your stored procedure can be wasteful. However, you can use indicator variables in the calling program to pass only a two-byte area to the stored procedure and receive the entire area from the stored procedure.
•Set a high-enough priority for the WLM-managed stored procedures address spaces.
•Set the performance-related options appropriately in the CREATE PROCEDURE statement. The following table shows the recommended values.
Table 1. Recommended values for performance-related options in the CREATE PROCEDURE statement:
PROGRAM TYPE: SUB
STAY RESIDENT: YES
PARAMETER STYLE: GENERAL WITH NULLS or SQL
COMMIT ON RETURN: NO for stored procedures that are called locally; YES for stored procedures that are called from distributed client applications in environments where sysplex workload balancing is not used.
•Do not use the DSNTRACE DD statement in any of your stored procedures address space startup procedures. DSNTRACE is a facility that can be used to capture all trace messages for offline reference and diagnosis. However, DSNTRACE greatly increases the stored procedure initialization overhead. Also, DSNTRACE does not function in a multitasking environment because the CAF does not serialize access to the DSNTRACE trace data set.
•Specify a large enough value for the CACHERAC subsystem parameter on the DSNTIP installation panel. The CACHERAC parameter specifies how much storage to allocate for the caching of routine authorization information for all routines on the DB2 member.
•Set the CMTSTAT subsystem parameter to INACTIVE. This setting causes distributed threads to become inactive at commit when possible. The inactive threads become available for thread reuse, which reduces the amount of thread storage needed for the workload by reducing the number of distributed active threads.
•Convert external stored procedures to native SQL procedures whenever possible. The body of a native SQL procedure is written in SQL, and DB2 does not generate an associated C program for native stored procedures. Native procedures typically perform better and have more functionality than external procedures.
•Study your workload for external stored procedures and functions carefully. You can use DB2 Performance Expert or DB2 Performance Monitor to monitor stored procedures and user-defined functions.
•Use partitioned data set extended (PDSE) members for load libraries that contain stored procedures. By using PDSE members, you might eliminate the need to stop and start the stored procedures address space because of growth in load libraries, because the new extent information is dynamically updated. If a load library grows from additions or replacements, the library might have to be extended. If you use partitioned data set (PDS) members, load failures might occur because the new extent information is not available.
Maximizing the number of procedures or functions that run in an address space
You can improve the performance of stored procedures and user-defined functions by maximizing the number of procedures or functions that can run concurrently in a WLM-established stored procedures address space.
Assigning stored procedures and functions to WLM application environments
You can assign procedures to WLM environments to route the work that is associated with the procedures to specific address spaces.
Accounting for nested activities
The accounting class 1 and class 2 CPU and elapsed times for triggers, stored procedures, and user-defined functions are accumulated in separate fields and exclude any time accumulated in other nested activity.
Providing cost information, for accessing user-defined table functions, to DB2
User-defined table functions add additional access cost to the execution of an SQL statement.
Your site support group should be verifying the WLM service class for stored procedures -- if not assigned correctly, WLM could have a large impact on their performance. However, the mere fact that IBM has so much to say about improving stored procedure performance is an indication that other customers have expressed concerns about stored procedure performance as well.
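As an illustration of the options in that table (which apply to external procedures, not native SQL procedures), a definition might look roughly like this; the schema, procedure, load module, WLM environment and parameter names are placeholders, not anything from this thread:
Code:
-- Sketch only; all names and the parameter list are invented
CREATE PROCEDURE MYSCHEMA.MYPROC
       (IN  P_ACCT_NO  CHAR(10),
        OUT P_BALANCE  DECIMAL(11,2),
        OUT P_RETCODE  INTEGER)
       LANGUAGE COBOL
       EXTERNAL NAME MYPROCLM
       PARAMETER STYLE GENERAL WITH NULLS
       PROGRAM TYPE SUB
       STAY RESIDENT YES
       COMMIT ON RETURN NO
       WLM ENVIRONMENT WLMENV1;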
Rohit Umarjikar
Global Moderator
Joined: 21 Sep 2010 Posts: 3076 Location: NYC,USA
Quote:
I made the change and moved the SQLs from one of the SPs into the program. I then did a RUNSTATS and rebind, and it proved fruitful: I got a run rate of 7 CPU minutes for 200000 records.
That is still very high, so I am sure something is not good: either the logic in the query is poor or the indexes are not being used properly, and somewhere it may be doing a table space scan. Would you like to mask and share the query with us? Then someone can suggest a better-tuned query, if that is in scope. Also try doing a REORG before the RUNSTATS.
Quote:
But the questions still remain unanswered. Why does the stored procedure perform poorly in batch compared to embedded SQLs?
Unless the sub-program performs faster, as it is expected to, there is no point spending time on finding the answer to this.
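One way to check for a table space scan (a sketch only; the statement, QUERYNO and table names are placeholders, and a PLAN_TABLE must exist under your authorization ID) is to EXPLAIN the suspect statement and look at ACCESSTYPE:
Code:
-- Substitute the real poorly performing SQL for the placeholder query
EXPLAIN PLAN SET QUERYNO = 101 FOR
  SELECT COL1 FROM MYSCHEMA.SOME_TABLE WHERE KEY_COL = ?;

-- ACCESSTYPE = 'R' in PLAN_TABLE indicates a table space scan
SELECT QUERYNO, QBLOCKNO, PLANNO, TNAME, ACCESSTYPE, ACCESSNAME, MATCHCOLS
  FROM PLAN_TABLE
 WHERE QUERYNO = 101
 ORDER BY QBLOCKNO, PLANNO;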