We have a daily batch job that calls a program in which the query below takes a huge amount of time to execute; around 95-96% of that time is spent in I/O calls. Please suggest how I can optimize this query.
Please find the required details below.
WHERE XX_NUM = :XYZ-ABC.XX-NUM (Table host variables)
AND YY_NUM = :XYZ-ABC.YY-NUM (Table host variables)
AND X_YEAR = :WS-YEAR-CUR (Working-storage variables)
AND Y_MONTH = :WS-MONTH-CUR (Working-storage variables)
GROUP BY YY_NUM, XX_NUM,
THIS SELECT STATEMENT WAS EXECUTED 1605K TIMES DURING THE REPORTED INTERVAL.
THE STATEMENT'S AVERAGE IN-SQL ELAPSED TIME WAS 00:00.00486 AND THE
AVERAGE IN-SQL CPU TIME PER STATEMENT WAS 00:00.00011.
THE NUMBER OF GETPAGES REQUIRED FOR THIS STATEMENT AVERAGES 3.42,
WHICH IS EXCELLENT (LESS THAN 10). 0.00 PAGES, OR 0.0 %, ARE
RETRIEVED FROM THE BUFFER POOL WITHOUT INCURRING THE COST OF AN I/O.
THIS BUFFER POOL HIT RATIO IS POOR (LESS THAN OR EQUAL TO 50 %).
THIS STATEMENT SPENDS 97.9 % OF ITS TIME IN I/O, WHICH IS THE SINGLE
LARGEST COMPONENT OF ITS ELAPSED TIME.
APPTUNE LOGS:
CALL SECT STMT +--- SQL --+ +- TOTAL IN-SQL TIME -+
TYPE NO. NO. CALLS ERRS ELAPSED CPU GETPAGES
-------- ----- ------ ------ ----- ----------- ----------- --------
SELECT 30 15505 1502K 0 2:17:03.163 02:26.77652 5243681
AVERAGES: 00:00.00548 00:00.00010 3
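Given those numbers, the first thing worth checking is the access path. A minimal sketch of how that could be done with EXPLAIN, assuming the predicates from the fragment above; the table name MY_TABLE, the aggregated column X_AMT, and QUERYNO 100 are assumptions, not taken from the original post:

```sql
-- Capture the access path of a statement shaped like the problem query.
-- Host variables are replaced by parameter markers for EXPLAIN.
EXPLAIN PLAN SET QUERYNO = 100 FOR
  SELECT YY_NUM, XX_NUM, SUM(X_AMT)
    FROM MY_TABLE
   WHERE XX_NUM  = ?
     AND YY_NUM  = ?
     AND X_YEAR  = ?
     AND Y_MONTH = ?
   GROUP BY YY_NUM, XX_NUM;

-- Inspect the result: ACCESSTYPE 'I' with INDEXONLY = 'Y' means no data-page
-- access at all; INDEXONLY = 'N' means every call also touches data pages.
SELECT QUERYNO, ACCESSTYPE, MATCHCOLS, ACCESSNAME, INDEXONLY
  FROM PLAN_TABLE
 WHERE QUERYNO = 100;
```

With ~3.4 getpages per call and a 0% hit ratio, each execution is likely doing a few index getpages plus a data-page read, and it is the data-page reads that random access makes expensive.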
The statement was executed about 1.5 million times. The I/O does not depend on the query per se, but on the data/row access pattern and data/row distribution. If the data accessed is spread uniformly across the whole database/table, no query or buffering optimization will get rid of the I/O, unless you make the buffer pool capable of holding the number of rows returned per SELECT times 1,500,000; but then you will face problems somewhere else.
If X_AMT is not updated often, a possible solution could be to add X_AMT to the index as an INCLUDE column, so the statement can be satisfied index-only. You didn't mention the CLUSTERRATIO of the index, or whether you are allowed to manipulate the order of the input records/variables.
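A minimal sketch of the INCLUDE-column idea above. The table and index names are assumptions, and on DB2 for z/OS (V10 and later) INCLUDE is only allowed on a UNIQUE index, so this also assumes (XX_NUM, YY_NUM, X_YEAR, Y_MONTH) uniquely identifies a row:

```sql
-- Hypothetical index: the four predicate columns as the unique key,
-- with the aggregated amount carried along as a non-key INCLUDE column.
CREATE UNIQUE INDEX IX_MYTAB_01
    ON MY_TABLE (XX_NUM, YY_NUM, X_YEAR, Y_MONTH)
       INCLUDE (X_AMT);
```

With X_AMT carried in the index leaf pages, each execution can be resolved index-only, which removes the random data-page I/O that currently dominates the elapsed time.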
How many rows are in the table? How many of these rows must be "touched" by the process?
Sometimes unloading the data and working from a flat file performs far better than a query run over and over, or than some "monster query". Yours does not appear to be a "monster" by itself, but the number of executions would surely drive up the cost.
It may help someone help you if you briefly explain the entire process and what this particular query does within it. Possibly there are alternatives . . .
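One shape such an alternative could take, sketched under the same assumptions as before (MY_TABLE and SUM(X_AMT) are assumed names, the host variables are the ones from the question): replace the ~1.5 million singleton SELECTs with one cursor over the whole month, and match its ordered result against the input records in the program:

```sql
-- Hypothetical single-pass cursor: one scan per run instead of ~1.5M probes.
DECLARE MONTH_CSR CURSOR FOR
  SELECT XX_NUM, YY_NUM, SUM(X_AMT)
    FROM MY_TABLE
   WHERE X_YEAR  = :WS-YEAR-CUR
     AND Y_MONTH = :WS-MONTH-CUR
   GROUP BY XX_NUM, YY_NUM
   ORDER BY XX_NUM, YY_NUM;
```

If the input records are sorted on (XX_NUM, YY_NUM) first, the program can do a classic two-file match against the cursor, so each qualifying data page is read at most once instead of being re-fetched for every input record.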