Regarding Cursor Overhead

IBM Mainframe Forums -> DB2
Asif Iqbal (New User)
Posted: Tue Jan 16, 2007 11:56 am

Hi all,

I have program logic that will require an OPEN-FETCH-CLOSE of a cursor as many as 1 million times (once per record read from a file).

I am concerned and confused about the overhead this may cause on DB2.

Is it advisable to go ahead with this logic?
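
Roughly, the logic is as follows (a sketch only; the cursor, file, and variable names are illustrative):

Code:
           PERFORM UNTIL END-OF-FILE
               READ INPUT-FILE INTO WS-INPUT-REC
                   AT END
                       SET END-OF-FILE TO TRUE
                   NOT AT END
                       MOVE WS-IN-KEY TO HV-KEY
      *                One OPEN / FETCH loop / CLOSE per input
      *                record: this is the overhead in question.
      *                FETCH-RESULT-ROWS is a paragraph not shown.
                       EXEC SQL OPEN CSR1 END-EXEC
                       PERFORM FETCH-RESULT-ROWS
                           UNTIL SQLCODE NOT = 0
                       EXEC SQL CLOSE CSR1 END-EXEC
               END-READ
           END-PERFORM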




Thanks,
Asif

priyesh.agrawal (Senior Member)
Posted: Tue Jan 16, 2007 12:01 pm

It will of course cause overhead... what's the need to open and close the cursor so many times?
If it's not really required, move those statements (OPEN and CLOSE) out of the processing loop.

Asif Iqbal (New User)
Posted: Tue Jan 16, 2007 12:12 pm

I understand your point Priyesh, but my requirement is:

1) Read a record from the input file.

2) For each record read, query the table and get a result table created. The result table will have multiple rows, so I need to use a cursor.

3) In essence, for every record read from the file there has to be an OPEN of the cursor.


So this is the case... I do have alternate logic which will use an unload FILE of the table, but first I wanted to confirm with you all...


Please let me know your views.



Thanks,
Asif

William Thompson (Global Moderator)
Posted: Tue Jan 16, 2007 3:46 pm

Are you using the rows retrieved, or just looking for their existence? If the latter, you should be able to optimize the SELECT for one row.
If your requirements need this, use it. Parallel timing against the unloaded table might be useful.
Possibly a summarized version of the larger table could be employed?
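
If only existence matters, a singleton SELECT avoids the cursor altogether. A minimal sketch, assuming a recent DB2 version and illustrative names:

Code:
           EXEC SQL
               SELECT 1 INTO :WS-FOUND
                 FROM <tablename>
                WHERE <key-col> = :HV-KEY
                FETCH FIRST 1 ROW ONLY
           END-EXEC

SQLCODE 0 means a matching row exists; +100 means none does.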

Asif Iqbal (New User)
Posted: Tue Jan 16, 2007 6:25 pm

Yes, I have taken the last approach you suggested:

Quote:
Possibly a summarized version of the larger table could be employed?


But I have another doubt:

My OPEN cursor will create a HUGE result table (> 1 million rows).

I don't know if that is OK or if it will cause performance issues.

Please suggest.


Thanks.

William Thompson (Global Moderator)
Posted: Tue Jan 16, 2007 6:29 pm

Asif Iqbal wrote:
My OPEN cursor will create a HUGE result table (> 1 million rows).
Do you need to process all "> 1 million rows"? If not, how many and which ones?

Asif Iqbal (New User)
Posted: Tue Jan 16, 2007 6:33 pm

I have to process all the rows in my program.

William Thompson (Global Moderator)
Posted: Tue Jan 16, 2007 6:39 pm

It will probably cause some performance issues.
It sounds like you need to take commits and program for restartability; that would improve things.
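
As a sketch of that commit logic (the commit interval, the checkpoint paragraph, and the names here are illustrative assumptions):

Code:
           ADD 1 TO WS-ROWS-SINCE-COMMIT
           IF WS-ROWS-SINCE-COMMIT >= WS-COMMIT-LIMIT
      *        Save the restart position (e.g. last key processed)
      *        via a hypothetical checkpoint paragraph, then commit
               MOVE WS-CUST-ID TO WS-RESTART-KEY
               PERFORM SAVE-CHECKPOINT-ROW
               EXEC SQL COMMIT END-EXEC
               MOVE ZERO TO WS-ROWS-SINCE-COMMIT
           END-IF

Remember that a COMMIT closes any open cursor unless it was declared WITH HOLD.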

Asif Iqbal (New User)
Posted: Tue Jan 16, 2007 6:43 pm

OK William, thanks for your response.

Let me look into this, and I will report back if there are any issues...

adarsha (New User)
Posted: Tue Jan 16, 2007 7:59 pm

Unloading all the data into a flat file in a previous step is a better idea than using SQL statements simply to retrieve the data, which usually affects the performance of the system (including precompile, compile, bind and execution).

One more concern is that sometimes the data retrieved into the flat file will contain junk values, added in place of spaces by the LOAD utility...

You should be careful about that if your program is sensitive to those issues!

DavidatK (Active Member)
Posted: Wed Jan 17, 2007 1:06 am

Asif,

What kind of processing are you going to be doing with the rows fetched from the DB2 tables? Are you doing any table updates? Give us some basic pseudocode of the processing (reading, writing, updating, summarizing, etc.). I'm sure there are many good suggestions to be found here.

Asif Iqbal (New User)
Posted: Wed Jan 17, 2007 2:22 pm

Ok, I am providing the details below.

My cursor query is:

Code:
SELECT COL1, SUM(COL2), SUM(COL3)
  FROM <tablename>
 WHERE COL4 = 'O'
   AND COL5 BETWEEN <range>
 GROUP BY COL1


The OPEN of the cursor will create a huge result table. I need to FETCH each row and write its content to an output file.

I am not doing any table update.
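
For reference, the fetch-and-write processing would look roughly like this (a sketch; the cursor, file, and host-variable names are illustrative):

Code:
           EXEC SQL
               DECLARE CSR1 CURSOR FOR
               SELECT COL1, SUM(COL2), SUM(COL3)
                 FROM <tablename>
                WHERE COL4 = 'O'
                  AND COL5 BETWEEN :HV-LO AND :HV-HI
                GROUP BY COL1
           END-EXEC.

           EXEC SQL OPEN CSR1 END-EXEC
           PERFORM UNTIL SQLCODE NOT = 0
               EXEC SQL
                   FETCH CSR1 INTO :WS-COL1, :WS-SUM2, :WS-SUM3
               END-EXEC
               IF SQLCODE = 0
                   WRITE OUTPUT-REC FROM WS-DETAIL-LINE
               END-IF
           END-PERFORM
           EXEC SQL CLOSE CSR1 END-EXEC

On DB2 V8 or later, multi-row FETCH (a rowset cursor fetching into host arrays) can cut the per-fetch overhead considerably for a result set this size.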

Thanks,
Asif

dick scherrer (Moderator Emeritus)
Posted: Wed Jan 17, 2007 9:52 pm

Hello,

For my $.02, you'll be better off with the unload rather than the SQL.

Asif Iqbal (New User)
Posted: Thu Jan 18, 2007 10:59 am

Oh, OK...

But there is also the concept of TEMPORARY tables, which I think are used in cases where the result table is large.

Can you shed some light on this?


Thanks,
Asif

DavidatK (Active Member)
Posted: Thu Jan 18, 2007 8:05 pm

Global Temporary tables can be very useful, but they do have size limitations that depend on the site setup.
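
For reference, a declared temporary table looks roughly like this (the table name and columns are assumptions for illustration):

Code:
           EXEC SQL
               DECLARE GLOBAL TEMPORARY TABLE SESSION.QTR_REV
                   (CUST_ID  CHAR(10)      NOT NULL,
                    QTR_REV  DECIMAL(15,2) NOT NULL)
               ON COMMIT PRESERVE ROWS
           END-EXEC

Its capacity comes out of the temporary work space the site has allocated, which is the site-setup dependency mentioned above.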

When you say the result table will be large, what are you talking about? How many rows do you expect, and what will the structure be? i.e. how many total bytes will the table take?

I find that I can get more rows in a cursor than I can with a temporary table.

In the example you posted, it looks like the only variable in the cursor is the 'range'. For each record you read from your input file, is the range going to change, forcing a new OPEN of the cursor?

Please explain some more. (I know that sometimes when I try to explain something, especially to my wife, I think it's crystal clear because I understand what I'm saying, but she doesn't. I have to explain in basic, complete terms.)

dick scherrer (Moderator Emeritus)
Posted: Thu Jan 18, 2007 8:11 pm

Hello,

Yes, temporary tables can be used...

Keep in mind that if the data selected to populate the temporary table is massive, the overhead to insert that data into the temporary table will also be massive.

Then there will be the overhead to read it back from the temporary table.

If, on the other hand, you unload the data and process it with QSAM, it will use fewer resources directly, and indirectly it will keep this large process from impacting the rest of the database environment.

While database/SQL is our very good friend, it isn't a good answer for everything.

DavidatK (Active Member)
Posted: Thu Jan 18, 2007 8:40 pm

I agree with you, Dick: using an unload of the table(s) can be much more efficient than going against the DB2 table if the percentage of rows used is high.

What I'm having trouble getting my arms around is why the cursor needs to be opened/closed for every input record being processed. If the cursor gives a different result each time, I'm not sure how he would process a flat file and get the same result. Opening/closing and processing the flat file for each input record would also incur heavy resource use.

I'm thinking that if we knew the whole specification and data structure, we could probably come up with reasonably efficient logic.

William Thompson (Global Moderator)
Posted: Thu Jan 18, 2007 8:48 pm

DavidatK wrote:
What I'm having trouble getting my arms around is why the cursor needs to be opened/closed for every input record being processed. If the cursor gives a different result each time,
If the input file had different criteria for each subsequent select, wouldn't that require the close/open?
Quote:
1) Read a record from the input file.
2) For each record read, query the table and get a result table created. The result table will have multiple rows, so I need to use a cursor.
3) In essence, for every record read from the file there has to be an OPEN of the cursor.

Asif Iqbal (New User)
Posted: Fri Jan 19, 2007 5:28 pm

Quote:
if we knew the whole specification and data structure, we could probably come up with reasonably efficient logic


I am presenting the whole issue in a simplified form.

Pardon me for the length :-)

1) The DB2 table has monthly revenue data for each customer, so it has columns like CUST_ID, MONTH, REV, etc. Primary key: CUST_ID + MONTH.

It's not that every customer has 12 rows in the table. If customer 'A' has done business (REV > 0) in 4 months, then there will be only 4 rows for him, and so on.

2) The requirement is to prepare a report with the revenue totals of the last quarter for each customer.

3) My first approach was to use an input file containing the list of all CUST_IDs: read it one by one, query the table using the CUST_ID from the file, process the result table, do the totals, and write the output.

But this would have required as many OPEN-CLOSEs of the cursor as there are CUST_IDs in the input file (which is in the millions).

4) The other approach was to use the query below:

Code:
SELECT CUST_ID, SUM(REV)
  FROM <tablename>
 WHERE MONTH BETWEEN M1 AND M3
 GROUP BY CUST_ID

M1 and M3 are the first and third months of the last quarter.

Quote:
When you say the result table will be large, what are you talking about? How many rows do you expect, and what will the structure be? i.e. how many total bytes will the table take?


The query would create a result table having millions of rows (one per CUST_ID).
Each row will be 52 bytes.


5) The next approach is to use an unload file of the table: do an initial SORT, read it sequentially in the program, compute the quarter totals for each CUST_ID, and write the output.

Quote:
I'm not sure how he would process a flat file and get the same result. Opening/closing and processing the flat file for each input record would also incur heavy resource use.


This approach would require only one open-close of the file; the quarter totals fall out of a simple control break over the sorted unload, as sketched below.
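
A sketch of that control-break logic (the record layout and names are assumptions for illustration; first-record priming is omitted for brevity):

Code:
           PERFORM UNTIL END-OF-FILE
               READ UNLOAD-FILE INTO WS-UNLOAD-REC
                   AT END SET END-OF-FILE TO TRUE
               END-READ
               IF NOT END-OF-FILE
                   IF WS-CUST-ID NOT = WS-PREV-CUST-ID
      *                New customer: write the finished total
                       PERFORM WRITE-CUST-TOTAL
                       MOVE WS-CUST-ID TO WS-PREV-CUST-ID
                       MOVE ZERO TO WS-QTR-REV
                   END-IF
                   ADD WS-REV TO WS-QTR-REV
               END-IF
           END-PERFORM
      *    Flush the total for the last customer
           PERFORM WRITE-CUST-TOTAL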

So this is the whole story; I hope I have made myself clear.


Waiting for your replies...

dick scherrer (Moderator Emeritus)
Posted: Fri Jan 19, 2007 8:49 pm

Hello,

I'd suggest the unload and sort, then matching your list of CUST_IDs against the unloaded/sorted data.

When you unload and sort the data, use a VERY large block size. While you'll need only one open/close of the flat files, using a large block size will dramatically reduce your run time.

This will far outperform the SQL alternatives.