bibek24
New User
Joined: 14 Aug 2007 Posts: 35 Location: Hyderabad
Hi All,
I have a table with some data, and I want to use that data in my COBOL program. Performance-wise, which of the following would be better? Could you please advise?
1) For each input record, make a DB2 call to query the table and process the result in the program. Let's say my input has 100 records; there would then be 100 DB2 calls to fetch the contents from the table and process them further in the program.
2) Fetch the table contents with a cursor and load them into a COBOL internal table first; then, for each input record, search the internal table and process it further in the program. Here the 100 calls are reduced to a single cursor, but the internal table has to be searched for each of the 100 input records.
Which is better performance-wise? I don't have time to test the performance, because whichever approach I opt for I will have to stick with, as there is a time constraint. So I just wanted to seek your suggestions.
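For what it's worth, here is a minimal sketch of the two options. All names (LOOKUP_TBL, ACCT_ID, the host variables) are made up for illustration, and the sizes are placeholders:

```cobol
      *--- Option 1: one singleton SELECT per input record
           EXEC SQL
               SELECT DESC_TEXT INTO :WS-DESC
               FROM   LOOKUP_TBL
               WHERE  ACCT_ID = :IN-ACCT-ID
           END-EXEC

      *--- Option 2: load once via cursor, then SEARCH ALL in storage.
      *    SEARCH ALL needs the table sorted on the key, so the load
      *    cursor should use ORDER BY ACCT_ID.
       01  WS-LOOKUP-TBL.
           05  WS-ROW-CNT          PIC S9(4) COMP VALUE ZERO.
           05  WS-ROW OCCURS 1 TO 5000 TIMES
                      DEPENDING ON WS-ROW-CNT
                      ASCENDING KEY IS WS-ACCT-ID
                      INDEXED BY WS-IDX.
               10  WS-ACCT-ID      PIC X(10).
               10  WS-DESC-TEXT    PIC X(30).

      *    ... after the FETCH loop has filled WS-ROW ...
           SEARCH ALL WS-ROW
               AT END PERFORM 9000-NOT-FOUND
               WHEN WS-ACCT-ID (WS-IDX) = IN-ACCT-ID
                    PERFORM 2000-PROCESS-MATCH
           END-SEARCH
```

Note the OCCURS 1 TO 5000 ceiling in option 2: that is exactly the fixed limit the replies below warn about.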
Bill Woodger
Moderator Emeritus
Joined: 09 Mar 2011 Posts: 7309 Location: Inside the Matrix
I'd go for understandability and maintainability. Accessing the data as you need it is much more flexible. If you store data in a COBOL table, there is always some limit you have to impose - and then what?
Unless you have enormous amounts of data, I don't think you should worry about the performance of one method over the other.
If you do have enormous amounts of data, you have to make the time to do the analysis: at the end of the day you have to mock up both methods with reasonable volumes and under your site/system-specific conditions.
If you make the program(s) easy to understand and change, your support/maintenance people will thank you for it. Well, probably not in so many words, but they'll prefer changes to your programs over some other piece of rubbish which uses "cool" techniques, probably wrongly, and which takes five times the effort to change and test and delays production for four hours when it abends (which is three or four times a year).
Jose Mateo
Active User
Joined: 29 Oct 2010 Posts: 121 Location: Puerto Rico
Good day to all!
First, I agree with Bill, but your concern is performance. If your version of DB2 supports multi-row fetch, then option 1 combined with multi-row fetch would be the best. Google "DB2 multi-row fetch". Otherwise I would go with option number 2.
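For reference, multi-row fetch (available on DB2 for z/OS from V8) uses a rowset cursor and host-variable arrays. A rough sketch, with invented table and variable names and an assumed rowset size of 100:

```cobol
           EXEC SQL
               DECLARE C1 CURSOR WITH ROWSET POSITIONING FOR
               SELECT ACCT_ID, DESC_TEXT
               FROM   LOOKUP_TBL
           END-EXEC

       01  WS-ACCT-ID-ARR   PIC X(10) OCCURS 100 TIMES.
       01  WS-DESC-ARR      PIC X(30) OCCURS 100 TIMES.

           EXEC SQL
               FETCH NEXT ROWSET FROM C1
               FOR 100 ROWS
               INTO :WS-ACCT-ID-ARR, :WS-DESC-ARR
           END-EXEC
      *    SQLERRD(3) holds the number of rows actually returned,
      *    which may be fewer than 100 on the last rowset.
```

One FETCH per 100 rows instead of one per row cuts the number of SQL calls considerably.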
Marso
REXX Moderator
Joined: 13 Mar 2006 Posts: 1353 Location: Israel
Here are a few points to consider:
- The size of your input and the size of your table are important criteria. If you have a small input and a large table, preloading will almost certainly not be worth it. On the contrary, if you have a very large input and a small table, it may well be worth it.
- As Bill said, COBOL tables are limited in size, and this might be a problem.
- Do you know what percentage of the table will actually be used?
- Another thing to determine is the number of times the same row will be fetched from the table.
Can you sort your input file on the field used for the DB2 SELECT, and actually do a SELECT only when the value changes?
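That last idea could look something like the sketch below. The names are illustrative, and it assumes the input file has already been sorted on the lookup key, so duplicate keys arrive together and reuse the previous result:

```cobol
           MOVE HIGH-VALUES TO WS-PREV-ACCT-ID
           PERFORM UNTIL END-OF-INPUT
               READ INPUT-FILE INTO WS-IN-REC
                   AT END SET END-OF-INPUT TO TRUE
                   NOT AT END
      *                Only hit DB2 when the key changes; otherwise
      *                WS-DESC still holds the previous result
                       IF IN-ACCT-ID NOT = WS-PREV-ACCT-ID
                           EXEC SQL
                               SELECT DESC_TEXT INTO :WS-DESC
                               FROM   LOOKUP_TBL
                               WHERE  ACCT_ID = :IN-ACCT-ID
                           END-EXEC
                           MOVE IN-ACCT-ID TO WS-PREV-ACCT-ID
                       END-IF
                       PERFORM 2000-PROCESS-RECORD
               END-READ
           END-PERFORM
```

If the input has many repeated keys, this keeps the per-record flexibility of option 1 while doing far fewer SQL calls.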