sunnybunny
New User
Joined: 29 Nov 2005 Posts: 49
Hello,
I have the following requirement; could anyone please help me with it?
I have a sequential input file with 100 records. I read the first record from the input file, then read the DB2 database, passing one of the input fields as the key. The database will have multiple rows for that key (at most 5 entries). I need to read all of those rows, store them in an array, and do some processing. After that I read the second record from the input file and read the database again, and this process continues until end of file.
Example:
input file
100
200
300
database
100 vvvv
100 cccc
100 dddd
200 yyyy
200 cccc
300 dddd
300 ssss
300 fghh
300 kkkk
My question is: in this approach, for each and every record from the input file I have to open and close the cursor, which may cause a performance issue.
Is there an alternative way to do this? Please help.
Thanks,
Sunny.
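For readers following along, the per-record pattern Sunny describes can be sketched like this. This is only an illustration: Python's sqlite3 stands in for the DB2 table and the embedded-SQL cursor, and the table name `t`, columns `col1`/`col2`, and sample data are all invented to match the example above.

```python
import sqlite3

# In-memory stand-in for the DB2 table: (key, value) rows, at most 5 per key.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (col1 INTEGER, col2 TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [
    (100, "vvvv"), (100, "cccc"), (100, "dddd"),
    (200, "yyyy"), (200, "cccc"),
    (300, "dddd"), (300, "ssss"), (300, "fghh"), (300, "kkkk"),
])

input_records = [100, 200, 300]          # the sequential input file

results = {}
for key in input_records:
    # One cursor open/fetch/close per input record -- the pattern in question.
    rows = con.execute(
        "SELECT col2 FROM t WHERE col1 = ? ORDER BY col2", (key,)
    ).fetchall()
    results[key] = [r[0] for r in rows]  # "store it in an array" (max 5 rows)

print(results)                           # then do some processing per key
```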
Escapa
Senior Member
Joined: 16 Feb 2007 Posts: 1399 Location: IL, USA
Dump the input file's records into a temporary table and join it with the other table.
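A sketch of that idea, again with sqlite3 standing in for DB2 (in real DB2 this would typically be a DECLARE GLOBAL TEMPORARY TABLE loaded from the input file, then a single cursor over the join; table and column names here are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (col1 INTEGER, col2 TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [
    (100, "vvvv"), (100, "cccc"), (200, "yyyy"), (300, "dddd"),
    (999, "zzzz"),                     # a key that is NOT in the input file
])

# "Temporary table" loaded with the input file's keys.
con.execute("CREATE TEMP TABLE keys (col1 INTEGER)")
con.executemany("INSERT INTO keys VALUES (?)", [(100,), (200,), (300,)])

# One cursor for the whole run instead of one open/close per input record.
rows = con.execute(
    "SELECT t.col1, t.col2 FROM t JOIN keys k ON t.col1 = k.col1 "
    "ORDER BY t.col1, t.col2"
).fetchall()
print(rows)                            # key 999 is filtered out by the join
```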
Steve Davies
New User
Joined: 15 Oct 2009 Posts: 32 Location: UK
Do you really think you'll have a performance problem reading 100 records and opening a cursor for each one? Assuming your access to the table is via a suitable index (i.e. the key from the input file: 100, 200, 300, whatever that field is), it hardly seems likely to cause any trouble!
GuyC
Senior Member
Joined: 11 Aug 2009 Posts: 1281 Location: Belgium
It depends.
If you have 100 records in the file but 10,000,000 rows in the table, with an index on col1, it is probably best to re-open the query for each record.
If you have 5,000 rows in the table, you might consider opening the cursor once at the beginning of the program, with no condition on col1 but ordered by col1, and coding some kind of match-merge program.
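GuyC's match-merge idea can be sketched as follows: one cursor over the whole table ordered by col1, no per-key predicate, merged against the sorted input keys. sqlite3 again stands in for DB2, and the data (including a table key with no matching input record) is invented for illustration:

```python
import sqlite3
from itertools import groupby

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (col1 INTEGER, col2 TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [
    (100, "vvvv"), (100, "cccc"),
    (200, "yyyy"),
    (250, "xxxx"),                     # table key with no input record
    (300, "dddd"), (300, "ssss"),
])

input_keys = [100, 200, 300]           # input file, already sorted by key

# Open the cursor ONCE, ordered by col1, with no condition on col1.
cursor = con.execute("SELECT col1, col2 FROM t ORDER BY col1, col2")
grouped = groupby(cursor, key=lambda row: row[0])

matched = {}
key_iter = iter(input_keys)
current = next(key_iter, None)
for tab_key, rows in grouped:
    # Advance the input file until it catches up with the table key.
    while current is not None and current < tab_key:
        current = next(key_iter, None)  # input key with no table rows
    if current == tab_key:
        matched[current] = [r[1] for r in rows]
        current = next(key_iter, None)

print(matched)                          # unmatched table key 250 is skipped
```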
sunnybunny
New User
Joined: 29 Nov 2005 Posts: 49
Thanks for your replies.
Sorry, I mentioned 100 records only as an example; the input file will actually have around 10 lakh (1,000,000) records.
But for each record in the input file, the database will contain at most 5 rows.
Thanks,
Sunny
Steve Davies
New User
Joined: 15 Oct 2009 Posts: 32 Location: UK
OK, lakh = 100,000, correct? Well, there are a few things you could do:
1) Give your program a go as you describe it in your post, since you don't actually know yet whether there is a performance problem with it.
If there is a problem, you could:
2) Stream your batch job so you have x occurrences of the job running. You'll need to split your input file into x files; they can then all run together. Watch out for contention between streams, though.
3) Unload the table, ensure the unload is in the same sort order as the input file, and read both sequentially, matching against the table unload file. This assumes you won't need to update the table.
No doubt there are many more things you could do, but it's a bit difficult without knowing what processing the program is doing.
No doubt other people will also post other ideas.
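Option 3 above amounts to a two-file match: the table unload and the input file are both sequential datasets in the same sort order, so a single pass over each suffices. A minimal sketch (file contents simulated with in-memory strings; the layout of the unload records is invented for illustration):

```python
import io

# Stand-ins for the two sequential datasets, both sorted by key.
input_file = io.StringIO("100\n200\n300\n")
unload_file = io.StringIO(
    "100 vvvv\n100 cccc\n200 yyyy\n250 xxxx\n300 dddd\n"
)

def unload_rows(f):
    """Yield (key, value) pairs from the table unload file."""
    for line in f:
        key, value = line.split()
        yield int(key), value

rows = unload_rows(unload_file)
pending = next(rows, None)

matches = []
for line in input_file:
    key = int(line)
    # Skip unload rows whose key precedes the current input key...
    while pending is not None and pending[0] < key:
        pending = next(rows, None)
    # ...then collect every unload row that matches it.
    while pending is not None and pending[0] == key:
        matches.append(pending)
        pending = next(rows, None)

print(matches)   # unload key 250 has no input record, so it is dropped
```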
sunnybunny
New User
Joined: 29 Nov 2005 Posts: 49
Thanks Steve.