Munish Singla
New User
Joined: 18 Jul 2007, Posts: 21, Location: Kolkata
I have a query.
In our production system there are some load tables that are populated daily with thousands of rows. These tables have no indexes defined on them, because the cost of creating the indexes can exceed the cost of reading the tables. Each table is read only once.
Can we improve the read performance of these tables?
Thanks in advance
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006, Posts: 19244, Location: Inside the Matrix
Hello,
You have not posted enough information for us to make suggestions.
Quote:
Because cost of creating indexes can be more than reading the tables
Possibly. . .
Quote:
These tables are read only once.
Are the tables read sequentially one time? If there were some kind of "key", might the records be read as needed instead of reading the entire table?
If you describe the processing done with these tables, we may be able to offer some suggestions.
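To illustrate the point about keyed access: a lookup by key against an unindexed table forces a full scan, while an index turns it into a direct search. This is a minimal sketch using SQLite as a stand-in for DB2; the table and column names (load_tbl, file_id) are hypothetical.

```python
import sqlite3

# Build a small unindexed table (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE load_tbl (file_id INTEGER, payload TEXT)")
conn.executemany(
    "INSERT INTO load_tbl VALUES (?, ?)",
    [(i % 100, "row-%d" % i) for i in range(10000)],
)

# Without an index, a WHERE file_id = ? query must scan every row.
scan_detail = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM load_tbl WHERE file_id = 42"
).fetchone()[3]
print(scan_detail)  # reports a full-table SCAN

# With an index on the lookup column, the same query becomes a keyed SEARCH.
conn.execute("CREATE INDEX ix_file_id ON load_tbl (file_id)")
search_detail = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM load_tbl WHERE file_id = 42"
).fetchone()[3]
print(search_detail)  # reports an index SEARCH
```

Whether the one-time cost of building the index pays off depends on how many keyed lookups are done against the table before it is discarded.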
Munish Singla
These tables are read only once, but because of the large volume of data, reading is slow.
No indexes or keys are defined on these tables. The tables are just replicas of flat files and are read on the basis of a file-id.
Let me know if this info helps
dick scherrer
Hello,
If the tables are read only once, sequentially, and they are merely replicas of existing sequential files, why take the time to load the data into database tables and then read it again only once (sequentially)?
It seems like it would be much less overhead to simply read the data from the QSAM file(s).
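The suggestion above amounts to a single sequential pass over the flat file, skipping the load-then-read round trip. A minimal sketch, assuming a hypothetical fixed-width record layout (columns 0-7 hold the file-id, the rest is the payload):

```python
import io

# Read the flat file directly, one record at a time, instead of
# loading it into a table first. The layout is a hypothetical example.
def read_records(lines):
    for line in lines:
        yield {"file_id": line[:8].strip(), "payload": line[8:].rstrip("\n")}

# Usage with an in-memory stand-in for the QSAM dataset:
sample = io.StringIO("00000042SOME PAYLOAD DATA\n00000043MORE DATA\n")
records = list(read_records(sample))
print(records[0]["file_id"])   # "00000042"
print(records[1]["payload"])   # "MORE DATA"
```

Each record is available for processing as soon as it is read; nothing is written to or fetched from the database.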
Munish Singla
I don't know. This has been running in production the same way for the last 4 years, and I am new to the project.
dick scherrer
Hello,
I'd suggest spending some time researching how the process came to be implemented this way.
You might be able to find someone who was part of the implementation, or who later worked on it, and knows some of the history of how this came to be.
It might be interesting to run a test of the same process using the sequential files and comparing the time needed and the output files/reports.
Munish Singla
Actually, the data is validated from the flat files and put into the load tables.
So the load tables are basically used for the validation process.
Is there any way to improve the performance?
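If the load tables exist only to support validation, one option worth testing is to validate each record in the same pass that reads the flat file. A minimal sketch; the rules here are hypothetical placeholders for the real validation logic:

```python
# Split records into accepted/rejected streams during the single read,
# with no intermediate database table. Rules below are illustrative only.
def validate(rec):
    # e.g. the file-id must be numeric and the payload non-empty
    return rec["file_id"].isdigit() and bool(rec["payload"])

records = [
    {"file_id": "00000042", "payload": "GOOD ROW"},
    {"file_id": "BADID", "payload": "REJECTED ROW"},
]
good = [r for r in records if validate(r)]
bad = [r for r in records if not validate(r)]
print(len(good), len(bad))  # 1 1
```

Whether this is feasible depends on whether the validation needs cross-record lookups that the tables currently provide.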
dick scherrer
Hello,
Quote:
So load tables are basically used for validation process.
If you post the code that works with the tables, we may have suggestions.