vinuthna2004
New User
Joined: 30 Dec 2005 Posts: 6
We are using the BMC UNLOAD utility to unload a huge table, a minimum of 8 million records, daily. We want to remove the ORDER BY clause from the SELECT query, since it consumes too many DB2 resources, and instead use a SORT program to sort the unloaded file. What is the maximum number of temporary sort work files it can allocate?
There is also another table with 91 partitions, i.e. I need to write a job that unloads 91 x 8 million records (with 7 primary key columns) and then uses the SORT program to sort on those 7 keys. Will the temporary work files be sufficient to sort the whole table (all 91 partitions)? If I need extra sort work files, how do I specify them? Also, is there a way to unload all partitions at once, instead of unloading a few at a time and running the job multiple times? Our maintenance window is only 3 hrs.
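For reference, extra sort work space is normally supplied either as SORTWKnn DD statements or via dynamic allocation. Below is a hypothetical JCL sketch; the dataset names, key positions, and space figures are placeholders, and the maximum number of SORTWKnn DDs your sort product accepts should be confirmed in its manual.

```jcl
//* Hypothetical step: sort the unloaded file on the key columns.
//* Dataset names, key positions, and space figures are placeholders.
//SORTSTEP EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DISP=SHR,DSN=YOUR.UNLOAD.FILE
//SORTOUT  DD DSN=YOUR.SORTED.FILE,
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(2100,200),RLSE)
//* Explicit work files: add more SORTWKnn DDs as needed; check the
//* sort product's manual for the maximum number it supports.
//SORTWK01 DD UNIT=SYSDA,SPACE=(CYL,(800),,CONTIG)
//SORTWK02 DD UNIT=SYSDA,SPACE=(CYL,(800),,CONTIG)
//SORTWK03 DD UNIT=SYSDA,SPACE=(CYL,(800),,CONTIG)
//SYSIN    DD *
  SORT FIELDS=(1,10,CH,A,11,5,CH,A)
/*
```

Alternatively, omit the SORTWKnn DDs and let the sort product allocate work space itself (in DFSORT, for example, `OPTION DYNALLOC=(SYSDA,8)`; check your Syncsort manual for its equivalent).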
murmohk1
Senior Member
Joined: 29 Jun 2006 Posts: 1436 Location: Bangalore,India
For SyncSort technical issues, suggestions from Alissa Margulies (SyncSort Mainframe Product Services) might come in handy.
William Thompson
Global Moderator
Joined: 18 Nov 2006 Posts: 3156 Location: Tucson AZ
Syncsort handles large volumes; check the manual.
If you need a manual, a phone call or an email from your company's email address should do the trick.
Check out this link for more information.
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
Typically, 8 million records is not such a large volume. What is the LRECL of the data to be sorted?
vinuthna2004
New User
Joined: 30 Dec 2005 Posts: 6
LRECL is 209.
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
My apologies (I missed the 8 million times 91).
That would make each partition's file around 1.7 billion characters (8,000,000 x 209). At roughly 55,000 bytes per track, that is just over 30,000 tracks, which is a bit over 2,000 cylinders; the exact figure depends on the DASD model used.
Will you be able to do some timing tests? How long does the unload and sort take for 1 partition?
If you talk with your storage management people, they can advise whether this would need special scheduling or whether there would typically be sufficient transient space available.
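A quick back-of-the-envelope check of the estimate above, assuming a 3390-class device at roughly 55,000 usable bytes per track and 15 tracks per cylinder (both figures are approximations that vary by DASD model):

```python
# Rough DASD space estimate for one partition's unload file.
records = 8_000_000          # rows per partition (from the post)
lrecl = 209                  # record length in bytes
bytes_per_track = 55_000     # approximate usable capacity, 3390-class DASD
tracks_per_cyl = 15          # 3390 geometry

total_bytes = records * lrecl                 # about 1.7 billion characters
tracks = -(-total_bytes // bytes_per_track)   # ceiling division
cylinders = -(-tracks // tracks_per_cyl)

print(total_bytes, tracks, cylinders)         # 1672000000 30400 2027
```

This agrees with the figures quoted in the post: just over 30,000 tracks, a bit over 2,000 cylinders per partition.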
vinuthna2004
New User
Joined: 30 Dec 2005 Posts: 6
Thanks for your valuable suggestion. It took 10 minutes per partition when we did volume testing, and 90 minutes in total to unload, sort, and load all the partitions.
We are planning to change the online program to handle the unavailability of the DB2 tables outside the maintenance window, only for that particular day, since we have 5 tables that go through the same process in production.
Dropping the NPIs worked out very well. With the NPIs, the LOAD and UNLOAD took 9 hrs to complete; after we replaced the NPIs with new tables (keyed on the former NPI columns), the same task took 10 minutes.
As per your suggestion, Dick, I will follow up with the storage group about unloading all existing data from the 91 partitions and loading it into the new 91-partition tables that replaced the NPIs. If that is not possible, they can at least give me an estimate of how many records I can sort in one run.
Thanks a lot!
Vinu
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
You're welcome - good luck