bracelet
New User
Joined: 20 May 2006 Posts: 3
Hi,
I have a table containing 15 million rows across 30 partitions, on DB2 V7.2. I am running the UNLOAD utility, and it takes 3 hours 30 minutes to unload the 30 partitions. I would like to reduce the run time of this job; I already tried SET CURRENT DEGREE = 'ANY'. Any suggestions, please?
2) I am sorting the unload output with SYNCSORT, using temporary work datasets SORTWK01-05. Is there a way to have the system allocate the sort work datasets dynamically, to reduce the sort run time? Any suggestions, please?
Thanks, John
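On the sort question: a minimal JCL sketch, assuming your site's SYNCSORT honors the DYNALLOC runtime option through a $ORTPARM DD (dataset names, unit name, and sort field here are hypothetical placeholders; check your installation defaults before relying on this). The idea is to drop the SORTWK01-05 DDs and let the sort product allocate work space itself:

```jcl
//* Sketch: dynamic sort work allocation instead of coded SORTWKnn DDs.
//* Assumes SYNCSORT accepts DYNALLOC via $ORTPARM at this site.
//SORT01   EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=UNLOAD.OUTPUT.DATA,DISP=SHR
//SORTOUT  DD DSN=SORTED.OUTPUT.DATA,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(500,100),RLSE)
//$ORTPARM DD *
  DYNALLOC=(SYSDA,8)
/*
//SYSIN    DD *
  SORT FIELDS=(1,10,CH,A)
/*
```

With DYNALLOC=(SYSDA,8) the sort may allocate up to 8 work areas on SYSDA as needed, which avoids under- or over-sizing hand-coded SORTWKnn datasets.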
Gurmeet
New User
Joined: 22 Feb 2006 Posts: 46 Location: Pune
Try unloading one partition at a time; it should reduce the run time significantly.
~Gurmeet
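A single-partition step could be sketched like this, using IBM's DSNUPROC utility procedure (the subsystem, UID, dataset, database, and tablespace names are placeholders, not the poster's actual ones):

```jcl
//* Sketch: unload one partition per job step via the DB2 UNLOAD utility.
//STEP01   EXEC DSNUPROC,SYSTEM=DSN1,UID='UNLD.P01'
//SYSREC   DD DSN=UNLOAD.PART01.DATA,DISP=(NEW,CATLG),UNIT=SYSDA,
//            SPACE=(CYL,(300,50),RLSE)
//SYSPUNCH DD DSN=UNLOAD.PART01.PUNCH,DISP=(NEW,CATLG),UNIT=SYSDA,
//            SPACE=(TRK,(5,5),RLSE)
//SYSIN    DD *
  UNLOAD TABLESPACE DB1.TS1 PART 1
    PUNCHDDN SYSPUNCH UNLDDN SYSREC EBCDIC
/*
```

Repeating this per partition keeps each step's sort work and restart scope small.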
bracelet
New User
Joined: 20 May 2006 Posts: 3
It takes a lot more time if I unload one partition at a time.
Gurmeet
New User
Joined: 22 Feb 2006 Posts: 46 Location: Pune
bracelet,
I have a much bigger table in one of our production environments and we follow the same process (unload one partition at a time); it speeds things up.
If you don't mind, could you describe the complete process you are following? Maybe I can help you in a more effective manner.
~Gurmeet
bracelet
New User
Joined: 20 May 2006 Posts: 3
So you are saying I should use 30 steps in the unload job, with each step's control card running the UNLOAD utility against a single partition. Is that right?
Thanks
Gurmeet
New User
Joined: 22 Feb 2006 Posts: 46 Location: Pune
Yes, and it has a lot of advantages:
* Less sort work space is required.
* The unload/load runs at a faster rate.
* If you hit a problem, you can restart from the failing partition rather than redoing the whole table space.
I tried the single-job approach on a tablespace with 20 partitions; the job used to abend after running 9-10 hours for lack of sort space or some other trivial error, and managing such a huge sort space is difficult and consumes a lot of time.
Working on one partition at a time makes life much easier.
anamikak
New User
Joined: 10 May 2006 Posts: 64 Location: Singapore
Are you using the BMC unload utility (UNLOAD PLUS)? If not, give it a try; it is much faster and uses parallel engines to unload data from the partitions. UNLOAD PLUS also has an option called FILTERPART that filters out partitions, provided you use the first column of the clustering index that drives the partitioning.
mahsug
New User
Joined: 01 Jul 2003 Posts: 15 Location: Phoenix , AZ
Hi,
Agreeing with Gurmeet and bracelet: try coding multiple UNLOAD statements in a single control card, one per partition, with a separate output dataset for each partition, and place the datasets on different volumes to avoid contending I/O. This makes effective use of partition parallelism.
That is, the DD statements would be something like:
//DD1      DD DSN=A.B.C1,DISP=......,VOL=SER=ABC1
//DD2      DD DSN=A.B.C2,DISP=......,VOL=SER=ABC2
//DD3      DD DSN=A.B.C3,DISP=......,VOL=SER=ABC3
//DD4      DD DSN=A.B.C4,DISP=......,VOL=SER=ABC4
//DD5      DD DSN=A.B.C5,DISP=......,VOL=SER=ABC5
and the unload control card would be like:
UNLOAD TABLESPACE DB1.TS1 PART 1 PUNCHDDN SYSPUNCH UNLDDN DD1 EBCDIC
UNLOAD TABLESPACE DB1.TS1 PART 2 PUNCHDDN SYSPUNCH UNLDDN DD2 EBCDIC
UNLOAD TABLESPACE DB1.TS1 PART 3 PUNCHDDN SYSPUNCH UNLDDN DD3 EBCDIC
UNLOAD TABLESPACE DB1.TS1 PART 4 PUNCHDDN SYSPUNCH UNLDDN DD4 EBCDIC
UNLOAD TABLESPACE DB1.TS1 PART 5 PUNCHDDN SYSPUNCH UNLDDN DD5 EBCDIC
Hope this helps with your requirement.
Mahesh
Hanfur
Active User
Joined: 21 Jun 2006 Posts: 104
Why not use a high-performance unload utility like INZUTILB?
krk_kumar1
New User
Joined: 14 Mar 2006 Posts: 39
Hi,
You can run the job 20-30 wide by specifying a WHERE clause in your unload based on the partitioning key. I think this should solve your problem, unless you need the entire unload to be done in a single job.
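If the unload is driven through the DSNTIAUL sample program rather than the UNLOAD utility, the key-range split might look like this (subsystem, plan, table, column names, and the key range are all hypothetical placeholders; each of the N parallel jobs would get its own range):

```jcl
//* Sketch: one of N parallel DSNTIAUL jobs, each unloading one
//* partitioning-key range. Names below are illustrative only.
//UNLDJOB1 EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD SYSOUT=*
//SYSREC00 DD DSN=UNLOAD.RANGE01.DATA,DISP=(NEW,CATLG),UNIT=SYSDA,
//            SPACE=(CYL,(300,50),RLSE)
//SYSPUNCH DD SYSOUT=*
//SYSTSIN  DD *
  DSN SYSTEM(DSN1)
  RUN PROGRAM(DSNTIAUL) PLAN(DSNTIB72) PARMS('SQL')
//SYSIN    DD *
  SELECT * FROM CREATOR.MYTABLE
  WHERE PART_KEY BETWEEN 1 AND 500000;
/*
```

Aligning each job's key range with the partition boundaries keeps every job reading only its own partitions.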
leela_krsna
New User
Joined: 09 Aug 2006 Posts: 3
Did you try simultaneous partition-wise UNLOAD jobs?
Use UNLOAD with the PART n option to unload a specific partition, or PART n1:n2 to unload a range of consecutive partitions (e.g., parts 6:10).
With this you can submit n different jobs, each with SHRLEVEL CHANGE ISOLATION UR.
This reduces the elapsed unload time roughly by the number of jobs you run simultaneously. Of course, the storage allocated simultaneously needs to be considered.
I unload a 25-partition, 1-billion-row table this way:
UNLOAD PART 1:5 - Job 1
UNLOAD PART 6:10 - Job 2
UNLOAD PART 11:15 - Job 3
UNLOAD PART 16:20 - Job 4
UNLOAD PART 21:23 - Job 5
UNLOAD PART 24:25 - Job 6
Trigger jobs 1, 2 and 6 at once. Job 6 is very large, so I let it run its full length. When 1 and 2 are done, bring in 3 and 4, and then 5.
This way I complete the unload in around 3-4 hours.
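The control card for one of those concurrent jobs would be along these lines (the database, tablespace, and DD names are placeholders; SHRLEVEL CHANGE ISOLATION UR lets the unload read uncommitted data without blocking applications):

```jcl
//* Sketch: one parallel job unloading a range of partitions while
//* the tablespace stays available for updates.
//SYSIN    DD *
  UNLOAD TABLESPACE DB1.TS1 PART 1:5
    SHRLEVEL CHANGE ISOLATION UR
    PUNCHDDN SYSPUNCH UNLDDN SYSREC EBCDIC
/*
```

Each job carries the same statement with its own PART range and its own SYSREC/SYSPUNCH datasets.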