IBM Mainframe Forum Index
 
reduce run time for large unload table


IBM Mainframe Forums -> DB2
bracelet

New User


Joined: 20 May 2006
Posts: 3

PostPosted: Sun May 21, 2006 6:07 pm

Hi,
I have a table with 15 million rows in 30 partitions, on DB2 V7.2. I am running the UNLOAD utility and it takes 3 hr 30 min to unload all 30 partitions. I would like to reduce the run time of this job. I tried SET DEGREE = 'ANY'. Is there any other way to reduce the run time? Any suggestions, please.

2) I am sorting the unload output with SyncSort. How can I reduce its run time? I am using SORTWK01-05 temp datasets. Is there a way to have the system dynamically allocate the sort work datasets itself? Any suggestions, please.

Thanks, John
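On the sort work question, one possible approach: both SyncSort and DFSORT support dynamic allocation of sort work space, so you can drop the SORTWK01-05 DDs and let the product allocate its own. A minimal sketch, assuming dynamic allocation is enabled at your site (dataset names, device type, and work-area count are illustrative):

```jcl
//* Sketch: omit SORTWKnn DDs and request dynamic sort work allocation.
//SORTSTEP EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=UNLOAD.OUTPUT.DATA,DISP=SHR
//SORTOUT  DD DSN=UNLOAD.SORTED.DATA,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(500,100),RLSE)
//SYSIN    DD *
  SORT FIELDS=(1,10,CH,A)
* Let the sort product allocate 4 work areas on SYSDA by itself
  OPTION DYNALLOC=(SYSDA,4)
/*
```

With DYNALLOC the system sizes and places the work datasets for each run, which avoids both undersized fixed SORTWK allocations and the abends they cause.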
Gurmeet

New User


Joined: 22 Feb 2006
Posts: 46
Location: Pune

PostPosted: Tue May 23, 2006 10:42 pm

Try unloading one partition at a time; it should reduce the run time significantly.

~Gurmeet
bracelet

New User


Joined: 20 May 2006
Posts: 3

PostPosted: Thu May 25, 2006 4:17 am

It takes a lot more time if I unload one partition at a time.
Gurmeet

New User


Joined: 22 Feb 2006
Posts: 46
Location: Pune

PostPosted: Thu May 25, 2006 4:40 am

bracelet,

I have a much bigger table on one of our production environments and we follow the same process (unload one partition at a time); it speeds things up.

If you don't mind, could you share the complete process you are following? Maybe I can help you more effectively.

~Gurmeet
bracelet

New User


Joined: 20 May 2006
Posts: 3

PostPosted: Sat Jun 03, 2006 2:44 pm

So you are saying I should use 30 steps in the unload job, each step using a control card to unload one partition. Is that right?
Thanks
Gurmeet

New User


Joined: 22 Feb 2006
Posts: 46
Location: Pune

PostPosted: Sun Jun 04, 2006 5:19 am

Yes, and it has several advantages:

* Less sort work space is required.
* The unload/load runs faster.
* If you hit a problem, you can rework that one partition rather than restarting the whole table space.

I tried this on a tablespace with 20 partitions. The job used to abend after running 9-10 hours for lack of sort space or some other trivial error; managing such a huge sort space is difficult and consumes a lot of time.

Working on one partition at a time makes life much easier.
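For illustration, one step of the step-per-partition job described above might look like the sketch below (job name, procedure parameters, and dataset names are made up; your site's DSNUPROC setup will differ):

```jcl
//* Sketch: unload a single partition per step.
//UNLDP01  EXEC DSNUPROC,SYSTEM=DSN,UID='UNLD.P01',UTPROC=''
//SYSPUNCH DD DSN=UNLOAD.TS1.P01.PUNCH,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(5,5),RLSE)
//SYSREC   DD DSN=UNLOAD.TS1.P01.DATA,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(500,100),RLSE)
//SYSIN    DD *
  UNLOAD TABLESPACE DB1.TS1 PART 1
/*
```

Repeating this step once per partition, or splitting the steps across several jobs, lets each partition unload and restart independently.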
anamikak

New User


Joined: 10 May 2006
Posts: 64
Location: Singapore

PostPosted: Wed Jun 07, 2006 4:04 pm

Are you using the BMC unload utility (UNLOAD PLUS)? If not, give it a try: it is much faster, and it uses parallel engines to unload data from the partitions. There is also an UNLOAD PLUS option called FILTERPART that filters out partitions if you qualify on the first column of the clustering index, which is responsible for the partitioning.
mahsug

New User


Joined: 01 Jul 2003
Posts: 15
Location: Phoenix , AZ

PostPosted: Thu Jun 22, 2006 12:23 pm

Hi,

Following Gurmeet and bracelet: try giving multiple UNLOAD statements in a single control card, one per partition, with a separate output dataset for each partition, and place the datasets on different volumes to avoid I/O contention. This makes effective use of partition parallelism.

That is, say:

//DD1  DD DSN=A.B.C1,DISP=......,VOL=SER=ABC1
//DD2  DD DSN=A.B.C2,DISP=......,VOL=SER=ABC2
//DD3  DD DSN=A.B.C3,DISP=......,VOL=SER=ABC3
//DD4  DD DSN=A.B.C4,DISP=......,VOL=SER=ABC4
//DD5  DD DSN=A.B.C5,DISP=......,VOL=SER=ABC5

and the unload control card would be like

UNLOAD TABLESPACE DB1.TS1 PART 1 PUNCHDDN SYSPUNCH UNLDDN DD1 EBCDIC
UNLOAD TABLESPACE DB1.TS1 PART 2 PUNCHDDN SYSPUNCH UNLDDN DD2 EBCDIC
UNLOAD TABLESPACE DB1.TS1 PART 3 PUNCHDDN SYSPUNCH UNLDDN DD3 EBCDIC
UNLOAD TABLESPACE DB1.TS1 PART 4 PUNCHDDN SYSPUNCH UNLDDN DD4 EBCDIC
UNLOAD TABLESPACE DB1.TS1 PART 5 PUNCHDDN SYSPUNCH UNLDDN DD5 EBCDIC


Hope this helps with your requirement.

Mahesh
Hanfur

Active User


Joined: 21 Jun 2006
Posts: 104

PostPosted: Fri Jun 30, 2006 4:35 pm

Why not use a high-performance unload utility like INZUTILB (DB2 High Performance Unload)?
krk_kumar1

New User


Joined: 14 Mar 2006
Posts: 39

PostPosted: Mon Jul 03, 2006 6:57 pm

Hi,
You can run the job 20-30 wide by specifying a predicate in your unload based on the partitioning key. I think this should solve your problem, unless you want the entire unload done in a single job.
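If you go this route with the IBM UNLOAD utility, note that row filtering is written as a WHEN clause on FROM TABLE rather than an SQL WHERE. A sketch of one of the parallel control cards, with a hypothetical table name, partitioning column PART_KEY, and made-up limit values:

```jcl
  UNLOAD TABLESPACE DB1.TS1
    FROM TABLE JOHN.MYTABLE
    WHEN (PART_KEY BETWEEN 1 AND 1000)
```

Each of the 20-30 parallel jobs would carry its own key range; on DB2 V7 you can also select partitions directly with PART n, which may be cheaper than evaluating a predicate.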
leela_krsna

New User


Joined: 09 Aug 2006
Posts: 3

PostPosted: Wed Aug 09, 2006 3:38 am

Did you try simultaneous partition-wise UNLOAD jobs?
Use UNLOAD with the PART n option to unload a specific partition,
or PART n1:n2 to unload a range of consecutive partitions (e.g., PART 6:10).

With this you can submit n different jobs,
each with SHRLEVEL CHANGE ISOLATION UR.
This reduces the elapsed unload time roughly in proportion to the number of simultaneous jobs you submit. Of course, the storage allocated simultaneously needs to be considered.

I unload a 25-partition, 1-billion-row table this way:

UNLOAD PART 1:5 - Job 1
UNLOAD PART 6:10 - Job 2
UNLOAD PART 11:15 - Job 3
UNLOAD PART 16:20 - Job 4
UNLOAD PART 21:23 - Job 5
UNLOAD PART 24:25 - Job 6

Trigger jobs 1, 2, and 6 at once.
Job 6 covers very large partitions, so I let it run its full length.
When 1 and 2 are done, bring in 3 and 4, then 5.
This way I complete the unload in around 3-4 hours.
All times are GMT + 6 Hours