
Large DB2 Table Performance


IBM Mainframe Forums -> DB2
calspach (New User)
Joined: 05 May 2006 | Posts: 32 | Location: USA
Posted: Wed Aug 18, 2010 7:27 am

I am working on a project and need some assistance. We are developing a new set of very large DB2 tables. The parent table has 225 columns, the main child table has 210, and there are an additional 5 child tables with between 10 and 15 columns each. All of this used to reside in a 9500-byte VSAM record.

It was decided that wherever possible, we would keep the programs intact, leaving as input the 9500-byte record. However, in order to do this, there are various parts of our system where it will be necessary to "de-convert" the table structure back into the VSAM record in order to recreate the input file. We have a total of about 300 million records on this table as well. We have to create input files of anywhere between 100,000 and 10 million records, using various selection criteria. We are having a very difficult time doing this in an efficient manner.
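To give a picture of what the "de-convert" amounts to, here is a much-simplified sketch (made-up table, column, and key names, and a distributed Python client rather than our actual batch programs): one ordered join over the parent and the main child, with each row re-flattened into a fixed-length record, rather than a separate lookup per record.

```python
# Much-simplified sketch: table/column/key names are made up, and this is a
# distributed Python client, not our actual batch job.
import ibm_db_dbi

conn = ibm_db_dbi.connect(
    "DATABASE=MYDB;HOSTNAME=myhost;PORT=50000;PROTOCOL=TCPIP;UID=user;PWD=pwd;",
    "", "")
cur = conn.cursor()

# One ordered join over parent + main child instead of a lookup per record.
cur.execute("""
    SELECT P.ACCT_KEY, P.COL_A, P.COL_B, C.COL_X, C.COL_Y
      FROM PARENT_TBL P
      JOIN CHILD_MAIN C ON C.ACCT_KEY = P.ACCT_KEY
     WHERE P.ACCT_KEY BETWEEN ? AND ?   -- selection criteria for one extract
     ORDER BY P.ACCT_KEY                -- same order as the clustering index
""", ("0000000001", "0000099999"))

with open("extract.dat", "w") as out:
    while rows := cur.fetchmany(1000):            # blocked fetch
        for key, col_a, col_b, col_x, col_y in rows:
            # Re-flatten into a fixed-length record (9500 bytes in the real
            # layout; only a handful of fields shown here).
            out.write(f"{key:<10}{col_a:<20}{col_b:<20}{col_x:<20}{col_y:<20}\n")
```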

Is there anyone out there with experience in this type of thing who would have some thoughts on how we can do this more efficiently?
dick scherrer (Moderator Emeritus)
Joined: 23 Nov 2006 | Posts: 19244 | Location: Inside the Matrix
Posted: Wed Aug 18, 2010 8:50 am

Hello,

Quote:
It was decided that wherever possible, we would keep the programs intact, leaving as input the 9500-byte record.
May need to be revisited.

Quote:
We have to create input files of anywhere between 100,000 and 10 million records, using various selection criteria.
How many of these are planned?

Quote:
We are having a very difficult time doing this in an efficient manner.
Needs clarification. . .
calspach (New User)
Joined: 05 May 2006 | Posts: 32 | Location: USA
Posted: Wed Aug 18, 2010 7:24 pm

dick scherrer wrote:
Hello,

Quote:
It was decided that wherever possible, we would keep the programs intact, leaving as input the 9500-byte record.
May need to be revisited.
Yes, that is one of the options we are looking at, but it would mean recoding around 250-300 programs. We would really like to avoid that if possible.
Quote:

Quote:
We have to create input files of anywhere between 100,000 and 10 million records, using various selection criteria.
How many of these are planned?
We currently have this in 10 places in our system, each for a different selection of records. Each extract then feeds a string of downstream jobs, so one extract would be processed through around 20-40 jobs.

Quote:

Quote:
We are having a very difficult time doing this in an efficient manner.
Needs clarification. . .
The 100,000-record extract takes about an hour, and the 10 million takes around 30 hours. The 100,000-record extract will need to run in our nightly batch cycle in three different places, and the additional 3 hours would not be acceptable within the time frame we have to complete the cycle. The 10 million will run in our financial cycles, which run on a weekend, but the 30-hour increase in run time is still not acceptable.

We have looked at all the indexes and at the ordering and clustering of our tables; all of this appears to be correct.
dbzTHEdinosauer (Global Moderator)
Joined: 20 Oct 2006 | Posts: 6966 | Location: porcelain throne
Posted: Wed Aug 18, 2010 7:31 pm

How are you creating your 'files': with a program, or with a DB2 utility?

Do all of your batch programs have their own VSAM I/O, or is this modularized (i.e., they call a module to perform the I/O)?
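Roughly what I mean by modularized (a hypothetical sketch with made-up names, not your code): every program calls one record-source module, so the flat-file implementation can be swapped for a DB2-backed one without touching the callers.

```python
# Rough illustration (hypothetical names): if every program calls one I/O
# module, that module can be re-pointed at DB2 and the callers stay untouched.
import ibm_db_dbi


class VsamRecordSource:
    """Reads the original 9500-byte flat records from a file."""
    def __init__(self, path):
        self._f = open(path, "rb")

    def read(self):
        rec = self._f.read(9500)
        return rec if rec else None


class Db2RecordSource:
    """Rebuilds the same 9500-byte layout from the DB2 tables."""
    def __init__(self, conn_str, sql, params):
        self._cur = ibm_db_dbi.connect(conn_str, "", "").cursor()
        self._cur.execute(sql, params)

    def read(self):
        row = self._cur.fetchone()
        if row is None:
            return None
        # Real code would map all of the columns back into the fixed layout;
        # this just pads a few fields out to 9500 bytes for illustration.
        return "".join(f"{col!s:<25}" for col in row).ljust(9500).encode()


def process(source):
    """A caller only sees read(); it does not care where the records come from."""
    while (rec := source.read()) is not None:
        pass  # existing program logic works on the 9500-byte record
```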
dick scherrer (Moderator Emeritus)
Joined: 23 Nov 2006 | Posts: 19244 | Location: Inside the Matrix
Posted: Wed Aug 18, 2010 7:32 pm

Hello,

Quote:
the 10 million takes around 30 hours.
Hmmm. . . If it takes 24 hours to run the "daily", how long will it take to run the "monthly"? :) How long does it take to unload the base/parent table?

Suggest you work with your dba(s) to determine whether the extract processes have really inefficient queries or whether there are missing key definitions keeping the process from running acceptably.

I pull more than 5 million rows in only a few minutes - during a regular day (not overnight or on a weekend. . .).

Learned long ago that when working with high volume it is usually faster to do the work outside of SQL. . . fwiw.
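As an illustration of "outside of SQL" (a rough sketch with made-up names, from a distributed client rather than a batch job): keep the query itself trivial, fetch in large blocks, and apply the selection rules in the program so DB2 just streams rows.

```python
# Sketch only (hypothetical table/columns): keep the SQL simple and blocked,
# and do the selection logic in the program instead of in a complex WHERE.
import ibm_db_dbi

conn = ibm_db_dbi.connect(
    "DATABASE=MYDB;HOSTNAME=myhost;PORT=50000;PROTOCOL=TCPIP;UID=user;PWD=pwd;",
    "", "")
cur = conn.cursor()
cur.execute("SELECT ACCT_KEY, STATUS, BALANCE FROM PARENT_TBL ORDER BY ACCT_KEY")

kept = 0
with open("extract.dat", "w") as out:
    while rows := cur.fetchmany(5000):          # large blocked fetches
        for key, status, balance in rows:
            if status == "A" and balance > 0:   # selection done in the program
                out.write(f"{key:<10}{status}{balance:>15}\n")
                kept += 1
print(f"{kept} rows written")
```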
dbzTHEdinosauer (Global Moderator)
Joined: 20 Oct 2006 | Posts: 6966 | Location: porcelain throne
Posted: Wed Aug 18, 2010 7:39 pm

To get advice on how to make 'your process' efficient, you would need to talk to someone who also went at this type of conversion back-asswards.

How far along are you in this 'project'? Is there no time to step back and try to create a more rational process?

As Dick said, dumping millions of rows is something that usually takes minutes (or a small fraction of an hour), not hours.
sqlcode1 (Active Member)
Joined: 08 Apr 2010 | Posts: 577 | Location: USA
Posted: Wed Aug 18, 2010 8:39 pm

calspach,
Quote:
Yes, that is one of the options we are looking at, but it would mean recoding around 250-300 programs. We would really like to avoid that if possible.
If you are not planning to change all, or even some, of the programs, then why create DB2 tables at all?
Quote:
The 100,000-record extract takes about an hour, and the 10 million takes around 30 hours. The 100,000-record extract will need to run in our nightly batch cycle in three different places, and the additional 3 hours would not be acceptable within the time frame we have to complete the cycle. The 10 million will run in our financial cycles, which run on a weekend, but the 30-hour increase in run time is still not acceptable.

Create a partitioned table and use partition-level unloads to reduce elapsed time. This comes at a small increase in CPU cost but will save wall-clock time.
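Roughly the idea (a sketch with made-up key ranges standing in for partition boundaries, using a distributed client rather than the UNLOAD utility): each partition is pulled by its own worker in parallel, so elapsed time drops even though total CPU rises a little.

```python
# Sketch only: hypothetical key ranges stand in for partition boundaries.
# Each worker pulls one partition's rows concurrently; the real job would
# run one unload job/step per partition instead.
import ibm_db_dbi
from concurrent.futures import ThreadPoolExecutor

CONN_STR = "DATABASE=MYDB;HOSTNAME=myhost;PORT=50000;PROTOCOL=TCPIP;UID=user;PWD=pwd;"
PART_RANGES = [("0000000001", "0999999999"),
               ("1000000000", "1999999999"),
               ("2000000000", "2999999999")]


def unload_partition(part_no, lo, hi):
    # One connection and one output file per partition.
    cur = ibm_db_dbi.connect(CONN_STR, "", "").cursor()
    cur.execute(
        "SELECT ACCT_KEY, COL_A FROM PARENT_TBL "
        "WHERE ACCT_KEY BETWEEN ? AND ? ORDER BY ACCT_KEY", (lo, hi))
    with open(f"unload.part{part_no}.dat", "w") as out:
        while rows := cur.fetchmany(5000):
            for key, col_a in rows:
                out.write(f"{key:<10}{col_a:<20}\n")


with ThreadPoolExecutor(max_workers=len(PART_RANGES)) as pool:
    for n, (lo, hi) in enumerate(PART_RANGES, start=1):
        pool.submit(unload_partition, n, lo, hi)
```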

From the information you have provided, I would do a full table unload at the beginning of the cycle, prepare 1, 2, or 3 record layouts/files that can serve the data at all 10 different places, and let those files flow through the system.
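Something along these lines (file names and selection rules are placeholders): one sequential pass over the full unload feeds every extract file at once, instead of ten separate heavy queries.

```python
# Sketch only: one sequential pass over the full unload file feeds all of the
# extract files at once; the selection rules shown are placeholders.
RULES = {
    "extract_nightly.dat":   lambda rec: rec[100:101] == "A",
    "extract_financial.dat": lambda rec: rec[200:203] == "FIN",
    "extract_region9.dat":   lambda rec: rec[50:52] == "09",
}

outputs = {name: open(name, "w") for name in RULES}
with open("full_unload.dat") as unload:
    for rec in unload:                       # one read of the 300M records
        for name, keep in RULES.items():
            if keep(rec):
                outputs[name].write(rec)
for f in outputs.values():
    f.close()
```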

Thanks,
All times are GMT + 6 Hours