Large DB2 Table Performance

 
calspach

New User


Joined: 05 May 2006
Posts: 32
Location: USA

Posted: Wed Aug 18, 2010 7:27 am    Post subject: Large DB2 Table Performance

I am working on a project and need some assistance. We are developing a new set of DB2 tables that are very large. The parent table has 225 columns, the main child has 210, and an additional five child tables have between 10 and 15 columns each. All of this data used to reside in a single 9500-byte VSAM record.

It was decided that, wherever possible, we would keep the programs intact, leaving the 9500-byte record as their input. To do this, however, there are various parts of our system where we will need to "de-convert" the table structure back into the VSAM record in order to recreate the input file. The table holds a total of about 300 million records. We have to create input files of anywhere between 100,000 and 10 million records, using various selection criteria, and we are having a very difficult time doing this efficiently.

Is there anyone out there with experience in this type of thing who has some thoughts on how we can do this more efficiently?
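For concreteness, here is a minimal sketch of what one such extract cursor might look like. The table and column names are hypothetical (ACCT as the parent, ACCT_DTL as the main child, joined on ACCT_ID), and the real tables obviously carry far more columns:

Code:
  -- One cursor per extract; the selection criteria sit in the WHERE
  -- clause so DB2 does the filtering, and the ORDER BY follows the
  -- (hypothetical) clustering index so no sort is needed.
  DECLARE EXTRACT_CSR CURSOR FOR
    SELECT A.ACCT_ID, A.ACCT_TYPE, A.OPEN_DT,
           D.SEQ_NO, D.AMT, D.STATUS_CD
      FROM ACCT     A
      JOIN ACCT_DTL D ON D.ACCT_ID = A.ACCT_ID
     WHERE A.STATUS_CD = :WS-STATUS   -- one of the selection criteria
     ORDER BY A.ACCT_ID, D.SEQ_NO
       FOR FETCH ONLY
      WITH UR                         -- uncommitted read for extracts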

dick scherrer

Site Director


Joined: 23 Nov 2006
Posts: 19270
Location: Inside the Matrix

Posted: Wed Aug 18, 2010 8:50 am

Hello,

Quote:
It was decided that, wherever possible, we would keep the programs intact, leaving the 9500-byte record as their input.
May need to be re-visited.

Quote:
We have to create input files of anywhere between 100,000 and 10 million records, using various selection criteria.
How many of these are planned?

Quote:
We are having a very difficult time doing this efficiently.
Needs clarification. . .
calspach

New User


Joined: 05 May 2006
Posts: 32
Location: USA

Posted: Wed Aug 18, 2010 7:24 pm

dick scherrer wrote:
Hello,

Quote:
It was decided that, wherever possible, we would keep the programs intact, leaving the 9500-byte record as their input.
May need to be re-visited.
Yes, that is one of the options we are looking at, but it would mean recoding around 250-300 programs. We would really like to avoid that if possible.
Quote:
We have to create input files of anywhere between 100,000 and 10 million records, using various selection criteria.
How many of these are planned?
We currently have this in 10 places in our system, each with a different selection of records. Each extract then feeds a line of downstream jobs, so one extract flows through around 20-40 jobs.

Quote:
We are having a very difficult time doing this efficiently.
Needs clarification. . .
The 100,000-record extract takes about an hour; the 10-million-record extract takes around 30 hours. The 100,000-record extract will need to run in our nightly batch cycle in three different places, and the additional 3 hours would not be acceptable within the time frame we have to finish the cycle. The 10-million-record extract will run in our financial cycles, which run on a weekend, but the 30-hour increase in run time is still not acceptable.

We have looked at all the indexes and at the ordering and clustering of our tables; all of this seems to be correct.
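For reference, one way to verify the access path is an EXPLAIN along these lines (the table names are the same hypothetical ones as in the sketch above). In PLAN_TABLE, ACCESSTYPE 'R' means a tablespace scan; 'I' with a low MATCHCOLS means an index is used but few of its columns actually match:

Code:
  -- Capture the access path for the suspect query under QUERYNO 100.
  EXPLAIN PLAN SET QUERYNO = 100 FOR
    SELECT A.ACCT_ID, D.SEQ_NO
      FROM ACCT A
      JOIN ACCT_DTL D ON D.ACCT_ID = A.ACCT_ID
     WHERE A.STATUS_CD = 'A';

  -- Read the resulting access path back out of PLAN_TABLE.
  SELECT QBLOCKNO, PLANNO, TNAME, ACCESSTYPE, MATCHCOLS,
         INDEXONLY, PREFETCH, METHOD
    FROM PLAN_TABLE
   WHERE QUERYNO = 100
   ORDER BY QBLOCKNO, PLANNO;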
dbzTHEdinosauer

Global Moderator


Joined: 20 Oct 2006
Posts: 6966
Location: porcelain throne

Posted: Wed Aug 18, 2010 7:31 pm

how are you creating your 'files'?

program, or db2 utility?

do all of your batch programs have their own vsam i/o, or is this modularized (i.e., call a module to perform the i/o)?
dick scherrer

Site Director


Joined: 23 Nov 2006
Posts: 19270
Location: Inside the Matrix

Posted: Wed Aug 18, 2010 7:32 pm

Hello,

Quote:
the 10-million-record extract takes around 30 hours.
Hmmm. . . If the "daily" takes more than 24 hours to run, how long will the "monthly" take? :) How long does it take to unload the base/parent table?

Suggest you work with your DBA(s) to determine whether the extract processes have really inefficient queries or whether there are missing key definitions keeping the process from running acceptably.

I pull more than 5 million rows in only a few minutes - during a regular day (not overnight or on a weekend. . .).

Learned long ago that when working with high volume, it is usually faster to do the work outside of SQL. . . fwiw.
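One common reading of "outside of SQL" here: pull the rows with the UNLOAD utility rather than a fetch loop in an application program, then do the selection and record assembly with flat-file tools. A bare-bones control statement, with database, tablespace, table, and column names hypothetical (the unloaded rows land in the SYSREC data set, and SYSPUNCH gets a generated LOAD statement describing the layout):

Code:
  -- UNLOAD reads the tablespace directly instead of fetching row by
  -- row; the WHEN clause pushes the selection into the utility.
  UNLOAD TABLESPACE MYDB.MYTS
    FROM TABLE MYSCHEMA.ACCT
      WHEN (STATUS_CD = 'A')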
dbzTHEdinosauer

Global Moderator


Joined: 20 Oct 2006
Posts: 6966
Location: porcelain throne

Posted: Wed Aug 18, 2010 7:39 pm

to get advice on how to make 'your process' efficient, you would need to talk to someone who also went at this type of conversion back-asswards.

how far along are you in this 'project'? is there no time to step back and design a more rational process?

as Dick said, dumping millions of rows usually takes minutes (or a small fraction of an hour), not hours.
sqlcode1

Active Member


Joined: 08 Apr 2010
Posts: 578
Location: USA

Posted: Wed Aug 18, 2010 8:39 pm

calspach,
Quote:
Yes, that is one of the options that we are looking at, but it would mean recoding around 250-300 programs. Would really like to avoid that if possible.
If you are not planning to change all, or even some, of the programs, then why create DB2 tables at all?
Quote:
The 100,000-record extract takes about an hour; the 10-million-record extract takes around 30 hours. The 100,000-record extract will need to run in our nightly batch cycle in three different places, and the additional 3 hours would not be acceptable within the time frame we have to finish the cycle. The 10-million-record extract will run in our financial cycles, which run on a weekend, but the 30-hour increase in run time is still not acceptable.

Create a partitioned table and use partition-level unloads to reduce elapsed (clock) time. This comes at a small increase in CPU cost, but it will save clock time.

From the information you have provided, I would do a full table unload at the beginning of the cycle and prepare one, two, or three record layouts/files that could serve the data at all 10 places, then let those files flow through the system.
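A sketch of the partition-level idea: run one UNLOAD job per partition in parallel, so elapsed time drops to roughly that of the largest partition. The database, tablespace, table, and column names are hypothetical:

Code:
  -- Job 1 of N; each parallel job unloads one partition of the
  -- tablespace, with the selection pushed into the WHEN clause.
  UNLOAD TABLESPACE MYDB.MYTS PART 1
    FROM TABLE MYSCHEMA.ACCT
      WHEN (STATUS_CD = 'A')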

Thanks,