how to increase the efficiency of processing

 
IBMMAINFRAMES.com Support Forums -> COBOL Programming
anand tr

New User


Joined: 12 Aug 2008
Posts: 41
Location: chennai

PostPosted: Wed Oct 15, 2008 5:06 pm    Post subject: how to increase the efficiency of processing

Hi,
I have two files: a flat file and a KSDS.
The flat file has around 90K records with an LRECL of 400. Each record contains a table of stock numbers that occurs up to a maximum of 30 times.
The objective is to create a separate output record for each stock number, along with the other fields.
e.g.:
input - abcd3101efg102hij103
The 4th field gives the number of stock numbers present, which is 3 in this case.
desired output in the KSDS -
abcd101efghij
abcd102efghij
abcd103efghij
What I have done is: read the flat file, move the required fields into the key field of the KSDS (which I have opened in I-O mode), and read the KSDS on that key. If the key is found, I update the record (using REWRITE); otherwise I WRITE a new record.
With this method I do get the desired output, but the run takes around 20-25 minutes, which I feel is huge. Could anyone suggest how I can increase the efficiency?
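The read-then-REWRITE-or-WRITE logic described here might look roughly like this (a sketch only; the file, record, and WS- names are invented for illustration, since the original program was not posted):

```cobol
      * Sketch of the per-record logic described above.
      * FLAT-FILE, KSDS-FILE, and all WS- names are illustrative.
           PERFORM UNTIL END-OF-FLAT
               READ FLAT-FILE INTO WS-INPUT-REC
                   AT END SET END-OF-FLAT TO TRUE
                   NOT AT END
                       PERFORM VARYING WS-IX FROM 1 BY 1
                               UNTIL WS-IX > WS-STOCK-COUNT
      *                    Build the KSDS key from the input fields
                           MOVE WS-STOCK-NO (WS-IX) TO KSDS-KEY
                           READ KSDS-FILE
                               INVALID KEY
                                   WRITE KSDS-REC
                               NOT INVALID KEY
                                   REWRITE KSDS-REC
                           END-READ
                       END-PERFORM
               END-READ
           END-PERFORM
```

Note that each iteration issues a random READ against the KSDS, so a 90K-record input with up to 30 stock numbers per record can mean far more than 90K VSAM I/Os.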

expat

Global Moderator


Joined: 14 Mar 2007
Posts: 8593
Location: Back in jolly old England

PostPosted: Wed Oct 15, 2008 5:11 pm    Post subject:

Think about it: 90,000 records in 20 minutes is about 4,500 records per minute, which is about 75 records per second.

You read one file, extract information from it, perform a read on a second file, check the result, determine further processing based on that check, and then perform that processing -- 75 times a second.

What's so huge about the timescales here?
Robert Sample

Global Moderator


Joined: 06 Jun 2008
Posts: 7904
Location: Bellevue, IA

PostPosted: Wed Oct 15, 2008 5:12 pm    Post subject:

Look at the definition of the VSAM file, look at the CI and CA splits, check the JCL to ensure the VSAM buffering is high enough to support your requirements, run the program through STROBE or another run-time analysis tool, and accept that 20-25 minutes isn't necessarily a bad amount of time for the job. With the limited amount of information you've provided, all we can give are generic answers.
anand tr

New User


Joined: 12 Aug 2008
Posts: 41
Location: chennai

PostPosted: Wed Oct 15, 2008 5:21 pm    Post subject:

Hi expat,
It's true that 75 records a second is not bad efficiency.
But during the test run I am pointing to a single flat file; the actual requirement is to point to a GDG base that has around 90 generations.
Hence I am worried about meeting this requirement. :(
expat

Global Moderator


Joined: 14 Mar 2007
Posts: 8593
Location: Back in jolly old England

PostPosted: Wed Oct 15, 2008 5:51 pm    Post subject:

Taking a second look at this, and considering the points raised by Robert, I would certainly make sure that the input file is sorted efficiently by the KSDS key to ensure that the same CI is not loaded, updated, overwritten in storage and then loaded and updated once again.

Increasing the buffer numbers may help or may extend the elapsed time, as I see that your program reads a record and creates up to 30 KSDS keys. These keys could theoretically be in a different CI from each other, and would need a BUFND of at least 30, which considerably increases the amount of virtual storage demanded by the job, which may then in turn cause paging in/out to occur. It is a fine line between the right number of buffers and too many / too few buffers and the impact that this has.

Maybe it might be better to create all of the KSDS keys in one program, sort them by absolute key value and then perform the updates / inserts. This way you will be certain that no CI will ever be overwritten in storage and then reloaded at a later time.

As suggested, a run-time analysis tool would certainly give you a lot more information.
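One way to sketch the split expat describes -- extract the keys, sort into absolute key order, then apply the updates (the program name, dataset names, key position/length, and space figures are all invented for illustration):

```jcl
//* Step 1: EXTRKEYS (hypothetical program) reads the flat file and
//*         writes one record per stock number to a work file
//STEP010 EXEC PGM=EXTRKEYS
//FLATIN   DD DSN=MY.FLAT.FILE,DISP=SHR
//KEYOUT   DD DSN=&&KEYS,DISP=(NEW,PASS),UNIT=SYSDA,
//            SPACE=(CYL,(10,5)),DCB=(RECFM=FB,LRECL=80)
//* Step 2: sort the generated records into KSDS key sequence
//STEP020 EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=&&KEYS,DISP=(OLD,PASS)
//SORTOUT  DD DSN=MY.SORTED.KEYS,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(10,5))
//SYSIN    DD *
  SORT FIELDS=(1,16,CH,A)
/*
//* Step 3: an update program (or REPRO/SORT) applies the sorted
//*         records to the KSDS in key sequence
```

Because the updates then arrive in key order, each CI is touched in one contiguous burst rather than being re-read repeatedly.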
Robert Sample

Global Moderator


Joined: 06 Jun 2008
Posts: 7904
Location: Bellevue, IA

PostPosted: Wed Oct 15, 2008 6:12 pm    Post subject:

Quote:
If it's true that we are here to help others,
then what exactly are the others here for ?
Hey, expat, is the answer "buying rounds"?

I agree with expat -- this is one of those cases where splitting the processing may speed up the overall throughput. I hope the CI size on the VSAM file is pretty small -- with this kind of processing you really don't want a big CI size. And how many records are in each generation of the GDG? That could be key.

One concern, though, is whether the processing plan is to actually run through the entire GDG base every time. If so, with 90 generations, you're duplicating an awful lot of processing for records already in the file! Surely a better design can be developed.
expat

Global Moderator


Joined: 14 Mar 2007
Posts: 8593
Location: Back in jolly old England

PostPosted: Wed Oct 15, 2008 6:22 pm    Post subject:

Robert Sample wrote:
Quote:
If it's true that we are here to help others,
then what exactly are the others here for ?
Hey, expat, is the answer "buying rounds"?

Certainly one that I would consider. :lol:
anand tr

New User


Joined: 12 Aug 2008
Posts: 41
Location: chennai

PostPosted: Wed Oct 15, 2008 6:33 pm    Post subject:

Yes expat, you are right. For one particular record read from the flat file I may get up to 30 keys.
As per your suggestion, my understanding is: read the flat file, generate the keys and write them to another flat file, then in the next step sort on the keys, followed by a REPRO into the KSDS.
Am I right?
I also traced the flow of the program using Xpediter, and the flow looked fine as far as I could tell.
So, moving forward, can you suggest a method to determine the CI size? Also, could you please suggest where to include the BUFNO?
anand tr

New User


Joined: 12 Aug 2008
Posts: 41
Location: chennai

PostPosted: Wed Oct 15, 2008 6:37 pm    Post subject:

Hi Robert,
Quote:
And how many records in each generation of the GDG?
Almost all generations have around 90K to 100K records.
expat

Global Moderator


Joined: 14 Mar 2007
Posts: 8593
Location: Back in jolly old England

PostPosted: Wed Oct 15, 2008 6:38 pm    Post subject:

If you do not need the data that currently exists on the KSDS as input to your file updates, then that should work.

However, to cut processing even more, why not sort directly out to the KSDS?
Robert Sample

Global Moderator


Joined: 06 Jun 2008
Posts: 7904
Location: Bellevue, IA

PostPosted: Wed Oct 15, 2008 6:44 pm    Post subject:

The IDCAMS LISTCAT command for the VSAM cluster will tell you the CI size as well as the splits. In your batch JCL, on the VSAM DD statement, include AMP=('BUFND=??,BUFNI=??') to set the data and index component buffers. For the flat file, setting BUFNO=30 might speed up processing as well -- at a cost of memory (hopefully your machine isn't memory constrained).
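Put together, these suggestions might look like the following (the dataset names are placeholders, and the buffer counts are examples to be tuned, not recommendations):

```jcl
//* Report CI size and CI/CA splits for the cluster
//LISTC   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  LISTCAT ENTRIES(MY.KSDS.CLUSTER) ALL
/*
//* In the application step: extra VSAM buffers on the KSDS DD,
//* and extra QSAM buffers on the flat-file DD
//KSDSDD   DD DSN=MY.KSDS.CLUSTER,DISP=SHR,
//            AMP=('BUFND=10,BUFNI=5')
//FLATDD   DD DSN=MY.FLAT.FILE,DISP=SHR,DCB=BUFNO=30
```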
anand tr

New User


Joined: 12 Aug 2008
Posts: 41
Location: chennai

PostPosted: Wed Oct 15, 2008 6:50 pm    Post subject:

The job has to run daily, and I need the entire month's records in the KSDS, so I am specifying DISP=MOD.
So if today's data contains a duplicate key, I guess I won't be able to write (update) into the KSDS. In that case, should I go for a COBOL program again?
Please correct me if I am wrong.
expat

Global Moderator


Joined: 14 Mar 2007
Posts: 8593
Location: Back in jolly old England

PostPosted: Wed Oct 15, 2008 6:53 pm    Post subject:

DISP=SHR will suffice for a KSDS, irrespective of the operation being performed.

If you mean that the key already exists and you want to update the record: I believe you can still replace a record for an existing key using SORT.
anand tr

New User


Joined: 12 Aug 2008
Posts: 41
Location: chennai

PostPosted: Wed Oct 15, 2008 7:40 pm    Post subject:

Quote:
Do you mean if the key already exists and you want to update the record, I believe that you can still replace the record if the same key exists using SORT

Yes.
Expat, can you please elaborate on this a bit?
I guess I can't just give SORT FIELDS=COPY.
(The KSDS will already hold the previous days' data. Today's data might have the same keys, and a mere COPY into the KSDS would throw an error code U016.)
Should I include a condition like SUM FIELDS?
Terry Heinze

JCL Moderator


Joined: 14 Jul 2008
Posts: 1238
Location: Richfield, MN, USA

PostPosted: Wed Oct 15, 2008 11:26 pm    Post subject:

Agree with everything so far, and in addition:
If your "not found" read attempts exceed your "found" attempts by quite a bit, attempt to WRITE a new record first instead of looking for an existing one. That would save a little I/O, but the other suggestions are better.
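Terry's write-first idea can be sketched like this, assuming the KSDS is in random access with a FILE STATUS field (the names are illustrative; status '22' is the duplicate-key code for VSAM indexed files):

```cobol
      * Attempt the WRITE first; if the key already exists
      * (file status '22'), fall back to a REWRITE of the record.
      * KSDS-REC and WS-KSDS-STATUS are illustrative names.
           WRITE KSDS-REC
               INVALID KEY
                   IF WS-KSDS-STATUS = '22'
                       REWRITE KSDS-REC
                   END-IF
           END-WRITE
```

When most keys are new, this does one I/O per record instead of a READ followed by a WRITE.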
Back to top
View user's profile Send private message
anand tr

New User


Joined: 12 Aug 2008
Posts: 41
Location: chennai

PostPosted: Thu Oct 16, 2008 10:00 am    Post subject:

Can anyone suggest how I can go ahead with writing to / updating (in the case of duplicate keys) the KSDS using SORT?