Limit of working-storage and performance


IBM Mainframe Forums -> COBOL Programming
pnkumar (New User)
Posted: Thu Jul 02, 2009 5:40 pm

Hi,

In my program I need to define a working-storage table with around 1,680 occurrences of a 2,800-byte entry. In the PROCEDURE DIVISION I need to read a file and move its data into the working-storage table once; later I need to search the table for a particular record, and some of its data will be moved to other working-storage variables.
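Something like this sketch is what I mean (the field names here are made up for illustration; only the 2,800-byte entry length and the 1,680 occurrences are from my actual requirement):

Code:

       WORKING-STORAGE SECTION.
      * hypothetical layout: 1,680 entries of 2,800 bytes each
       01  WS-DEPT-TABLE.
           05  WS-DEPT-ENTRY           OCCURS 1680 TIMES
                                       INDEXED BY DEPT-IDX.
               10  WS-DEPT-KEY.
                   15  WS-ORG          PIC X(03).
                   15  WS-DEPT-NO      PIC X(03).
               10  WS-DEPT-DATA        PIC X(2794).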
My questions are:
1. Can I get a working-storage limit error?
2. Can defining the table this way, and moving the data into it, cause a performance problem (a long time to run the program)? Could you please let me know how to measure the program's performance when running with a large volume of data?

I know this can be done in different ways, but we are planning to change to the above solution, if feasible, for the following reason. Currently the process is: based on the key (for example, ORG and DEPT-NO), read the DEPT file, get the record, and do some processing for a particular employee. For every key change we need to read the DEPT file again, which causes a very large number of DEPT file reads (because of a flaw in how the key was originally defined; records are already being processed in production, so we are not able to rearrange them) and hence a performance problem.

Could anyone please help me with this? Thanks in advance for your help.
Robert Sample (Global Moderator)
Posted: Thu Jul 02, 2009 5:53 pm

There is a link to manuals at the top of the page. If you bring up the COBOL Language Reference and go to Appendix B, you will find all the compiler limits are presented for your edification.
dbzTHEdinosauer (Global Moderator)
Posted: Thu Jul 02, 2009 6:02 pm

See the OCCURS clause.

Read about SEARCH.

You will find that you should sort your input before loading it into your table.

This link is for all the COBOL documents.
Find your version of COBOL and look in the Language Reference.
Normally, Appendix B contains the compiler limits.
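If you want the binary search (SEARCH ALL), the table needs an ASCENDING KEY clause and must be loaded in that key order. Roughly like this (names are made up, not from your program):

Code:

      * table declared for SEARCH ALL: entries must be in key order
       01  WS-DEPT-TABLE.
           05  WS-DEPT-ENTRY      OCCURS 1680 TIMES
                                  ASCENDING KEY IS WS-DEPT-KEY
                                  INDEXED BY DEPT-IDX.
               10  WS-DEPT-KEY    PIC X(06).
               10  WS-DEPT-DATA   PIC X(2794).
       01  WS-SEARCH-KEY          PIC X(06).
       01  WS-WORK-AREA           PIC X(2794).

      * binary search instead of a file READ
           SEARCH ALL WS-DEPT-ENTRY
               AT END
                   PERFORM DEPT-NOT-FOUND
               WHEN WS-DEPT-KEY (DEPT-IDX) = WS-SEARCH-KEY
                   MOVE WS-DEPT-DATA (DEPT-IDX) TO WS-WORK-AREA
           END-SEARCH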
pnkumar (New User)
Posted: Thu Jul 02, 2009 6:05 pm

Hi Robert,

Thanks for the reply. I found it; there is no problem with the working-storage limit for my requirement (1,680 occurrences of 2,800 bytes is 4,704,000 bytes, about 4.5 MB, well within the compiler limit).

Can I have any reply on question 2?

Thanks,
Kumar
pnkumar (New User)
Posted: Thu Jul 02, 2009 6:43 pm

Hi Dick Brenholtz,

Thanks for your reply. I checked the help on the OCCURS clause and the SEARCH verb, but I could not find details on how performance is affected by moving a large volume of data into a working-storage table and searching that large table, instead of randomly reading the file to get a particular record's details on each key change. Currently we are experiencing a performance problem because of the large number of reads needed to get the details from the file. The following is an extract from one of our technical people on this.

The statistics for the first segment-posting jobs are as follows:

DMCP - OPENED: DYN INPUT      9924 READS, 0 WRITE
DEPT - OPENED: DYN INPUT  25559843 READS, 0 WRITE
DEMX - OPENED: DYN INPUT    183652 READS, 0 WRITE
DMCR - OPENED: DYN INPUT    391608 READS, 0 WRITE

The DEPT file is read more than 25 million times in each segment of posting, which makes the program take far longer to execute; the same is the case with DMCR.

Thanks,
Kumar
dbzTHEdinosauer (Global Moderator)
Posted: Thu Jul 02, 2009 7:16 pm

number-of-reads minus number-of-records = the number of I/Os potentially saved by loading the file into a working-storage table. With your statistics, that is 25,559,843 DEPT reads against roughly 1,600 DEPT records: on the order of 25.5 million reads avoided.

Each COBOL module can contain its own working storage: if you have 1 main module and 3 sub-modules (CALLed), you have 4 times the working storage of a single module.

I am not sure which is your driver file: put that one in the main module's storage. You can then have 3 sub-modules, each containing a large table to hold one of the other files, plus the code necessary to return a pointer (to the occurrences involved in that sub-module) [not the data] to the main module, as required by the driver file.

Using pointers in the main module to address items in the sub-modules (via LINKAGE) would mean that the data is only moved twice:
from file to table,
and from table to print.

Of course that would mean a large memory requirement for the load module, but figure out how many I/Os you would save. I have done this in the past, and even with a lot of page swaps the savings were substantial.

Basic Flow:
Code:

Main module
  housekeeping
    load primary (driver) file into its table
    CALL Sub-module1 with function "load table"
      Sub-module1 loads its file into its table
      and returns to the main module
    CALL Sub-modules 2, 3, 4 the same way
  primary process
    find first driver item
    CALL the necessary sub-modules with function "find",
      which SET a pointer to the items required by the driver item
    SET ADDRESS OF the linkage areas (in the main module)
      to the pointers returned by the sub-modules
    create report
    loop
Yes, the main module can address data in a sub-module.
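A bare-bones sketch of the pointer hand-off (all names are made up): the sub-module SETs a pointer to the matching table entry, and the main module maps a LINKAGE record onto it.

Code:

      * ----- in a sub-module -----
       LINKAGE SECTION.
       01  LS-SEARCH-KEY      PIC X(06).
       01  LS-ENTRY-PTR       POINTER.
       PROCEDURE DIVISION USING LS-SEARCH-KEY LS-ENTRY-PTR.
      *    SEARCH ALL leaves DEPT-IDX on the matching entry;
      *    hand back the entry's address, not the data
           SET LS-ENTRY-PTR TO ADDRESS OF WS-DEPT-ENTRY (DEPT-IDX)
           GOBACK.

      * ----- in the main module -----
       WORKING-STORAGE SECTION.
       01  WS-SEARCH-KEY      PIC X(06).
       01  WS-ENTRY-PTR       POINTER.
       LINKAGE SECTION.
       01  LK-DEPT-ENTRY.
           05  LK-DEPT-KEY    PIC X(06).
           05  LK-DEPT-DATA   PIC X(2794).

           CALL 'SUBMOD1' USING WS-SEARCH-KEY WS-ENTRY-PTR
      *    map the linkage record onto the sub-module's entry
           SET ADDRESS OF LK-DEPT-ENTRY TO WS-ENTRY-PTR
      *    LK-DEPT-KEY / LK-DEPT-DATA now reference that entry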
Bill O'Boyle (CICS Moderator)
Posted: Thu Jul 02, 2009 7:16 pm

If you can determine ahead of time the amount of storage you'll need for your table, you could pass that figure to the COBOL program in a simple parm-file. You could then place your table definition in LINKAGE and use the LE callable service CEEGTST to obtain the correct amount of storage, based upon the parm-file data.

By dynamically allocating the storage, you won't have to over-allocate WS or (and this is bound to happen) under-allocate, which would cause the program to be unable to load all the table entries.

When the STEP or JOB has completed, the dynamic storage is automatically freed by operating-system routines.
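A rough sketch of the CEEGTST call (the data names and the 10,000-entry upper bound are illustrative; the feedback-code layout follows the LE condition token, whose first halfword is the severity):

Code:

       WORKING-STORAGE SECTION.
       01  WS-HEAP-ID         PIC S9(9) BINARY VALUE 0.
       01  WS-STG-SIZE        PIC S9(9) BINARY.
       01  WS-ENTRY-COUNT     PIC S9(9) BINARY.
       01  WS-TABLE-PTR       POINTER.
       01  WS-FC.
           05  WS-FC-SEV      PIC S9(4) BINARY.
           05  WS-FC-MSGNO    PIC S9(4) BINARY.
           05  FILLER         PIC X(08).
       LINKAGE SECTION.
       01  LK-DEPT-TABLE.
           05  LK-DEPT-ENTRY  PIC X(2800)
               OCCURS 1 TO 10000 TIMES
               DEPENDING ON WS-ENTRY-COUNT.

      *    WS-ENTRY-COUNT comes from the parm-file
           COMPUTE WS-STG-SIZE = WS-ENTRY-COUNT * 2800
           CALL 'CEEGTST' USING WS-HEAP-ID, WS-STG-SIZE,
                                WS-TABLE-PTR, WS-FC
           IF WS-FC-SEV = ZERO
               SET ADDRESS OF LK-DEPT-TABLE TO WS-TABLE-PTR
           ELSE
               DISPLAY 'CEEGTST FAILED, MSG ' WS-FC-MSGNO
           END-IF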

HTH....

Regards,
pnkumar (New User)
Posted: Fri Jul 03, 2009 4:18 pm

Hi Dick Brenholtz,

Thanks for your detailed explanation. I would like to elaborate on the current process. The driver file is DMCR, a parameter file containing ORG and DEPT. DMEP, the employee file, contains employee details such as the employee number (for example 1010000984, where the first 3 digits, 101, indicate the ORG and the last 3 digits, 984, indicate the DEPT). With some logic we can derive the DEPT from the employee number: here, for example, the last three digits 984 correspond to DEPT 100, the last three digits 985 correspond to DEPT 101, and so on. Due to a flaw in creating the employee numbers and loading them into the DMEP file, which is a VSAM key-sequenced file, the records are as follows:

ORG  EMP-NO      DEPT
101  1010000984  100
101  1010000985  101
101  1010001984  100
101  1010001985  101

As the data above shows, because ORG and EMP-NO alone form the key, the DEPT numbers are not in sorted order. This is already in production, so we cannot change the numbering system for the entire system now.

Now, while processing the first record from DMEP, 1010000984, we read DMCR with key ORG 101 and DEPT 100; from that record we read another file, DEPT, with ORG 101, DEPT 100, and DESG 'Developer'. If found, we do further processing; otherwise we go on to the next employee number and continue the same process.

Because of the flaw in the employee numbering system, for the 3rd record the program reads the DEPT file for the DEPT 100 details all over again, causing the very large number of DEPT file reads. Instead of this, as I mentioned above, the plan is to load all the records of the DEPT file into a table and, for each employee number, search the table by ORG and DEPT to get the DEPT details for further processing.

So the question here is: by loading all the DEPT records (around 1,600) into a table once and searching the table, instead of doing an I/O operation on the DEPT file each time, can we improve the execution time of the program? A sketch of the load-and-search pattern we are considering is below.
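This is roughly what we have in mind, using a table like the one sketched earlier in the thread (the file, record, and paragraph names are made up; it assumes the DEPT file arrives sorted by ORG+DEPT, per the earlier advice):

Code:

       WORKING-STORAGE SECTION.
       01  WS-ENTRY-COUNT     PIC 9(04) COMP VALUE 0.
       01  WS-EOF-FLAG        PIC X(01) VALUE 'N'.
           88  DEPT-EOF       VALUE 'Y'.

      * load the DEPT file into the table once, at start-up
           OPEN INPUT DEPT-FILE
           PERFORM UNTIL DEPT-EOF OR WS-ENTRY-COUNT >= 1680
               READ DEPT-FILE
                   AT END
                       SET DEPT-EOF TO TRUE
                   NOT AT END
                       ADD 1 TO WS-ENTRY-COUNT
                       MOVE DEPT-RECORD
                         TO WS-DEPT-ENTRY (WS-ENTRY-COUNT)
               END-READ
           END-PERFORM
           CLOSE DEPT-FILE

      * then, per employee: one in-memory lookup, no READ
           MOVE EMP-ORG-AND-DEPT TO WS-SEARCH-KEY
           SEARCH ALL WS-DEPT-ENTRY
               AT END
                   PERFORM DEPT-NOT-FOUND
               WHEN WS-DEPT-KEY (DEPT-IDX) = WS-SEARCH-KEY
                   PERFORM PROCESS-EMPLOYEE
           END-SEARCH

One caution: with a fixed OCCURS 1680, any unloaded trailing entries contain unpredictable data, which can mislead the SEARCH ALL binary search; either fill the unused entries with HIGH-VALUES after loading or define the table with OCCURS ... DEPENDING ON the loaded count.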

Thanks
Kumar