I need your suggestions/comments on increasing a VSAM file's record length. The present scenario is as below:
In our life insurance admin application we have a master file with 16,000-byte records. The file is allocated to CICS and all updates are done online only; a few report jobs read it in the batch window. It is a trailer/segment-based file. Every year a new trailer is added to each record, and some records have now reached the maximum record length, so policy processing is failing. The number of failing policies is increasing day by day.
This application was developed as part of a product. We thought of writing the new trailers to another file and changing the processing logic accordingly, but the way the I/O processing has been coded is very difficult to understand, as no documentation is available. There are also many long-running background transactions that update the master file. We tried a proof of concept on that approach, but had no luck.
So we are thinking of increasing the record length. Although it will mean a change across almost all the programs, that is doable from our side. We are thinking of making it 35,000 bytes, but we have no idea how much performance will degrade. What are the things we need to consider as far as performance is concerned? Will there be any other aspects we need to consider? Your comments/guidance are highly appreciated. We have approximately 150,000 records, and no new records will be inserted in future. Thank you!
Joined: 06 Jun 2008 Posts: 8188 Location: East Dubuque, Illinois, USA
Have you talked to your site support group? There are many issues -- for example, if the VSAM data set is accessed via LSR pool in CICS, increasing the record length may change which LSR pool can be used with the data set. Only someone in your site support group can determine whether or not the data set uses LSR and if so which LSR pool to use after the change.
@ Marso - We thought of that approach initially, but after a discussion with the business, they did not agree to delete the oldest year's data. It is the TAX trailer data that is added every year, and we need to retain the tax data for every year. Shifting the oldest trailer to a different file was another option, but after doing a proof of concept we did not get the expected results. So we are thinking of expanding this file. Thank you!
Joined: 30 Nov 2013 Posts: 585 Location: The Universe
Your proposed record size is too large. See the discussion of the RECORDSIZE parameter in the "DEFINE CLUSTER" chapter in the DFSMS AMS for Catalogs manual for your z/OS release.
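To spell out why 35,000 bytes runs into trouble (my reading of the limits; verify against the manual cited above): a non-spanned VSAM record must fit inside a single control interval, the largest CI size is 32,768 bytes, and each CI reserves 7 bytes of control information (a 4-byte CIDF plus one 3-byte RDF), so the largest non-spanned record is 32,761 bytes. The SPANNED attribute allows records larger than a CI, but CICS file control does not support spanned records, which rules that out for an online master file. A hedged IDCAMS sketch, with the data set name and key values as placeholders, not taken from this thread:

```
  /* Hypothetical DEFINE - name, key length/offset are examples */
  DEFINE CLUSTER (                                              -
         NAME(YOUR.MASTER.FILE)                                 -
         INDEXED                                                -
         KEYS(20 0)                                             -
         /* avg,max: 32761 is the non-spanned ceiling with a    */
         /* 32768-byte CI; RECORDSIZE(16000 35000) would need   */
         /* SPANNED, which CICS file control cannot use         */
         RECORDSIZE(16000 32761)                                -
         CONTROLINTERVALSIZE(32768)                             )
```

So if 35,000 bytes is truly required, a record-length increase alone cannot get there for a CICS-owned KSDS; something like the split-file design you already attempted, or a redesign of the trailer layout, would be needed.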
It is very difficult to quantify the performance impact of this proposed change; it depends very much on how the data is referenced. If a record is read singly, by its key, it is unlikely that any performance change will be observed. If the data is processed sequentially, however, and all the records have been expanded, the time required to process the data set will increase substantially, as will its storage requirements.

If you can make a business case for the change, the increased processing time and storage cost will presumably be accepted. Note, too, that probably all programs that process this data set will have to be changed; the cost of those changes must be part of the presentation justifying the business case.
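As a rough worst-case illustration of the sequential-processing and storage point, using the figures given in this thread (150,000 records, 16,000 bytes today, 35,000 bytes proposed) and assuming every record is written at the new maximum:

```python
# Back-of-envelope sizing from the figures in this thread.
# Worst case: every one of the 150,000 records grows to the
# proposed maximum length.
records = 150_000

current_bytes = records * 16_000    # today's 16,000-byte records
proposed_bytes = records * 35_000   # proposed 35,000-byte records

print(f"current : {current_bytes / 1024**3:.2f} GiB")   # ~2.24 GiB
print(f"proposed: {proposed_bytes / 1024**3:.2f} GiB")  # ~4.89 GiB
print(f"growth  : {proposed_bytes / current_bytes:.2f}x")  # ~2.19x
```

Sequential batch elapsed time scales roughly with the number of bytes read, so a report job that scans the whole file could take on the order of twice as long in this worst case. In practice trailers are variable, so the real growth sits somewhere between 1x and this ceiling.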