I am getting the below two compilation errors.
IGYDS1267-S THE SIZE OF THE "WORKING-STORAGE SECTION" EXCEEDED THE COMPILER LIMIT OF 128 MEGABYTES OF MEMORY. EXECUTION RESULTS ARE UNPREDICTABLE.
IGYGR1478-E THE SIZE OF THE "WORKING-STORAGE SECTION" EXCEEDED THE LIMIT OF 16,777,215 BYTES ALLOWED UNDER THE "RENT" AND "DATA(24)" COMPILER OPTIONS. THE "DATA(31)" OPTION WAS ASSUMED.
I know that my working-storage size is going beyond the limit because I am using two arrays (working-storage tables) of 50000 records each, and each record contains 165 chars.
It is almost impossible for me to get rid of these two tables.
Please help me by suggesting any way to increase our working-storage area and to remove the above two errors.
rereading the topic
what happened in both cases when You/Your support forced DATA(31)?
after all, unless my arithmetic is completely FUBARed,
50000 times 165 gives 8,250,000, which is a bit less than 8,388,608, which is 8MB,
and two of them should fit nicely in 128MB (134,217,728) less a few control bytes,
unless You made a typo and a zero is missing somewhere?
so while it is ok to fail in the second case, it should not fail in the first; look around for other symptoms.
if You made a typo then sadly all You can do is review the program design.
Enrico,
Yes, it was a typo. There are 2 arrays (working-storage tables) of 500,000 records each, and each record contains 165 chars.
dbzTHEdinosauer,
Thanks for your reply, but I would appreciate it if you could elaborate more on it.
What I understood is that you want me to use 3 modules: primary, A & B.
In that case, why does the primary require a Linkage Section?
Joined: 06 Jun 2008 Posts: 8697 Location: Dubuque, Iowa, USA
Quote:
It is almost impossible for me to get rid of these two tables.
Please help me by suggesting any way to increase our working-storage area and to remove the above two errors.
Dick is trying to find a way to break up the tables so they are in separate programs and hence will fit into the 128M limit. Since you CANNOT increase the WORKING-STORAGE limit, you have to either (1) reduce the size of the tables, or (2) use more than one program to keep each program's WORKING-STORAGE under the limit.
Joined: 09 Mar 2011 Posts: 7309 Location: Inside the Matrix
I hope Dick doesn't mind me butting in.
He has given you a very clear, maintainable solution to your problem, and one which, once you know it, also has other applications.
In his description, he is already clear through the use of well-chosen names and reasonable comments.
Your program contains two very large tables. Your program would compile if the tables were not in your program but somewhere else, where you could still use them.
So, you need somewhere else to put your tables.
Pre-ADDRESS OF, the way to do this would be for your program to be a sub-program. Program A calls B with first Gigantic Table. Program B calls your program with Gigantic Table from its LINKAGE (from Program A) and its own Gigantic Table from WORKING-STORAGE. Your program has two Gigantic Tables in LINKAGE and names them on the PROCEDURE DIVISION USING.
So your program uses storage defined in A and B and gets along happily.
With ADDRESS OF, you can achieve the same effect, with more flexibility, by calling A and B, which will each then give you the ADDRESS of the Gigantic Table that they define, which is all you need to be able to use that storage in your program. Look at it as having access to three Working-Storages. Or more, if you need.
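A minimal sketch of that ADDRESS OF approach, with made-up program and data names (the real interface would be whatever you design): a sub-program owns one Gigantic Table in its WORKING-STORAGE and passes back a pointer, and the main program maps a Linkage definition onto that storage.

```cobol
      * TABOWNA: owns one Gigantic Table, hands back its address.
      * (Program and data names are invented for illustration.)
       IDENTIFICATION DIVISION.
       PROGRAM-ID. TABOWNA.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-GIGANTIC-TABLE-A.
           05  WS-A-ITEM         OCCURS 500000 TIMES PIC X(165).
       LINKAGE SECTION.
       01  LS-TABLE-PTR          USAGE POINTER.
       PROCEDURE DIVISION USING LS-TABLE-PTR.
           SET LS-TABLE-PTR TO ADDRESS OF WS-GIGANTIC-TABLE-A
           GOBACK.

      * In the main program (fragment):
      *    WORKING-STORAGE SECTION.
      *    01  WS-PTR-A          USAGE POINTER.
      *    LINKAGE SECTION.
      *    01  LK-TABLE-A.
      *        05  LK-A-ITEM     OCCURS 500000 TIMES PIC X(165).
      *    In the PROCEDURE DIVISION:
      *        CALL 'TABOWNA' USING WS-PTR-A
      *        SET ADDRESS OF LK-TABLE-A TO WS-PTR-A
```

A second sub-program would own the second table in the same way; the main program just repeats the CALL and SET for it.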
You do have a further possible solution using LE Heaps to get the storage.
Search in this forum for that solution.
For a use like yours, I think Dick's solution is the one.
Joined: 20 Oct 2006 Posts: 6966 Location: porcelain throne
over the last 2 or 3 decades, I have written a few programs on 3 different types of mainframe hardware. only once did i run out of space, after which i rehoned my programming techniques.
what will happen when you need to store more than 500k ITEMS? - records are things found in files, not db2 tables, not cobol internal tables, ONLY FILES (and table spaces).
500k items is a lot.
165 bytes for each item in a cobol internal table, is a lot.
If you know that you will always need an enormous amount of memory for your program to process,
it is better to acquire it thru working-storage.
If sometimes you need a lot, and sometimes you don't,
using the LE getmain/get memory functions is better.
I can only assume that you have rudimentary skills, since you did not comprehend my suggested solution.
based on that (prejudice),
I can only assume that you do not have a good design concept.
if you explain what your requirement is,
we could help with the design.
there are other methods that you can employ:
break the items up into 2 or more tables - each table at occurrence x will comprise the complete item in question
use file match merge processing
...incomplete list of suggestions due to lack of information of the requirement
one of the problems of large memory acquisition is paging.
after a while, paging can slow your program down (and the system as a whole)
and you will have to redesign, anyway.
like most posts, i imagine you will ignore the request for more info,
because you want to do it your way.
if so, you have been given solutions to your space problem.
also, PGM-a and PGM-b only need one working-storage entry each,
01 table-area pic x(128000000). (or 127999990 - whatever; note that commas are not valid inside a pic length)
no need to define the table with occurs structure in Pgm-a and pgm-b.
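In other words, PGM-a's entire data definition for the table could look like the sketch below; only the address of the area matters to the main program, and 500,000 times 165 is 82,500,000 bytes, so an area of that size (or anything up to the limit) will do.

```cobol
       WORKING-STORAGE SECTION.
      * One flat area big enough for the whole table; the program
      * that maps it decides the layout, so no OCCURS is needed here.
       01  TABLE-AREA            PIC X(82500000).
```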
Basically I need to compare the records of two VSAM files.
This comparison is a bit complex in the sense that:
1. there are some acceptable differences which I need to ignore.
2. A record has a. an item key b. an item description. One item key may have lakhs (hundreds of thousands) of descriptions, distinguished by a sequence number.
3. the descriptions coming in the two files for an item key are not in sequence. So to compare all the descriptions for an item-key, I am planning to store them in two internal tables. Then I plan to check, for each item-key, whether every description in the 1st table is present (anywhere) in the 2nd table. If not present in the 2nd table, it goes to the output report as a Deleted description, and for the vice-versa case as an Inserted record.
Hope this gives you an idea of my requirement.
in that case i would consider the following:
growth of files means that eventually you will encounter more items than you can store.
i would determine what data i need at any time to make a decision,
if necessary making up new KEYS.
i would extract all the data from both vsam files via SORT, creating qsam files.
i would reformat where necessary,
making duplicates where necessary,
and append the 'new keys' where appropriate.
sort both files so that my program would involve match merge logic,
storing - on-the-fly - only the data necessary for the current key match processing.
that means potentially, reloading the cobol internal tables for each 'key match'.
i would buffer the qsam files up to a max (BUFFNO=? dcb parm on input files).
then the program would consist of:
load data for current key
process current key
start loop again
in that way, your requirement for a large program (cobol internal table) would be reduced.
a lot of work may be off-loaded to sort, in order to remove those situations where your program would normally ignore something.
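The control loop above might be sketched like this (file, key, flag and paragraph names are all invented; the READ paragraphs are assumed to move HIGH-VALUES to the key at end-of-file so the comparisons keep working):

```cobol
      * Two-file match-merge driver: both files sorted on item key.
           PERFORM READ-FILE-1
           PERFORM READ-FILE-2
           PERFORM UNTIL F1-AT-END = 'Y' AND F2-AT-END = 'Y'
               EVALUATE TRUE
                   WHEN F1-ITEM-KEY < F2-ITEM-KEY
      *                key only on file 1: its descriptions are Deleted
                       PERFORM REPORT-KEY-DELETED
                       PERFORM READ-FILE-1
                   WHEN F1-ITEM-KEY > F2-ITEM-KEY
      *                key only on file 2: its descriptions are Inserted
                       PERFORM REPORT-KEY-INSERTED
                       PERFORM READ-FILE-2
                   WHEN OTHER
      *                same key on both: load only this key's
      *                descriptions into the tables, compare, read on
                       PERFORM LOAD-AND-COMPARE-CURRENT-KEY
               END-EVALUATE
           END-PERFORM
```

Because only one key's descriptions are in storage at a time, the tables stay small no matter how many items the files grow to hold.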
Thank you for your suggestion !
But really speaking, I didn't get a clear idea. If it's possible for you, could you elaborate or go into more detail about what you are trying to say?
Also, could you please suggest a solution or a way forward in my design thinking...
As Dick has suggested, the first thing to do is to break it down.
With a complicated program, this is always what I do. What I mean here by "complicated" is doing lots of different things. If you do all that in one program, you will get more errors, as the more you code, the more errors you get. Plus, when you run it, if any part fails, the whole lot fails. Plus performance. Lots of things help make things simple, maintainable and convenient.
There is nothing in the world stopping you copying your VSAM files to QSAM files (flat).
If one or both need a new key, give it one. If one or both then need to be resequenced for your requirement, stick 'em through a sort. If you need to "do something" that is only possible by resequencing, yet you need it in your program in the original order, then add a sequence number, sort on the other key, do your processing, then sort on the sequence number to get back to the original order.
With simple extract/sort, get the files to the ideal state for your program. You are not (or should not be) constrained by the way the files are currently designed/held. You can get the files how you want them. Do it in simple steps. Easy to understand. Easy to test. Easy for re-run. Maintainable.
Once you have the files ready and reformatted for your program, read them, doing the match as you go along. The match is the important part of the logic, in which you use the format/sequence of the data to meet your requirement.
Write your output as flat file(s) as well.
Finally, stick 'em (if required) into nicely defined VSAM file(s) (ie defined logically from how the file(s) will be used).
Done.
We can only give this sort of outline, which is what Dick has already done, without having the full spec (which we don't want unless you get really stuck).
Again:
In as many simple steps as are needed, extract/format/sequence the files/data you require for your processing.
Do your matching program.
Produce a fine VSAM file optimised for how it will be used (if necessary).
Keep everything simple. If something gets complicated, simplify it. Work out your full design, starting from rough outline (as already provided in the forum) and firm it up until you can go through the design and know that a "system" following your design will work.
I have tried using your approach of 3 modules as per your post above (Posted: Thu May 19, 2011 2:25 pm).
Now it is showing the same compile error for the Linkage Section of the primary module, since the linkage section of the primary module contains the 2 gigantic table declarations.
IGYDS1267-S THE SIZE OF THE "LINKAGE SECTION" EXCEEDED THE COMPILER LIMIT OF 128 MEGABYTES OF MEMORY. EXECUTION RESULTS ARE UNPREDICTABLE.
Please look into this and let me know your valuable suggestion.
You have two tables, each occurs 500,000, 165 bytes per table element (from memory).
That is how you need to define the tables that allocate the storage.
The tables you have in the Linkage Section do not allocate storage.
But your tables exceed 128MB; that means it is only the definition that exceeds 128MB, since no storage is allocated.
Now, the problem is that the Cobol compiler allocates address areas to refer to Linkage (BLLs, which I always understood to be called Base Locator Linkage, but don't quote me). I guess the compiler is warning you that it might not be able to reference everything defined in your linkage section through its BLL scheme.
However, this need not matter.
If you really need to go this route, here's the trick: define the table in the Linkage Section with a single occurrence.
Mmmm... you say. But I wanted 500,000 of them. Well, make sure your subscript is big enough to hold 500,000. The Linkage Section does not define storage, so there is no need in the Linkage Section to have (in this case) more than one occurrence.
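As a sketch (data names invented), the Linkage definition and its use might look like this; the subscript ranges over the real storage the 01 level is SET to, even though only one occurrence is declared:

```cobol
       WORKING-STORAGE SECTION.
       01  WS-SUB                PIC 9(9) BINARY.
       01  WS-WORK-ITEM          PIC X(165).
       LINKAGE SECTION.
      * Only one occurrence declared: Linkage allocates no storage,
      * so (with SSRANGE off) subscripts up to 500000 still address
      * the real 82.5MB table this 01 has been SET to.
       01  LK-TABLE-A.
           05  LK-A-ITEM         OCCURS 1 TIMES PIC X(165).
      * ... in the PROCEDURE DIVISION, after SET ADDRESS OF
      * LK-TABLE-A TO the pointer from the owning sub-program:
           MOVE 400000 TO WS-SUB
           MOVE LK-A-ITEM (WS-SUB) TO WS-WORK-ITEM
```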
Of course, you'll need SSRANGE off and you won't be able to use SEARCH ALL with Occurs Depending On. You can write your own "binary chop" search.
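A hand-rolled "binary chop" might look like the sketch below, assuming the table is loaded in key sequence, WS-ITEM-COUNT holds the number of loaded entries, and the key occupies the first 20 bytes of each item (all names and the key length are invented for illustration):

```cobol
      * Binary chop over LK-A-ITEM; WS-LO, WS-HI, WS-MID and
      * WS-FOUND-FLAG are assumed to be declared as BINARY/PIC X.
           MOVE 1 TO WS-LO
           MOVE WS-ITEM-COUNT TO WS-HI
           MOVE 'N' TO WS-FOUND-FLAG
           PERFORM UNTIL WS-LO > WS-HI OR WS-FOUND-FLAG = 'Y'
               COMPUTE WS-MID = (WS-LO + WS-HI) / 2
               EVALUATE TRUE
                   WHEN LK-A-ITEM (WS-MID) (1:20) = WS-SEARCH-KEY
                       MOVE 'Y' TO WS-FOUND-FLAG
                   WHEN LK-A-ITEM (WS-MID) (1:20) < WS-SEARCH-KEY
                       COMPUTE WS-LO = WS-MID + 1
                   WHEN OTHER
                       COMPUTE WS-HI = WS-MID - 1
               END-EVALUATE
           END-PERFORM
```

On exit, WS-FOUND-FLAG = 'Y' means WS-MID subscripts the matching item.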
I still think you're better off going down a different route, but if you have to do this, then you have to cut corners, like this one.
If you do this, make sure it is well-documented inside and outside the program.
I wouldn't bother to do the Linkage that way generally, as you are losing information (for the human reader) about the table. Hence, document it where normally you wouldn't need to.
Each 01 level gets a BLL (look at your DMAP listing). The BLL is used with a "displacement" of three hex digits (000-FFF) which allows each BLL to reference 4096 bytes of storage. If you look at LI2-ITEM it will have a new BLL.
Effectively, the Cobol compiler is using the BLLs to reference individual fields. However, for fields which are part of an Occurs, it only uses the BLL of the starting point, and calculates using that address and the number of the occurrence you are interested in with reference to the current subscript/index you are using. For your huge table, the compiler would generate many BLLs which it would never generate any code to use (I don't know if the Optimizer would clean them up?).
Bear one additional thing in mind. We've been showing you how to get around the storage limits imposed by the compiler. There is no implication that we have ever done this with tables of your size. You might get to a dead-end. If so, don't blame us. You can ask us, maybe we can think of something else, but there is no guarantee that we have done what you are doing with tables of your size.
If you are still, still, going this route, I think the first thing you need to now do is thoroughly test out the addressability in Cobol of your two tables. You don't want to find out that you can't do it after you've coded everything for that specific approach.
Did you change your compile option for 31-bit data?
some shops do not allow NOSSRANGE.
though what you have provided, Bill, will work (and I have used it myself, sometimes),
another possible solution
(if you do not have to reference both tables in the same statement)
is to use one 01, with two 05's, one redefining the other, and
flip-flop between table a and table b with a set (the 01 linkage item) to pointer phrase.
actually, I agree with Bill: a non-large-table process should be used.
the problem with two 500,000-item, 165-char tables is that
you are trying to load everything.
as I said before,
what are you going to do when 500,000 165-char items is not enough?
since you need to compare both at the same time,
move an item from table a to working storage
flip the address
move an item from table b to working storage,
do the compare.
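A sketch of that flip-flop, with invented names; the two 05 groups redefine the same Linkage bytes, and the SET ... TO pointer decides which real table the 01 currently maps:

```cobol
       LINKAGE SECTION.
      * One 01, two redefining 05 groups, one occurrence each
      * (no storage is allocated in Linkage).
       01  LK-TABLE.
           05  LK-AS-TABLE-A.
               10  LK-A-ITEM     OCCURS 1 TIMES PIC X(165).
           05  LK-AS-TABLE-B     REDEFINES LK-AS-TABLE-A.
               10  LK-B-ITEM     OCCURS 1 TIMES PIC X(165).
      * ... in the PROCEDURE DIVISION, WS-PTR-A/WS-PTR-B holding
      * the addresses returned by the table-owning sub-programs:
           SET ADDRESS OF LK-TABLE TO WS-PTR-A
           MOVE LK-A-ITEM (WS-SUB) TO WS-ITEM-A
           SET ADDRESS OF LK-TABLE TO WS-PTR-B
           MOVE LK-B-ITEM (WS-SUB) TO WS-ITEM-B
           IF WS-ITEM-A NOT = WS-ITEM-B
               PERFORM REPORT-DIFFERENCE
           END-IF
```

The limitation stated above applies: because the 01 can only map one table at a time, you cannot reference both tables in the same statement, hence the moves to working storage before the compare.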
but again,
you think that you can load the whole world into memory and play with it.
can't always do that, might as well learn how to design a complete process
instead of attempting to solve the problem in one program.
By using Bill's suggestion the compile is now error free.. but I am afraid about the run-time abend:
CEE0813S Insufficient storage was available to satisfy a get storage (CEECZST)
dbzTHEdinosauer wrote:
last but not least,
primary module would CALL the submodules to
load an item to table
search for an item in the table and return the value
address the next item in the table and return the value
that of course would involve developing a pass-along area (1) for each module, that would contain
FUNCTION CODE
any values required for the function
a return area for table item value
a return area for return code of the function call
This would mean you would not address each table in Primary module,
but you would use two modules to store and retrieve data.
since each table is only 8MB you have growth potential.
you would develop experience controlling two 'black boxes'.
this is the solution that I would use,
if i was so thick-headed as not to employ some other methodology.
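The pass-along area described above might be sketched as a copybook like this (all names, the key length and the function/return codes are invented; the real interface is whatever you design between Primary and the two table modules):

```cobol
      * Pass-along area for one table "black box", shared between
      * the Primary module and its table sub-module.
       01  TABLE-A-COMMAREA.
           05  TA-FUNCTION-CODE      PIC X(4).
               88  TA-FUNC-LOAD      VALUE 'LOAD'.
               88  TA-FUNC-FIND      VALUE 'FIND'.
               88  TA-FUNC-NEXT      VALUE 'NEXT'.
           05  TA-ITEM-KEY           PIC X(20).
           05  TA-ITEM-VALUE         PIC X(165).
           05  TA-RETURN-CODE        PIC S9(4) BINARY.
               88  TA-RC-OK          VALUE 0.
               88  TA-RC-NOT-FOUND   VALUE 4.
               88  TA-RC-END         VALUE 8.
      * ... Primary asks the black box for an item:
           SET TA-FUNC-FIND TO TRUE
           MOVE WS-KEY TO TA-ITEM-KEY
           CALL 'TBLAMOD' USING TABLE-A-COMMAREA
           IF TA-RC-OK
               MOVE TA-ITEM-VALUE TO WS-WORK-ITEM
           END-IF
```

Primary never addresses the table itself; each sub-module owns its own table in its own WORKING-STORAGE and answers FUNCTION-CODE requests.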
Nice approach. Also nice that once you have it, you'll be able to use the "black box" generally for other things, not just because you have a big table.
80MB is the size of each table (the original "8" was a typo). That is a little close for comfort.