reachsenthilnathan
New User
Joined: 20 Nov 2005 Posts: 15
Hi all,
In a batch program, we are trying to load more than 2 GB of data into program memory using the CEEGTST subroutine (GETMAIN). The subroutine always fails at around 1.8 GB; any request for memory allocation above that fails.
Does anyone know the limit for GETMAIN in batch/CICS?
Also, to go above 2 GB we would need to use AMODE 64. Does COBOL support 64-bit compilation?
Thanks,
Senthil Nathan
Bill O'Boyle
CICS Moderator
Joined: 14 Jan 2008 Posts: 2501 Location: Atlanta, Georgia, USA
WOW, 2GB of dynamic-storage? Why so much?
I don't think it would matter, but what are you specifying on your REGION parm?
COBOL does not support 64-Bit at this time, although there are rumblings that it may be supported in the future. PL/I, C and Assembler all support 64-Bit.
CEEGTST can be used by COBOL, PL/I, C, Assembler and (maybe) Java.
You could also look into an Assembler sub-program which uses the STORAGE OBTAIN Macro or the GETMAIN Macro.
The SIZE fullword on "CEEGTST" is signed and therefore has a limit of X'7FFFFFFF', which I believe is also the limit on STORAGE OBTAIN and GETMAIN, as both of them use register notation to hold the requested length.
I don't think you can use an unsigned fullword, because LE may view this as a negative value if you specify a value greater than X'7FFFFFFF'.
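To illustrate that point off-mainframe (a sketch in Python, not LE itself): a request of X'80000000' or above has the sign bit set, so the same bit pattern read back as a signed fullword comes out negative.

```python
import struct

# The largest value a signed 32-bit fullword can hold: X'7FFFFFFF',
# i.e. 2 GB minus 1 byte.
MAX_FULLWORD = 0x7FFFFFFF
print(MAX_FULLWORD)  # 2147483647

# X'80000000' (exactly 2 GB) sets the sign bit, so reinterpreting the
# same four bytes as a signed fullword yields a negative length --
# which is why LE would treat such a request as invalid.
raw = struct.pack(">I", 0x80000000)    # pack as unsigned big-endian fullword
as_signed, = struct.unpack(">i", raw)  # unpack the identical bytes as signed
print(as_signed)  # -2147483648
```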
Unless your User EDSA size is very high, you won't be able to GETMAIN this much storage in CICS. Personally, I wouldn't allow it, as you could easily raise an S80A (Virtual Storage Exhausted).
You may want to review this design....
Bill
Robert Sample
Global Moderator
Joined: 06 Jun 2008 Posts: 8696 Location: Dubuque, Iowa, USA
Quote:
Does anyone know the limit for GETMAIN in batch/CICS?
From the CICS Application Programming Reference manual:
Quote:
FLENGTH(data-value)
specifies the number of bytes of storage required, in fullword binary format.
The maximum length that you can specify is the value of the corresponding DSA limit parameter, either DSALIMIT or EDSALIMIT. These are the system initialization parameters that define the overall storage limit within which CICS can allocate and manage the individual DSAs. Further, be aware that there is an absolute limit based on the storage assigned to the LPAR -- but again your site support group would be the ones to talk to.
If the length requested is bigger than the DSALIMIT or EDSALIMIT value, the LENGERR condition occurs. If it is not bigger than these limits, but is more than is available, NOSTG occurs.
You need to ask your site support group what the limit is, since they set up the CICS region and know its DSALIMIT and EDSALIMIT values.
Quote:
Does COBOL support 64-bit compilation?
AFAIK, no version of Enterprise COBOL supports 64-bit addresses (yet).
dbzTHEdinosauer
Global Moderator
Joined: 20 Oct 2006 Posts: 6966 Location: porcelain throne
again, the question, why so much memory?
is this related to a LOB/CLOB/BLOB DB2 action?
reachsenthilnathan
New User
Joined: 20 Nov 2005 Posts: 15
Bill O'Boyle wrote:
WOW, 2GB of dynamic-storage? Why so much?
[...]
Hi Bill,
Thanks for the reply. I will explain the process today and the need for 2 GB memory in future.
We have a VSAM file with 1.5 GB of data which is used mostly as a lookup in our batch programs. We load this VSAM file entirely into program memory allocated using GETMAIN. The file grows at a rate of about 5% a year, hence we were wondering whether we can load 2 GB or more into memory. We are also exploring options like BLSR. We will explore the Assembler option. Can we link-edit a COBOL program with AMODE=31 with Assembler code that uses 64-bit addressing?
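For what it's worth, a quick back-of-the-envelope check (a sketch, assuming the 5%-a-year growth rate holds) of when 1.5 GB would cross the 2 GB line:

```python
import math

# Years until 1.5 GB growing at ~5% a year exceeds 2 GB:
# solve 1.5 * 1.05**n >= 2.0 for n.
size_gb, target_gb, growth = 1.5, 2.0, 0.05
years = math.ceil(math.log(target_gb / size_gb) / math.log(1 + growth))
print(years)  # about six years of headroom before hitting 2 GB
```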
Thanks again for your reply.
reachsenthilnathan
New User
Joined: 20 Nov 2005 Posts: 15
Robert Sample wrote:
[...]
You need to ask your site support group what the limit is since your site support group set up the CICS region and know the DSALIMIT and EDSALIMIT values for that CICS region.
[...]
Thank you Robert.
gylbharat
Active Member
Joined: 31 Jul 2009 Posts: 565 Location: Bangalore
I assume this VSAM is a read-only file. Why can't it be used with DISP=SHR across the batch programs?
daveporcelan
Active Member
Joined: 01 Dec 2006 Posts: 792 Location: Pennsylvania
Here is an alternate design you may want to consider.
In a similar situation I encountered once, through testing and analysis we discovered that 10% of the lookup data accounted for 95% of the lookups.
The new design was this:
1) Add a field to the VSAM record to designate the 'heavy hitters'
2) Add a sort step to put the heavy hitters in a separate file.
3) Load that file into memory (internal table)
4) In the program, first search the internal table for a match.
5) If not found, do a key read from the full vsam file to do the lookup, and add that record to the internal table for subsequent lookups.
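The lookup logic in steps 3-5 could be sketched roughly like this (illustrative Python, with `keyed_read` standing in for the keyed VSAM read; all names here are hypothetical, not from any real system):

```python
def make_lookup(heavy_hitters, keyed_read):
    """Two-tier lookup: in-memory table first, keyed read on a miss."""
    cache = dict(heavy_hitters)          # step 3: load hot records into memory

    def lookup(key):
        if key in cache:                 # step 4: try the internal table first
            return cache[key]
        record = keyed_read(key)         # step 5: fall back to a keyed read...
        if record is not None:
            cache[key] = record          # ...and cache it for subsequent lookups
        return record

    return lookup

# Toy stand-in for the full keyed file:
full_file = {"K1": "rec1", "K2": "rec2", "K3": "rec3"}
lookup = make_lookup({"K1": "rec1"}, full_file.get)
print(lookup("K2"))   # miss: keyed read, then cached
print(lookup("K2"))   # now served from the internal table
```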
This is not a major change, but could improve your processing.
Come up with reasons why this would not work.
or
Come up with ways to make this work.
I am taking bets on which is chosen.
Bill Woodger
Moderator Emeritus
Joined: 09 Mar 2011 Posts: 7309 Location: Inside the Matrix
I think some more details about your VSAM file would be good. Any chance of a LISTCAT?
I think Dave has something which may start you toward a more workable solution. I'm not sure how good an idea locking up 2 GB of virtual storage on a semi-permanent basis is, particularly with potentially random access across it. It is an interesting idea, but I suspect there are other approaches, with benefits for you, which would be easier to develop and maintain.
PeterHolland
Global Moderator
Joined: 27 Oct 2009 Posts: 2481 Location: Netherlands, Amstelveen
I have the feeling that the TS's last three topics (under different titles) are all addressing the same question/problem.
dbzTHEdinosauer
Global Moderator
Joined: 20 Oct 2006 Posts: 6966 Location: porcelain throne
Dave's approach also tends to lessen the page-swap and thrash counts.
Sometimes having too much in memory has its own disadvantages.
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
As these 3 topics indicate (at least to me), a questionable approach is being pursued. I suspect that even if implemented, it will not prove satisfactory long term.
Now may be a good time to consider some alternative approach. . .
fwiw.
Bill Woodger
Moderator Emeritus
Joined: 09 Mar 2011 Posts: 7309 Location: Inside the Matrix
Activity on this/these topic/s Thursday, but none Friday. Going back through them...
reachsenthilnathan wrote:
[...]
We have a VSAM with 1.5 GB of data which is used more as a lookup in our batch programs. We load this VSAM entirely into program memory allocated using GETMAIN. [...]
So you're saying you're already sticking the data into virtual memory, in a number of batch jobs?
Problem with virtual memory is that a lot of the time a lot of your data will be residing on DASD anyway. Then you'll get a "hit" for a new "page", and it'll come off DASD into virtual memory mapped onto real memory. Then, as requests for other pages come along, off it goes back to DASD. If you have keys distributed evenly across 1.5 gig of data, you are going to generate a lot of paging, and likely "thrashing". I hope your CPU is securely fastened to the floor.
What you have done (if indeed you have done it) is swap one type of DASD usage (the VSAM dataset) for another (huge table in virtual memory).
Unless your key-processing is at least as clever as VSAM's, are you really getting any performance benefit? Did you optimise the VSAM dataset? Or were you using a dataset "split" all to hell and back, with default buffers, irrational CA/CI sizes etc?
Do you have to run your batch jobs in sequence? Or do you throw as many as you can on at once and have 1.5 gig for each of them, each slowing the other down with incessant paging?
Something along the lines of what Dave Porcelan is suggesting would blow your current and proposed future method out of the water, with enormous ease - very probably. Perhaps the key usage is generally flat across the data. Perhaps the dataset was optimised. Perhaps you have a really cool indexing system for the data in virtual storage.
However, as others also suspect, perhaps a re-design is in order.
But if you don't come back with more information, at least knock up a version of Dave's suggestion and do some timings against the VSAM and virtual-storage methods. Make sure the VSAM dataset is optimal, so the timings are fair. Digest, and act accordingly.