Arun Raj
Moderator
Joined: 17 Oct 2006 Posts: 2481 Location: @my desk
Hi all,
I have a REXX exec which does the following:
1. Splits the input file (LRECL=178) into 'n' files based on some criteria.
2. Writes each split file as a member of a PDS.
3. Finally FTPs each member.
This step executes fine with 1 million records, but gets a user abend U4093 when I have 3 million records. The strange part is that the same step runs fine with 5 million records when the input LRECL is 130.
Below is the error message I get. I am using REGION=0M in my job.
Code:
IEW4000I FETCH FOR MODULE CEEEV003 FROM DDNAME -LNKLST- FAILED BECAUSE INSUFFICIENT STORAGE WAS AVAILABLE
CSV031I LIBRARY ACCESS FAILED FOR MODULE CEEEV003, RETURN CODE 24, REASON CODE 26080021, DDNAME *LNKLST*
Arun Raj
Hi all,
Now I am facing another issue with this job.
My input file has more than 5 million records. The job splits the input file based on two fields (field1, field2) and writes the splits into different members of a PDS. I ran a sort to find the expected record counts for each member and got the results below.
Code:
field1,field2,record-count
01,12,  483042
01,13,  125799
01,14,  331999
01,16,  262654
01,31,  272006
01,32, 1113150
01,33,   51759
.......
.......
Of these, "01,32" has the maximum record count. The job is abending with ABEND S0F9, and I don't see any other error messages. When I browse the output PDS, I have all the members from "01,12" through "01,31".
If I remove all the records for "01,32" from the input, the job runs successfully.
If I give only the "01,32" records (1113150) in the input, the job fails with ABEND S0F9. I tried reducing the input count to 1 million and the job ran fine.
The REXX reads the input file in bunches of 10,000 records and writes into an output stem variable until the key value changes. This job had been running in production without any issues until recently. Is there any limit on the number of occurrences allowed in a stem variable? Could somebody throw some light on this? Let me know if any further explanation is required.
superk
Global Moderator
Joined: 26 Apr 2004 Posts: 4652 Location: Raleigh, NC, USA
arcvns wrote:
Is there any limit on the number of occurrences allowed in a stem variable?

Yes, of course. Stem variables occupy main storage and will eventually exhaust all that is available. They are not meant to hold large amounts of data.
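
A rough back-of-envelope (my own arithmetic, using only the record count and LRECL quoted in this thread) shows the scale of the problem: the biggest key alone needs close to 200 MB just for raw record data, before any per-variable overhead REXX adds for each compound variable in the pool.

```rexx
/* Storage needed just for the record data of the failing key (01,32): */
/* 1,113,150 records x 178 bytes, ignoring per-variable overhead.      */
SAY 1113150 * 178        /* 198140700 bytes, roughly 189 MB            */
```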
Robert Sample
Global Moderator
Joined: 06 Jun 2008 Posts: 8696 Location: Dubuque, Iowa, USA
Did you read the manual for the S0F9 abend? If not, start by reading that. This system completion code has absolutely nothing to do with REXX or the number of records being read, so apparently you're just guessing about the problem. Until you've read the messages and codes manual and know what the problem is, any solution you implement is likely to make things worse rather than better.
Arun Raj
Kevin,
Thanks for your prompt response. Is it possible to achieve the same without using a stem variable?
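
For what it's worth, EXECIO does not strictly require a stem: without the STEM option, EXECIO DISKW takes its records from the data stack instead. A minimal sketch, assuming a hypothetical pre-allocated DD named OUTDD. Note the stack occupies main storage too, so by itself this would not cure a storage exhaustion; the real relief is writing in batches rather than holding everything.

```rexx
/* Sketch: writing without a stem, via the data stack.          */
/* OUTDD is a hypothetical DD name, allocated before this runs. */
QUEUE 'first record'
QUEUE 'second record'
"EXECIO" QUEUED() "DISKW OUTDD (FINIS"   /* write stack, close  */
```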
Arun Raj
Robert,
I just went through the manual and found the description of the S0F9 abend.
I believe it confirms Kevin's point that main storage might have been exhausted by writing a large number of records into a stem variable.
Pedro
Global Moderator
Joined: 01 Sep 2006 Posts: 2546 Location: Silicon Valley
Quote:
The REXX reads the input file in bunches of 10,000 records and writes into an output stem variable until the key value changes

You should be able to write in bunches of 10,000 records too. Or, what happens when the key value changes?
Arun Raj
Quote:
Or what happens when the key value changes?

As per my understanding of the code, this is what it does:
1. Read 10,000 records into the input stem, In_Rec.
2. From the stem In_Rec., read records one by one and check the key. If the key is the same, accumulate each record into the output stem, Out_Rec., after some reformatting. As soon as it encounters a new key, write all the records accumulated in the output stem Out_Rec. as a PDS member.
3. Repeat step 2 until all 10,000 records are processed.
4. Repeat the process from step 1.
But when a particular key has more records, the above logic fails while accumulating into the output stem.
gcicchet
Senior Member
Joined: 28 Jul 2006 Posts: 1702 Location: Australia
Hi,
This is from a novice REXX writer: why can't the records be written out directly to the PDS rather than saving them into an output stem? And why do they need to be read into an input stem?
Gerry
Pedro
But I think one key has over a million records, so they are gathering many records before writing out.
My earlier suggestion was to write out the accumulated records as you read more records, but do not close the output file until done. Use a PDSE (I am not sure, but there may be some special considerations for a PDS).
Arun Raj
Hi,
Thanks to all for sharing your views. I'll try to work on Pedro's suggestion and get back soon.
Arun Raj
Pedro,
I did some analysis and ended up here. The REXX manual says:
Quote:
To append information to a member of a PDS, rewrite the member with the additional records added.
Pedro
Quote:
To append information to a member of a PDS, rewrite the member with the additional records added.

That is for appending new records to an existing member. But I think what you want is to write some records to a new member, then write some more records to the same member.
Arun Raj
Quote:
But I think what you want is to write some records to a new member, then write some more records.

Yes, exactly. I tried modifying the logic, and it worked fine when I gave the 'problem' key records alone (1113150 records) as input. Thank you all.