Error in FTP using REXX


IBM Mainframe Forums -> CLIST & REXX
Arun Raj (Moderator)
Posted: Tue Sep 30, 2008 4:07 pm

Hi all,

I have a REXX which does the following.

1. Splits the input file (LRECL=178) into 'n' files based on some criteria.
2. Writes each split file as a member of a PDS.
3. Finally FTPs each member.

This step executes fine with 1 million records, but gets a user abend U4093 when I have 3 million records. The strange part is that the same step runs fine with 5 million records when the input LRECL is 130.

Below is the error message I get. I am using REGION=0M in my job.
Code:
IEW4000I FETCH FOR MODULE CEEEV003 FROM DDNAME -LNKLST- FAILED BECAUSE INSUFFICIENT STORAGE WAS AVAILABLE
CSV031I LIBRARY ACCESS FAILED FOR MODULE CEEEV003, RETURN CODE 24, REASON CODE 26080021, DDNAME *LNKLST*
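
For context, the split-and-write part of the exec follows roughly this pattern (a sketch only: the dataset and member names are placeholders, the real split criteria and the FTP step are not shown, and it assumes the exec runs under TSO, e.g. IKJEFT01):
Code:
/* REXX - rough sketch of the split-and-write part only             */
/* Dataset/member names are placeholders; the split criteria, the   */
/* FTP step and error handling are omitted.                         */
"ALLOC F(INDD) DA('HLQ.INPUT.FILE') SHR REUSE"
"EXECIO * DISKR INDD (STEM in. FINIS"         /* read the whole file */
"FREE F(INDD)"

j = 0
do i = 1 to in.0                /* split criteria would select records here */
   j = j + 1
   out.j = in.i
end
out.0 = j

"ALLOC F(OUTDD) DA('HLQ.SPLIT.PDS(PART1)') SHR REUSE"
"EXECIO" out.0 "DISKW OUTDD (STEM out. FINIS" /* write one member    */
"FREE F(OUTDD)"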
Arun Raj (Moderator)
Posted: Mon Oct 06, 2008 5:59 pm

Hi all,

Now I'm facing another issue with this job.

My input file has more than 5 million records. The job splits the input file based on two fields (field1, field2) and writes the pieces into different members of a PDS. I ran a sort to find the expected record counts for each member and got the results below.
Code:
field1,field2,record-count
01,12,   483042
01,13,   125799
01,14,   331999
01,16,   262654
01,31,   272006
01,32,  1113150
01,33,    51759
.......
.......

Of these, "01,32" has the maximum record count. The job abends with ABEND S0F9 and I don't see any other error messages. When I browse the output PDS, I have all the members from "01,12" thru "01,31".
If I remove all the records for "01,32" from the input, the job runs successfully.
If I give only the "01,32" records (1113150) in the input, the job fails with ABEND S0F9. I tried reducing the input count to 1 million and the job ran fine.

The REXX reads the input file in bunches of 10,000 records and writes into an output stem variable until the key value changes. This job has been running in production without any issues until recently. Is there any limit on the number of occurrences allowed in a STEM variable? Could somebody throw some light on this? Let me know if any further explanation is required.
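
For reference, the read side is coded more or less like this (a sketch only; the dataset name is a placeholder):
Code:
/* REXX - sketch of the 10,000-record read loop (dataset name is a    */
/* placeholder; the key/reformat logic is only indicated)             */
"ALLOC F(INDD) DA('HLQ.INPUT.FILE') SHR REUSE"
eof = 0
do until eof
   "EXECIO 10000 DISKR INDD (STEM In_Rec."  /* DD stays open between calls */
   if rc = 2 then eof = 1                   /* RC 2 = end of file reached  */
   do i = 1 to In_Rec.0
      /* check the key, reformat, accumulate into Out_Rec. ...             */
   end
end
"EXECIO 0 DISKR INDD (FINIS"                /* close the input             */
"FREE F(INDD)"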
superk (Global Moderator)
Posted: Mon Oct 06, 2008 6:01 pm

arcvns wrote:
Is there any limit on the number of occurrences allowed in a STEM variable?


Yes, of course. They occupy main storage and will eventually exhaust all that is available. Stem variables are not meant to hold large amounts of data.
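
If you do have to use them, write the data out in pieces and DROP the stem after each piece so the exec can reuse that storage, for example (assuming the output DD is already allocated and the stem out. holds the current piece):
Code:
"EXECIO" out.0 "DISKW OUTDD (STEM out."   /* flush what has accumulated */
drop out.                                 /* release the stem for reuse */
out.0 = 0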
Robert Sample (Global Moderator)
Posted: Mon Oct 06, 2008 6:04 pm

Did you read the manual for the S0F9 ABEND? If not, start by reading that. This system error code has absolutely nothing to do with REXX or the number of records being read -- so apparently you're just guessing about the problem. Until you've read the messages and codes manual and know what the problem is, any solution you implement is likely to make things worse rather than better.
Arun Raj (Moderator)
Posted: Mon Oct 06, 2008 6:11 pm

Kevin,

Thanks for your prompt response. Is it possible to achieve the same thing without using a stem variable?
Arun Raj (Moderator)
Posted: Mon Oct 06, 2008 6:34 pm

Robert,

I just went through the manual and found this for the S0F9 abend.

I believe this confirms Kevin's point that main storage might have been exhausted by writing a large number of records into a stem variable.
Pedro (Global Moderator)
Posted: Mon Oct 06, 2008 10:45 pm

Quote:
The REXX reads the input file in bunches of 10,000 records and writes into an output stem variable until the key value changes


You should be able to write out in bunches of 10,000 records as well. Or what happens when the key value changes?
Arun Raj (Moderator)
Posted: Tue Oct 07, 2008 9:16 am

Quote:
Or what happens when the key value changes?

As per my understanding of the code, this is what it does.

1. Read 10,000 records into input stem - In_Rec.

2. From the stem In_Rec., read records one by one and check the key. If the key is the same, accumulate each record into the output stem Out_Rec. after some reformatting. As soon as a new key is encountered, write all the records accumulated in Out_Rec. as a PDS member.

3. Repeat step 2 until we are done with all 10,000 records.

4. Repeat the process from step 1.

But when a particular key has more records, the above logic fails while accumulating into the output stem.
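
In outline, that accumulation looks something like the sketch below (the field positions and dataset/member names are assumptions, and the carry-over between bunches and the final flush are not shown):
Code:
/* REXX - outline of the key-break accumulation                           */
/* Field positions and dataset/member names are assumed, not the real ones */
prevkey = ''
j = 0
do i = 1 to In_Rec.0                          /* one 10,000-record bunch   */
   key = substr(In_Rec.i,1,2)substr(In_Rec.i,3,2)   /* field1+field2       */
   if prevkey <> '' & key <> prevkey then do  /* new key: flush Out_Rec.   */
      Out_Rec.0 = j
      "ALLOC F(OUTDD) DA('HLQ.SPLIT.PDS(M"prevkey")') SHR REUSE"
      "EXECIO" Out_Rec.0 "DISKW OUTDD (STEM Out_Rec. FINIS"
      "FREE F(OUTDD)"
      drop Out_Rec.
      j = 0
   end
   j = j + 1
   Out_Rec.j = In_Rec.i                       /* reformatting not shown    */
   prevkey = key
end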
gcicchet (Senior Member)
Posted: Tue Oct 07, 2008 9:58 am

Hi,

This is from a novice REXX writer: why can't the records be written out directly to the PDS rather than being saved into an output stem? Also, why do they need to be read into an input stem?
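
Something along these lines, for example (the dataset and member names are made up):
Code:
/* REXX - what I mean by writing directly; the stem holds only one   */
/* record at a time (dataset and member names are made up)           */
"ALLOC F(INDD)  DA('HLQ.INPUT.FILE') SHR REUSE"
"ALLOC F(OUTDD) DA('HLQ.SPLIT.PDS(M0132)') SHR REUSE"
eof = 0
do until eof
   "EXECIO 1 DISKR INDD (STEM rec."          /* one record at a time  */
   if rc = 2 then eof = 1                    /* RC 2 = end of file    */
   if rec.0 = 1 then
      "EXECIO 1 DISKW OUTDD (STEM rec."      /* write it straight out */
end
"EXECIO 0 DISKR INDD (FINIS"
"EXECIO 0 DISKW OUTDD (FINIS"
"FREE F(INDD)"
"FREE F(OUTDD)"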


Gerry
Pedro (Global Moderator)
Posted: Tue Oct 07, 2008 10:07 am

But I think one key has over a million records. So they are gathering many records before writing out.

My earlier suggestion was to write out the records accumulated so far each time you read more records, but not to close the output file until you are done. Use a PDSE (not sure, but there may be some special considerations for a PDS).
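
Something like this, roughly (a sketch only; the dataset/member names and the dummy records are made up):
Code:
/* REXX - sketch of the incremental write: no FINIS until the member  */
/* is complete (names and record contents here are made up)           */
"ALLOC F(OUTDD) DA('HLQ.SPLIT.PDSE(M0132)') SHR REUSE"
do bunch = 1 to 112                        /* ~1113150 records / 10000 */
   do j = 1 to 10000                       /* stands in for the real   */
      Out_Rec.j = 'reformatted record' j   /* reformatting logic       */
   end
   Out_Rec.0 = 10000
   "EXECIO" Out_Rec.0 "DISKW OUTDD (STEM Out_Rec."  /* DD stays open   */
   drop Out_Rec.                           /* give the storage back    */
end
"EXECIO 0 DISKW OUTDD (FINIS"              /* close once, when done    */
"FREE F(OUTDD)"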
Arun Raj (Moderator)
Posted: Tue Oct 07, 2008 10:57 am

Hi,

Thanks to all for sharing your views. I'll try to work on Pedro's suggestion and get back soon.
Arun Raj (Moderator)
Posted: Tue Oct 07, 2008 7:24 pm

Ped,

I did some analysis and ended up here. The REXX manual says:

Quote:
To append information to a member of a PDS, rewrite the member with the additional records added.
Pedro (Global Moderator)
Posted: Tue Oct 07, 2008 9:18 pm

Quote:
To append information to a member of a PDS, rewrite the member with the additional records added.

That is to append new records to an old member.

But I think what you want is to write some records to a new member, then write some more records.
Arun Raj (Moderator)
Posted: Sun Oct 12, 2008 5:46 pm

Quote:
But I think what you want is to write some records to a new member, then write some more records.

Yes, exactly. I tried modifying the logic and it worked fine when I gave the 'problem' key records alone (1113150 records). Thank you all.