Easytrieve logic to read a file and write to an external table


IBM Mainframe Forums -> CA Products
Vidhya Kalyanasundaram

New User


Joined: 19 Jul 2007
Posts: 30
Location: chennai

PostPosted: Mon Aug 13, 2007 5:32 pm

Is there any way to read data from a file and load it into an external table using Easytrieve?
Devzee

Active Member


Joined: 20 Jan 2007
Posts: 684
Location: Hollywood

PostPosted: Mon Aug 13, 2007 7:52 pm

Yes, you can read a file in Easytrieve.

Quote:
to an external table
Is this DB2, or what is this external table?
Vidhya Kalyanasundaram

New User


Joined: 19 Jul 2007
Posts: 30
Location: chennai

PostPosted: Tue Aug 14, 2007 2:55 pm

See the explanation of external tables at this link:

ibmmainframes.com/viewtopic.php?t=1635&highlight=easytrieve+table+limit

I need to load the input from the file given in the DDNAME into the table defined inside the program as:

FILE table1 TABLE
ARG ...
DESC ...
ENDTABLE

Also, is there any way to get the maximum number of entries in the table into a working-storage variable, which would be coded in the program as:
FILE table1 TABLE(max.no.entries)
ARG ...
DESC ...
ENDTABLE
William Thompson

Global Moderator


Joined: 18 Nov 2006
Posts: 3156
Location: Tucson AZ

PostPosted: Tue Aug 14, 2007 3:42 pm

Vidhya Kalyanasundaram wrote:
I need to load the input from the file given in the DDNAME into the table defined inside the program as:

FILE table1 TABLE
ARG ...
DESC ...
ENDTABLE
table1 is the DD name (the file should be sorted on the search argument).
Quote:
Also, is there any way to get the maximum number of entries in the table into a working-storage variable, which would be coded in the program as:
FILE table1 TABLE(max.no.entries)
ARG ...
DESC ...
ENDTABLE
It doesn't look like it, but you could feed the dataset through the internal sort to get the count (the sort's TO would be the table).
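That suggestion might be sketched roughly as below. This is an untested sketch with illustrative file and field names: the SORT activity's BEFORE procedure counts each input record while SELECTing it into the sorted output (which then serves as the table file), and because W working-storage fields persist across activities, the count is still available afterwards.

```
* Sketch: count records while sorting the table file
FILE INFILE
  IN-KEY    1  3 A
FILE TABFILE
REC-COUNT W 8 N VALUE 0
*
SORT INFILE TO TABFILE USING (IN-KEY) BEFORE COUNT-PROC
*
COUNT-PROC. PROC
  REC-COUNT = REC-COUNT + 1
  SELECT
END-PROC
*
JOB INPUT NULL
  DISPLAY 'RECORDS WRITTEN TO TABLE FILE: ' REC-COUNT
  STOP
```

A later activity could compare REC-COUNT against the coded table limit and display a warning before any SEARCH is attempted.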
noorkh

New User


Joined: 06 Mar 2006
Posts: 76
Location: Chennai

PostPosted: Fri Aug 17, 2007 6:47 pm

Hi Vidhya,

Can you explain more about your requirement? We normally use a search table to read reference data supplied through the JCL and then search it within the program using some other field. Normally this table file works as a reference file rather than something the program stores data into.

Please let us know more about your requirement so that we can understand your problem.
Vidhya Kalyanasundaram

New User


Joined: 19 Jul 2007
Posts: 30
Location: chennai

PostPosted: Mon Aug 20, 2007 3:31 pm

1. When a table is declared in an Easytrieve program as:
FILE EMPLOYEE TABLE (1000)
- It indicates that the table EMPLOYEE is located in a file external to your EZT program, identified by the logical name of the table file given in the JCL, i.e. the DDNAME.
- The literal indicates the maximum number of entries in the table.

2. Easytrieve Plus builds this external table EMPLOYEE dynamically, just prior to its use, by loading it with the data from the file specified in the DDNAME, up to the number of entries (1000) specified in the FILE statement.

3. If the number of records in the EMPLOYEE file specified in the DD statement exceeds 1000, a table overflow occurs when the data is dynamically loaded into the EMPLOYEE table, the following messages are displayed in the spool, and the job ends with MAXCC=16:

*******A008 TOO MANY TABLE ENTRIES - EMPLOYEE
*******A014 PREMATURE TERMINATION DUE TO PREVIOUS ERROR(S)

Is there any way to load the records into the table from the file one by one, keeping a record count that is incremented as each record enters the table? Then, when the count approaches the maximum number of entries, we could display a message asking for the table size to be increased, rather than having the job end abruptly.
Or, when the entries need to be searched with the SEARCH TABLE statement, is moving the file entries to an array variable and then searching the data using IF the only way to resolve this?

Please help with this requirement.
Thanks in advance.
noorkh

New User


Joined: 06 Mar 2006
Posts: 76
Location: Chennai

PostPosted: Mon Aug 20, 2007 5:55 pm

Hi Vidhya,

When you declare it as a table, Easytrieve reads the records itself (whenever you search the table). You cannot stop it from abending in that situation.

You need to specify a maximum limit for your table. But this is not as fast as using an array, so you would be better off with an array.

There is no facility for checking the maximum number of entries in a table.

I hope that clears your doubt.
IQofaGerbil

Active User


Joined: 05 May 2006
Posts: 183
Location: Scotland

PostPosted: Mon Aug 20, 2007 8:32 pm

What is the reason for the 1000 table limit?
Vidhya Kalyanasundaram

New User


Joined: 19 Jul 2007
Posts: 30
Location: chennai

PostPosted: Fri Aug 31, 2007 3:17 pm

Hi,

The file TABLE1 (specified in the DD statement) contains 9000 records to be loaded into the external table TABLE1 defined in the Easytrieve program, but I specified the limit for the external table as follows:

FILE TABLE1 TABLE (6000)
ARG 1 3 A
DESC 4 13 A

IARG W 3 A VALUE 'AAN'
IDESC W 13 A

*
JOB INPUT NULL
DISPLAY IARG
STOP EXECUTE

Since the table limit is exceeded, it should throw an error as follows:
*******A008 TOO MANY TABLE ENTRIES - TABLE1
*******A014 PREMATURE TERMINATION DUE TO PREVIOUS ERROR(S)

---- but this code did not throw an error.

But it throws that error only when we perform a search on the table, as in:

FILE TABLE1 TABLE
ARG 1 3 A
DESC 4 13 A

IARG W 3 A VALUE 'AAN'
IDESC W 13 A

*
JOB INPUT NULL
DISPLAY IARG
SEARCH CORRA WITH IARG GIVING IDESC
STOP EXECUTE


So we can conclude that the table is loaded only after the JOB statement, during the SEARCH operation.

Is there any way to get the count of records in the file before the JOB statement?
Please help me get the count into a variable before the JOB statement.
Vidhya Kalyanasundaram

New User


Joined: 19 Jul 2007
Posts: 30
Location: chennai

PostPosted: Mon Sep 03, 2007 3:46 pm

Hi, sorry -

the second code example, the one in which the error is thrown, should read:

SEARCH TABLE1 WITH IARG GIVING IDESC

and not SEARCH CORRA.
IQofaGerbil

Active User


Joined: 05 May 2006
Posts: 183
Location: Scotland

PostPosted: Mon Sep 03, 2007 8:37 pm

Once more: why a limit?
What happens when that limit is breached?
It looks like you want to know when your coded limit is about to be exceeded so that you can change your code to increase it; is that correct?
How long does this cycle of check and increase continue?
Is there an absolute upper limit, and if so, what is it?
Vidhya Kalyanasundaram

New User


Joined: 19 Jul 2007
Posts: 30
Location: chennai

PostPosted: Tue Sep 04, 2007 10:38 am

Instream and external tables are used in Easytrieve programs because the TABLE parameter of the FILE statement declares the file as the object of a SEARCH statement that accesses tables.

So after the tables are declared with the FILE statement, the SEARCH statement can be included in a JOB activity.

SEARCH statement explanation:
The SEARCH statement performs a search of a table.
SEARCH can be:
1. Coded anywhere in a JOB activity
2. Issued any number of times against any number of tables.

The SEARCH statement has this format:

SEARCH filename WITH field-name-1 GIVING field-name-2

filename - the name of the table as it appears on the FILE statement.

field-name-1 - the name of a field containing the value that is compared to the search argument. It must be the same length and type as the search argument (ARG).

field-name-2 - the name of a field into which the description is placed if field-name-1 matches a search argument. It must be the same length and type as the description (DESC).

Since SEARCH does a binary search (whereas loading the data into an array allows only a sequential search), it is an efficient searching technique.
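To make the format above concrete, a minimal instream-table example might look like this (an untested sketch; the table name, field positions, and values are all illustrative):

```
* Sketch: instream table searched with SEARCH
FILE STATETAB TABLE INSTREAM
ARG 1 2 A
DESC 4 10 A
AZ ARIZONA
CA CALIFORNIA
TX TEXAS
ENDTABLE
*
IN-CODE  W  2 A VALUE 'CA'
OUT-NAME W 10 A
*
JOB INPUT NULL
  SEARCH STATETAB WITH IN-CODE GIVING OUT-NAME
  DISPLAY IN-CODE ' = ' OUT-NAME
  STOP
```

For an external table, the instream entries and ENDTABLE would be replaced by records read from the DDNAME, which must be sorted ascending on the ARG field.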
But the drawback is that we have to specify the limit (the number of records we expect in the file the table is loaded from) in the table declaration itself, since the default is only 256; if the number of records coming from the PS file exceeds the coded limit, a table overflow occurs with MAXCC=16.

Yes, as you said, I am trying to avoid this abend by knowing when the limit is about to be exceeded.

So I need some logic to keep a count.
Please help me with the same.
Thanks in advance,
Vidhya
IQofaGerbil

Active User


Joined: 05 May 2006
Posts: 183
Location: Scotland

PostPosted: Tue Sep 04, 2007 9:35 pm

I thought the 256 limit was for instream tables, not external ones like yours?

Anyhoo,
why not try a different approach?

Even if you do get a solution to this problem by keeping a counter, there may come a point at which the table is too big for Easytrieve to handle, i.e. not enough core storage, giving:

*******A003 INSUFFICIENT CORE STORAGE AVAILABLE

Now, if you think that point will never be reached, why not find the maximum value that can be given for the table size and stick with that, or with something closer to what your analysis of maximum future data suggests?
It took me five minutes to find the maximum value at my shop for table sizes like those in your example (over 300,000).

However, if your data is likely to cause the storage problem sometime in the future, then you should rethink your design: e.g. sort both files and do a two-file match, possibly?

Good luck.
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Tue Sep 04, 2007 9:46 pm

Hello,

Quote:
However if your data is likely to cause the storage problem sometime in the future, then you should rethink your design. eg sort both files and do a two file match possibly


If your table/array is to contain more than just a few "records", you should sort both sets of data and do a two-file match. Scanning a large array/table over and over is a complete waste of machine resources.
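The two-file match suggested above can use Easytrieve's synchronized file processing, sketched roughly as below (untested, with illustrative file and field names; both inputs must already be sorted ascending on their keys):

```
* Sketch: synchronized two-file match instead of a table SEARCH
FILE MASTER
  M-KEY  1  3 A
FILE TRANS
  T-KEY  1  3 A
  T-DESC 4 13 A
*
JOB INPUT (MASTER KEY (M-KEY) TRANS KEY (T-KEY))
  IF MATCHED
    DISPLAY M-KEY ' FOUND: ' T-DESC
  END-IF
```

Because each record pair is read once in key order, there is no in-core table at all, so the size limit and the A008 overflow simply do not arise.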
Vidhya Kalyanasundaram

New User


Joined: 19 Jul 2007
Posts: 30
Location: chennai

PostPosted: Thu Sep 13, 2007 5:06 pm

Thank You !
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Thu Sep 13, 2007 10:31 pm

You're welcome :)

How was your question resolved? If you post your solution, it may help someone with a similar question sometime.