How to close a file in Easytrieve?


rohitsir

New User


Joined: 21 Aug 2007
Posts: 32
Location: USA

PostPosted: Thu Aug 23, 2007 1:27 am

I know Easytrieve does the opening and closing of files automatically, but here is what I want to do.

I am reading a file record by record. Under a particular condition I want to close the file so that when I start reading the file again, it starts from the beginning. How can I achieve this?
William Thompson

Global Moderator


Joined: 18 Nov 2006
Posts: 3156
Location: Tucson AZ

PostPosted: Thu Aug 23, 2007 12:48 pm

You can't......
bijumon

New User


Joined: 14 Aug 2006
Posts: 20
Location: Pune,India

PostPosted: Thu Aug 23, 2007 12:57 pm

Hi,

It can't be done, as William pointed out. You could write a COBOL program instead, or if your input file is VSAM you can use POINT to position the file pointer back at the first record and start reading it again.

Thanks & Regards,
---------------------
Biju
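
A rough Easytrieve Plus sketch of the POINT idea above, assuming the transaction file were a keyed VSAM file. The file and field names here (VSAMTXN, TXN-ACCT, WS-FIRST-KEY) are made up for illustration, and the exact POINT operands should be verified against the Easytrieve Plus reference:

Code:
FILE VSAMTXN VS
  TXN-ACCT    1 10 A
  TXN-DETAIL 11 70 A
WS-FIRST-KEY  W 10 A

JOB INPUT NULL
* read the first record and remember its key
  GET VSAMTXN
  WS-FIRST-KEY = TXN-ACCT
* ... process the file with further GETs ...
* reposition to the saved key and read sequentially from there again
  POINT VSAMTXN EQ WS-FIRST-KEY
  GET VSAMTXN
  STOP
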
stodolas

Active Member


Joined: 13 Jun 2007
Posts: 632
Location: Wisconsin

PostPosted: Thu Aug 23, 2007 6:45 pm

Re-reading from the beginning is also very resource-intensive. You may be better off sorting the file into the order you need so you don't have to restart from the beginning.
dbzTHEdinosauer

Global Moderator


Joined: 20 Oct 2006
Posts: 6966
Location: porcelain throne

PostPosted: Thu Aug 23, 2007 6:48 pm

Load the file into an Easytrieve table.
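
A rough sketch of what that could look like, assuming the 500 sorted account numbers are loaded as an external Easytrieve table. The file, field, and DD names (ACCTTBL, TXNFILE, OUTFILE) are illustrative only, and the table input must be in ascending key order; lcmontanez shows the same SEARCH idea further down the thread:

Code:
FILE ACCTTBL TABLE 500
  ARG   1 10 A
  DESC 11  1 A
FILE TXNFILE
  TXN-ACCT    1 10 A
  TXN-DETAIL 11 70 A
FILE OUTFILE FB(80 3120)
  OUT-REC 1 80 A
WS-DESC W 1 A

JOB INPUT TXNFILE
* binary search of the table on the transaction's account number
  SEARCH ACCTTBL WITH TXN-ACCT GIVING WS-DESC
  IF ACCTTBL
    PUT OUTFILE FROM TXNFILE
  END-IF
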
rohitsir

New User


Joined: 21 Aug 2007
Posts: 32
Location: USA

PostPosted: Thu Aug 23, 2007 10:40 pm

I am already sorting the file to achieve what I want, but it is taking too much time. I think I have to live with it.

Thanks all for your replies.
socker_dad

Active User


Joined: 05 Dec 2006
Posts: 177
Location: Seattle, WA

PostPosted: Thu Aug 23, 2007 11:34 pm

Too much time to sort?

Pray tell that you aren't doing your sorting within Easytrieve.
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Fri Aug 24, 2007 12:54 am

Hello,

Are you trying to "match" 2 files in this manner?

If you describe both input files and what you need to do with them, we may have better-performing alternatives to offer.
rohitsir

New User


Joined: 21 Aug 2007
Posts: 32
Location: USA

PostPosted: Sat Aug 25, 2007 12:29 am

Well, sorting is taking so much time because the file has around 100 million records in it.

Here, in detail, is what I am trying to do.

I have 2 files. File 1 contains sorted account numbers. File 2 is a transaction history file; its structure is like this:

(Acct No)   (Transaction detail)
100         tran 1 detail ...
100         tran 2 detail ...
100         tran 3 detail ...
300         tran 1 detail ...
300         tran 2 detail ...
200         tran 1 detail ...
200         tran 2 detail ...


I have to dump all the records from file 2 for which we have an account number match in file 1.
rohitsir

New User


Joined: 21 Aug 2007
Posts: 32
Location: USA

PostPosted: Sat Aug 25, 2007 12:44 am

File 1 is sorted on account numbers.
Some punctuation was missing in my previous reply.
stodolas

Active Member


Joined: 13 Jun 2007
Posts: 632
Location: Wisconsin

PostPosted: Sat Aug 25, 2007 12:52 am

Dump as in remove or dump as in put to a file?
rohitsir

New User


Joined: 21 Aug 2007
Posts: 32
Location: USA

PostPosted: Sat Aug 25, 2007 12:56 am

Dump as in put them in a new file: FILE 3.
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Sat Aug 25, 2007 1:15 am

Hello,

Quote:
sorting is taking so much time because it has around 100 million records
How long is "so much time"?

How many account#s are there in file1? You could read file1 into an array inside your program and then read file2, searching the array for "hits". The hits could then be written to file3.

If you ensure that file1 is in account# sequence, you could use SEARCH ALL and run more quickly.

If file1 contains too many account#s to use an in-core array, you might make a VSAM file keyed by account# and load the file1 info into it. Once your process runs, the VSAM file could be deleted.
CICS Guy

Senior Member


Joined: 18 Jul 2007
Posts: 2146
Location: At my coffee table

PostPosted: Sat Aug 25, 2007 1:18 am

rohitsir wrote:
Well, sorting is taking so much time because it has around 100 million records in it.
When all you have is a hammer, everything looks like a nail.....
Maybe EZT is not the best tool for this requirement.....
rohitsir

New User


Joined: 21 Aug 2007
Posts: 32
Location: USA

PostPosted: Sat Aug 25, 2007 1:23 am

I have finally implemented this logic:

I made file1 a VSAM file (it has only 500 records, compared to file 2, which has 100 million records).

I read file 2 first and, based on its account number, do a keyed read on file 1. If a match is found, I write that record into file 3.

No need for sorting file 2 in this case.
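
A rough Easytrieve Plus sketch of that final approach. The names here (ACCTVSAM, TXNFILE, TXN-ACCT, FILE3) are made up, and the STATUS/FILE-STATUS test is assumed to be the idiom for checking whether the keyed read found a record, so verify against the Easytrieve Plus reference:

Code:
FILE ACCTVSAM VS
  VS-ACCT 1 10 A
FILE TXNFILE
  TXN-ACCT    1 10 A
  TXN-DETAIL 11 70 A
FILE FILE3 FB(80 3120)
  F3-REC 1 80 A

JOB INPUT TXNFILE
* random read of the keyed VSAM account file by the txn's account number
  READ ACCTVSAM KEY TXN-ACCT STATUS
  IF ACCTVSAM:FILE-STATUS EQ 0
    PUT FILE3 FROM TXNFILE
  END-IF
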
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Sat Aug 25, 2007 1:24 am

Sounds like a plan.

d
stodolas

Active Member


Joined: 13 Jun 2007
Posts: 632
Location: Wisconsin

PostPosted: Sat Aug 25, 2007 1:27 am

Well, a single sort step could have taken care of all of this in one pass: sort the 2 files together and dump the matches to a 3rd file.
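
For reference, the Easytrieve Plus equivalent of that match is synchronized file processing. A rough sketch with assumed field names; note that both input files must already be in ascending key order, which is exactly the sort that was taking so long:

Code:
FILE FILE1
  F1-ACCT 1 10 A
FILE FILE2
  F2-ACCT    1 10 A
  F2-DETAIL 11 70 A
FILE FILE3 FB(80 3120)
  F3-REC 1 80 A

* both input files must be in ascending account-number sequence
JOB INPUT (FILE1 KEY F1-ACCT FILE2 KEY F2-ACCT)
  IF MATCHED
    PUT FILE3 FROM FILE2
  END-IF
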
dick scherrer

Moderator Emeritus


Joined: 23 Nov 2006
Posts: 19244
Location: Inside the Matrix

PostPosted: Sat Aug 25, 2007 1:51 am

The problem being the 100 million records that are not already in sequence...
lcmontanez

New User


Joined: 19 Jun 2007
Posts: 50
Location: Chicago

PostPosted: Sat Aug 25, 2007 1:57 am

FYI, you don't need a VSAM file; use a table for only 500 accounts.

Code:
SEARCH ACCTCODE WITH WS-ACCOUNT-NO +
   GIVING XXXX

IF ACCTCODE
   PUT FILE3 FROM FILE2
END-IF

This should work.