rohitsir
New User
Joined: 21 Aug 2007 Posts: 32 Location: USA
I know Easytrieve opens and closes files automatically, but here is what I want to do.
I am reading a file record by record. Under a particular condition I want to close the file, so that when I start reading it again, it starts from the beginning. How can I achieve this?
William Thompson
Global Moderator
Joined: 18 Nov 2006 Posts: 3156 Location: Tucson AZ
You can't......
bijumon
New User
Joined: 14 Aug 2006 Posts: 20 Location: Pune,India
Hi,
It can't be done, as William pointed out. You could write a COBOL program instead, or, if your input file is VSAM, you can use "POINT" to position the file pointer at the first record and start reading again.
Thanks & Regards,
---------------------
Biju
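Outside Easytrieve, the idea being described is simply repositioning the read pointer. A minimal Python sketch of the concept (illustrative only, not Easytrieve; the records are invented):

```python
import io

def read_twice(f):
    """Read every record, reposition to the start, and read them all
    again -- the moral equivalent of closing and reopening the file,
    or of POINTing a VSAM file back to its first key."""
    first_pass = [line.rstrip("\n") for line in f]
    f.seek(0)                      # back to the first record
    second_pass = [line.rstrip("\n") for line in f]
    return first_pass, second_pass

# Stand-in for a real dataset; these records are made up.
sample = io.StringIO("100 tran 1\n100 tran 2\n300 tran 1\n")
```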
stodolas
Active Member
Joined: 13 Jun 2007 Posts: 631 Location: Wisconsin
Re-reading from the beginning is also very resource intensive. You may be better off sorting the file into the order you need so you don't have to restart from the beginning.
dbzTHEdinosauer
Global Moderator
Joined: 20 Oct 2006 Posts: 6966 Location: porcelain throne
Load the file into an Easytrieve table.
rohitsir
New User
Joined: 21 Aug 2007 Posts: 32 Location: USA
I am already sorting the file to achieve what I want, but it's taking too much time. I think I have to live with it.
Thanks all for your replies.
socker_dad
Active User
Joined: 05 Dec 2006 Posts: 177 Location: Seattle, WA
Too much time to sort?
Pray tell you aren't doing your sorting within Easytrieve.
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19243 Location: Inside the Matrix
Hello,
Are you trying to "match" 2 files in this manner?
If you describe both input files and what you need to do with them, we may have better-performing alternatives to offer.
rohitsir
New User
Joined: 21 Aug 2007 Posts: 32 Location: USA
Well, sorting is taking so much time because the file has around 100 million records in it.
Here is, in detail, what I am trying to do.
I have 2 files. File 1 is sorted account numbers. File 2 is a transaction history file. Its structure is like this:

(Acct No)   (Transaction detail)
--------------------------------------
100         tran 1 detail.........
100         tran 2 detail.........
100         tran 3 detail.........
300         tran 1 detail.........
300         tran 2 detail.........
200         tran 1 detail.........
200         tran 2 detail.........

I have to dump all the records from file 2 for which there is an account number match in file 1.
rohitsir
New User
Joined: 21 Aug 2007 Posts: 32 Location: USA
File 1 is sorted on account numbers.
Some punctuation was missing in my previous reply.
stodolas
Active Member
Joined: 13 Jun 2007 Posts: 631 Location: Wisconsin
Dump as in remove, or dump as in put to a file?
rohitsir
New User
Joined: 21 Aug 2007 Posts: 32 Location: USA
"Dump" as in put them into a new file: FILE 3.
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19243 Location: Inside the Matrix
Hello,
Quote:
sorting is taking so much time because it has around 100 million records
How long is "so much time"?
How many account#s are there in file1? You could read file1 into an array inside your program and then read file2, searching the array for "hits". The hits could then be written to file3.
If you ensure that file1 is in account# sequence, you could use SEARCH ALL and run more quickly.
If file1 contains too many account#s to use an in-core array, you might make a VSAM file keyed by account# and load the file1 info into it. Once your process runs, the VSAM file could be deleted.
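The in-core idea above, sketched in Python rather than COBOL for illustration (all names invented): load the file1 keys into a sorted table and binary-search it for each file2 record, the way SEARCH ALL would.

```python
from bisect import bisect_left

def match_records(file1_accounts, file2_records):
    """file1_accounts: account numbers from file1, in any order.
    file2_records: (acct_no, detail) tuples from file2.
    Returns the file2 records whose account appears in file1."""
    table = sorted(file1_accounts)        # SEARCH ALL needs a sorted table
    def hit(acct):
        i = bisect_left(table, acct)      # binary search, like SEARCH ALL
        return i < len(table) and table[i] == acct
    return [rec for rec in file2_records if hit(rec[0])]
```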
CICS Guy
Senior Member
Joined: 18 Jul 2007 Posts: 2146 Location: At my coffee table
rohitsir wrote:
Well, sorting is taking so much time because it has around 100 million records in it.
When all you have is a hammer, everything looks like a nail.....
Maybe EZT is not the best tool for this requirement.....
rohitsir
New User
Joined: 21 Aug 2007 Posts: 32 Location: USA
I have implemented this logic finally.
I made file 1 a VSAM file (it has only 500 records, compared to file 2, which has 100 million records).
I read file 2 first and, based on its account number, do a keyed read on file 1. If a match is found, I write that record to file 3.
No need to sort file 2 in this case.
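That flow, sketched in Python for illustration (names invented): the 500 file1 accounts become a keyed lookup structure, standing in for the VSAM file, and file 2 streams through once with no sort.

```python
def filter_by_account(file1_accounts, file2_records):
    """file1_accounts: the ~500 account numbers (the keyed side).
    file2_records: iterable of (acct_no, detail) tuples, unsorted.
    Yields the records destined for file 3."""
    keyed = set(file1_accounts)        # stands in for the keyed VSAM file
    for acct, detail in file2_records:
        if acct in keyed:              # the keyed read on file 1
            yield (acct, detail)       # the write to file 3
```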
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19243 Location: Inside the Matrix
Sounds like a plan
d
stodolas
Active Member
Joined: 13 Jun 2007 Posts: 631 Location: Wisconsin
A single sort step could have taken care of this all in one go: sort the 2 files together and dump matches to a 3rd file.
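What is being described here is a sort-merge join. In practice it would be a sort-utility step, but the matching pass itself can be sketched in Python (inputs invented; both must already be sorted by account number):

```python
def merge_join(file1_accounts, file2_records):
    """file1_accounts: sorted account numbers from file1.
    file2_records: (acct_no, detail) tuples sorted by acct_no.
    Returns the matched file2 records in one sequential pass."""
    out, i = [], 0
    for acct, detail in file2_records:
        while i < len(file1_accounts) and file1_accounts[i] < acct:
            i += 1                     # advance the file1 side of the join
        if i < len(file1_accounts) and file1_accounts[i] == acct:
            out.append((acct, detail)) # account numbers match: keep it
    return out
```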
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19243 Location: Inside the Matrix
The problem being the 100 million records that are not already in sequence. . .
lcmontanez
New User
Joined: 19 Jun 2007 Posts: 50 Location: Chicago
FYI, you don't need a VSAM file; use a table for only 500 accounts.

SEARCH ACCTCODE WITH WS-ACCOUNT-NO +
       GIVING XXXX
IF ACCTCODE
   PUT FILE3 FROM FILE2
END-IF

This should work.