zack786
New User
Joined: 22 May 2007 Posts: 3 Location: New York
Need help. I am reading a SYSIN PDS member and loading a fixed-length (80-byte) table with 100 entries. But I need to check the SYSIN member to make sure there are no duplicate entries before I load it into the table in memory.
What is the best way to accomplish DUP CHECKING in COBOL? Thanks.
William Thompson
Global Moderator
Joined: 18 Nov 2006 Posts: 3156 Location: Tucson AZ
Is the input sorted? Without it being sorted, you will need a 100-entry internal table to check against, but since you are building a table anyway, just scan it prior to inserting each entry.....
zack786
New User
Joined: 22 May 2007 Posts: 3 Location: New York
No, the input is not sorted. How do I scan prior to loading? With a BINARY SEARCH?
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
Read the external file and, as each value is read, look in the table to see if it is already there. If it is, skip that entry (assuming you're ignoring dups); otherwise, load the value into the "next" available position in the table.
I'd suggest making sure the external file is sorted before you build and use the table.
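The load loop described above might be sketched like this in COBOL. This is only an illustration: every name (SYSIN-FILE, SYSIN-REC, WS-ENTRY-CNT, and so on) is made up, and a serial SEARCH over the entries loaded so far does the duplicate check, since the input is unsorted.

```cobol
       WORKING-STORAGE SECTION.
       01  WS-ENTRY-CNT              PIC 9(3) VALUE 0.
       01  WS-TABLE.
           05  TBL-ENTRY             PIC X(80)
               OCCURS 0 TO 100 TIMES
               DEPENDING ON WS-ENTRY-CNT
               INDEXED BY TBL-IDX.
      *    ...
      *    After a priming READ of SYSIN-FILE:
           PERFORM UNTIL SYSIN-EOF
               SET TBL-IDX TO 1
               SEARCH TBL-ENTRY
                   AT END
      *                Not in the table yet: load the next slot
      *                (overflow past 100 entries not handled here)
                       ADD 1 TO WS-ENTRY-CNT
                       MOVE SYSIN-REC TO TBL-ENTRY (WS-ENTRY-CNT)
                   WHEN TBL-ENTRY (TBL-IDX) = SYSIN-REC
      *                Duplicate: skip this input record
                       CONTINUE
               END-SEARCH
               READ SYSIN-FILE INTO SYSIN-REC
                   AT END SET SYSIN-EOF TO TRUE
               END-READ
           END-PERFORM
```

Because the OCCURS uses DEPENDING ON WS-ENTRY-CNT, the serial SEARCH only examines entries that have actually been loaded.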
zack786
New User
Joined: 22 May 2007 Posts: 3 Location: New York
Thanks for your reply. Sorry if I did not make myself clear. The table that I load into memory stays there for as long as the job is up and running; the allocated memory block is cleared when the job comes down. Each time the job is submitted, I do the load again (read SYSIN, check for dups, load into memory).
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
You're welcome.
Yes, I believe I understood that the external file would be read, loaded into the internal table, and be available for the duration of the run.
What I'm not sure of is whether you have what you need to proceed, or if any questions remain.
Phrzby Phil
Senior Member
Joined: 31 Oct 2006 Posts: 1042 Location: Richmond, Virginia
Why not sort first and let the sort eliminate key dupes using "SUM FIELDS=NONE"?
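For example, a stand-alone DFSORT step ahead of the COBOL program could do both the ordering and the de-duping. This is only a sketch: the dataset names are placeholders, and the key position/length in SORT FIELDS (1,10 here) is an assumption, since the actual key layout of the 80-byte records wasn't given.

```jcl
//DEDUP   EXEC PGM=SORT
//SYSOUT  DD  SYSOUT=*
//SORTIN  DD  DSN=YOUR.SYSIN.PDS(MEMBER),DISP=SHR
//SORTOUT DD  DSN=YOUR.DEDUPED.SEQ,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(1,1),RLSE),
//            DCB=(RECFM=FB,LRECL=80)
//SYSIN   DD  *
  SORT FIELDS=(1,10,CH,A)
  SUM FIELDS=NONE
/*
```

SUM FIELDS=NONE keeps the first record of each key and discards the rest, so the COBOL program can then load the table with a simple read loop.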
p_gandhi
New User
Joined: 20 Apr 2007 Posts: 14 Location: TORONTO,ONTARIO,CANADA
I had the same kind of project, where I had to eliminate duplicates before loading a DB2 table. What I did was as follows:
1) Sort the file by its keys.
2) Read the file; in the main process, when the keys are not equal to the previous keys, write the record to the output file, then read the next record.
This way you write a record only when its keys differ from the previous record's; in either case you read the next record and move the current key to working storage for the next comparison.
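In COBOL terms, that previous-key comparison might be sketched like this. The record, key, and flag names are made up for illustration; it assumes the file is already sorted by IN-KEY (and that no real key is LOW-VALUES, which primes the first write).

```cobol
      *    Sketch of the sorted-file approach: keep the previous
      *    key in WORKING-STORAGE and write a record only when
      *    its key changes.
           MOVE LOW-VALUES TO WS-PREV-KEY
           READ IN-FILE
               AT END SET IN-EOF TO TRUE
           END-READ
           PERFORM UNTIL IN-EOF
               IF IN-KEY NOT = WS-PREV-KEY
                   WRITE OUT-RECORD FROM IN-RECORD
                   MOVE IN-KEY TO WS-PREV-KEY
               END-IF
               READ IN-FILE
                   AT END SET IN-EOF TO TRUE
               END-READ
           END-PERFORM
```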
Raphael Bacay
New User
Joined: 04 May 2007 Posts: 58 Location: Manila, Philippines
That idea by Phrzby Phil looks like an efficient solution to your problem.