G1NXU6T
New User
Joined: 22 Jun 2009 Posts: 11 Location: India
Hi,
I am using a KSDS file, and I have an alternate index defined on it with the NONUNIQUEKEY option, so my alternate-index key field has duplicates.
My requirement is to read this VSAM file sequentially and, if there are any duplicate records in the file, delete them.
I know it can be handled by JCL, but I have to do it through a COBOL program only. Is this possible?
Can someone please help me with this?
Thanks in advance.
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello and welcome to the forum,
Why does someone believe the file needs to be read "sequentially"?
If there are 3 records with the same duplicate key, are all 3 of them to be deleted?
Quote:
I know it can be handled by JCL,
No, it cannot. All JCL does is execute programs.
Yes, once what is needed is properly defined, you can process the file using COBOL.
G1NXU6T
New User
Joined: 22 Jun 2009 Posts: 11 Location: India
Apologies Dick, by JCL I meant the SORT utility.
If there are three records, then we need to delete two of them.
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
Quote:
Apologies Dick, by JCL I meant the SORT utility.
Not to worry.
As long as people understand this. Unfortunately, we have many who believe JCL alone can do "things". . .
I still don't understand the sequential "requirement". What does this provide?
How many records are in the file?
Given that one of the duplicates should be kept, and given that the duplicates have different unique keys, which unique key should be kept? If all duplicates were to be deleted, this would not matter.
G1NXU6T
New User
Joined: 22 Jun 2009 Posts: 11 Location: India
By sequential I mean reading the file from the top.
It is not mandatory that I read it sequentially; it can be read in any mode.
At present there are 10K records in the file.
(Unique key value) (Alternate key value)
1 .......................... 3
2 .......................... 3
3 .......................... 3
If this is how the data is in the file, then I have to preserve the record with unique key value '1'. Records with unique key values 2 and 3 should be deleted.
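[Editor's note: to illustrate the question being asked, here is a minimal COBOL sketch of one way this could be done. All names, the DD name, and the record/key layout are assumptions, not taken from the thread. It reads the base cluster in alternate-key order (START on the alternate key, then READ NEXT) and deletes every record after the first in each group of equal alternate keys. DELETE identifies the record by the prime key just read into the record area. As the moderators point out below the design itself is questionable, so treat this strictly as a feasibility sketch.]

```cobol
       IDENTIFICATION DIVISION.
       PROGRAM-ID. DELDUPS.
      * Hypothetical sketch: file name, DD name CUSTKSDS, and the
      * 8-byte key layouts are assumptions for illustration only.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT CUST-FILE ASSIGN TO CUSTKSDS
               ORGANIZATION IS INDEXED
               ACCESS MODE IS DYNAMIC
               RECORD KEY IS CUST-UNIQUE-KEY
               ALTERNATE RECORD KEY IS CUST-ALT-KEY
                   WITH DUPLICATES
               FILE STATUS IS WS-STAT.
       DATA DIVISION.
       FILE SECTION.
       FD  CUST-FILE.
       01  CUST-REC.
           05  CUST-UNIQUE-KEY   PIC X(08).
           05  CUST-ALT-KEY      PIC X(08).
           05  CUST-DATA         PIC X(64).
       WORKING-STORAGE SECTION.
       01  WS-STAT               PIC XX.
       01  WS-EOF                PIC X     VALUE 'N'.
       01  WS-PREV-ALT-KEY      PIC X(08) VALUE LOW-VALUES.
       PROCEDURE DIVISION.
           OPEN I-O CUST-FILE
      *    Make the alternate key the key of reference, so that
      *    READ NEXT returns records in alternate-key order and
      *    duplicates come back as consecutive records.
           MOVE LOW-VALUES TO CUST-ALT-KEY
           START CUST-FILE KEY IS >= CUST-ALT-KEY
           PERFORM UNTIL WS-EOF = 'Y'
               READ CUST-FILE NEXT
                   AT END MOVE 'Y' TO WS-EOF
                   NOT AT END
                       IF CUST-ALT-KEY = WS-PREV-ALT-KEY
      *                    Second or later record with this
      *                    alternate key: delete it.
                           DELETE CUST-FILE
                       ELSE
                           MOVE CUST-ALT-KEY TO WS-PREV-ALT-KEY
                       END-IF
               END-READ
           END-PERFORM
           CLOSE CUST-FILE
           GOBACK.
```

Within a duplicate group, VSAM returns records in the order they were added to the file, which is not guaranteed to be prime-key order, so "keep unique key 1" would need extra logic if arrival order differs.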
dbzTHEdinosauer
Global Moderator
Joined: 20 Oct 2006 Posts: 6966 Location: porcelain throne
PERSONAL OPINION ON
Here we go again, requirements.
G1NXU6T,
Is this stupidity homework, or is this a real work situation?
If this is homework, you should own up to it.
If this really is a work situation:
Is the same fool who designed this alternate index the one telling you to do this with a COBOL program?
This can be done very, very easily with SORT;
I would venture to say it would be much faster using SORT.
Has any thought been given to the fact that, unless the design of the alternate key is changed, this duplicate-removal process will need to be run after every incidence of a new record being added to the file by another process?
PERSONAL OPINION OFF
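[Editor's note: for reference, the SORT approach mentioned above usually means unloading the KSDS with IDCAMS REPRO, sorting on the alternate-key field with duplicates removed, and reloading. The field position and length below are assumptions; a real job would use the actual record layout. A minimal DFSORT control-statement sketch:]

```
* Assumes the alternate key is 8 bytes starting in position 9
* of the unloaded record (hypothetical layout).
* EQUALS keeps equal-keyed records in input order; since REPRO
* unloads the KSDS in prime-key order, SUM FIELDS=NONE then
* keeps the record with the lowest unique key in each group.
  SORT FIELDS=(9,8,CH,A),EQUALS
  SUM FIELDS=NONE
```

The surrounding job would be: REPRO the cluster to a flat file, run this SORT step, DELETE/DEFINE the cluster (or REPRO with REUSE), and REPRO the sorted output back in.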
expat
Global Moderator
Joined: 14 Mar 2007 Posts: 8797 Location: Welsh Wales
But you cannot have duplicate records within a KSDS.
G1NXU6T
New User
Joined: 22 Jun 2009 Posts: 11 Location: India
dbzTHEdinosauer,
I am analysing a design in my project, and someone has proposed this solution (using a COBOL program to remove the duplicates), which sounds weird to me.
I know this can be done very easily using SORT; I already mentioned that in my previous post. But I have no clue whether, and how, we can achieve it through a COBOL program.
Before taking any action, I just want to learn whether this solution is possible at all (though it is not a good solution).
I did some analysis but was not able to find a solution, so I posted my query on this forum.
expat
Global Moderator
Joined: 14 Mar 2007 Posts: 8797 Location: Welsh Wales
Huh. Maybe the OP should state clearly which file it is that has to be read. Is he talking about duplicates in the KSDS or in the AIX?
But even so, there will not be duplicate records in the AIX either.
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
Hopefully, someone "there" completely understands what is actually needed. At this point, I don't believe I do.
If there is more to do in this process than merely discard duplicates, I believe writing code is more appropriate.
Quote:
Code:
(Unique key value) (Alternate key value)
1 .......................... 3
2 .......................... 3
3 .......................... 3
If this is how the data is in the file, then I have to preserve the record with unique key value '1'. Records with unique key values 2 and 3 should be deleted.
How was "1" chosen to be kept? What about all of the data in the other records? What if the duplicates were not in consecutive unique keys? How is it all right to delete these with no consideration for the other data in those records? My confusion/concern is that if these are simply deleted, it may cause a problem with the data that will be difficult to identify/correct at some later time. . .