raveendra_ibm Currently Banned New User
Joined: 07 Jan 2006 Posts: 31
Hi All,
We have the following requirement :
We have a file with the following layout
Client Number 3 digit
Product ID 5 digit
Each Product can be owned by more than one Client, and
each Client can have more than one Product.
Sample Data : (File A)
Code:
ClientNo ProductID
231      56780
232      56780
233      56781
233      56782
234      56783
We read another file (File B) that contains a Client Number. For each one I need to get
the products owned by that client from File A, and then get all the owners of those products.
Here is the example:
File A contains entries as above.
File B contains -
Client Number
232
Now I read File B and get products from File A
Client Number 232 ---> Product 56780
Now I need to get the owners of the Product 56780
Product 56780 ----> Client Number 231 and 232
Here is what I could think of ...
Create a KSDS VSAM file with Client Number and create an alternate index on Product ID.
Could you please let me know if the same could be achieved in any other way.
Thanks,
Raveendra.
William Thompson
Global Moderator
Joined: 18 Nov 2006 Posts: 3156 Location: Tucson AZ
A lot of the design would depend on sizes: the number of File B requests, the number of File A owner/product entries, the number of products a client might own, and the number of times you might need to do this lookup.
Pete Eagleton
New User
Joined: 14 Feb 2007 Posts: 11 Location: Chicago
Read File A and just load it into an internal table. Then you can read File B and search the table several different ways: find all the products the client (from File B) owns, store the found products, then re-search the table for all the clients that have those products.
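The internal-table idea above can be sketched quickly (in Python for brevity; the actual program would be COBOL with working-storage tables and SEARCH). The data and field names come from the sample File A in the first post.

```python
# Index File A both ways in memory, then answer File B requests.
from collections import defaultdict

file_a = [
    ("231", "56780"),
    ("232", "56780"),
    ("233", "56781"),
    ("233", "56782"),
    ("234", "56783"),
]
file_b = ["232"]

products_by_client = defaultdict(set)
clients_by_product = defaultdict(set)
for client, product in file_a:
    products_by_client[client].add(product)
    clients_by_product[product].add(client)

# For each File B client: the products owned, then all owners of those products.
for client in file_b:
    for product in sorted(products_by_client[client]):
        owners = sorted(clients_by_product[product])
        print(client, "->", product, "->", owners)  # 232 -> 56780 -> ['231', '232']
```

With two in-memory indexes, both lookup directions are a single dictionary access rather than a re-search of the whole table.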
raveendra_ibm Currently Banned New User
Joined: 07 Jan 2006 Posts: 31
Thank you William and Pete for your responses...
Here are a few more details of the scenario.
The fields are S9(9) COMP-3 and S9(11) COMP-3 respectively. We have around 50,000 entries in File A, and the number of File B requests could be around 30,000; we need to do this lookup at least twice for each entry in File B.
Given these statistics I don't think an internal table could be of any help to us... Could you please let me know whether creating a VSAM file with the Client Number as key and the Product ID as alternate key is the only appropriate solution for this?
Thanks,
Raveendra.
Pete Eagleton
New User
Joined: 14 Feb 2007 Posts: 11 Location: Chicago
A KSDS is not the only solution (though it may be the better one). If you can afford the system overhead, you could just read File A sequentially for each lookup; it would mean reading the entire file many times, but it is another way to do it. Best of all, if available, is to put File A into a DB2 structure. SQL can be a wonderful thing.
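The SQL version of this lookup is a self-join of the owner/product table with itself. A minimal sketch, using SQLite in Python as a stand-in for DB2 (the table and column names are illustrative, not from any actual shop schema):

```python
# Self-join: for a File B client, find every product it owns and every
# co-owner of those products, in one query.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE file_a (client TEXT, product TEXT)")
con.executemany(
    "INSERT INTO file_a VALUES (?, ?)",
    [("231", "56780"), ("232", "56780"), ("233", "56781"),
     ("233", "56782"), ("234", "56783")],
)

rows = con.execute("""
    SELECT b.product, b.client          -- each owner of a shared product
    FROM file_a AS a
    JOIN file_a AS b ON b.product = a.product
    WHERE a.client = ?                  -- the File B client
    ORDER BY b.product, b.client
""", ("232",)).fetchall()
print(rows)  # [('56780', '231'), ('56780', '232')]
```

In DB2 the same SELECT would run against the real table (ideally with indexes on both columns), and the two-step "products for a client, then owners of those products" collapses into one statement.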
William Thompson
Global Moderator
Joined: 18 Nov 2006 Posts: 3156 Location: Tucson AZ
raveendra_ibm wrote:
Given these statistics I don't think an internal table could be of any help to us...
I agree; a VSAM KSDS with an AIX is a good choice.
You need to look things up both ways: owners to products, and products to owners. A KSDS requires unique keys, so you have to decide how to structure the records.
My first guess would be a KSDS record of three fields: owner, product, owner (it could equally be product, owner, product if the ratios warranted it). Put the primary key on the first two fields and the alternate key on the last two fields, and it should work.
Read all products for an owner, then read all owners for a product.
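The three-field record can be simulated to show why both keys end up unique and both directions become a simple keyed browse. A Python sketch standing in for the KSDS and its AIX (a generic-key READ on a key prefix is modeled with a `startswith` scan):

```python
# KSDS record: owner(3) + product(5) + owner(3).
# Primary key  = bytes 1-8  (owner+product)  -> unique per pair.
# Alternate key = bytes 4-11 (product+owner) -> also unique per pair.
records = [("231", "56780"), ("232", "56780"), ("233", "56781"),
           ("233", "56782"), ("234", "56783")]

primary = sorted((owner + product, (owner, product)) for owner, product in records)
alternate = sorted((product + owner, (owner, product)) for owner, product in records)

def products_for(owner):
    # Generic-key browse on the primary index: all keys starting with owner.
    return [rec[1] for key, rec in primary if key.startswith(owner)]

def owners_of(product):
    # Generic-key browse on the alternate index.
    return [rec[0] for key, rec in alternate if key.startswith(product)]

print(products_for("232"))  # ['56780']
print(owners_of("56780"))   # ['231', '232']
```

Because the client number is fixed at 3 digits and the product id at 5, a key prefix can never match across field boundaries, so the generic read is unambiguous.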
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
If your site uses DB2, you could create a temporary table and do what you want. This would be similar to your VSAM approach, but would be done within the database environment.
If you want an alternative that uses neither VSAM nor a database, you can also do what you want with two passes of the original file. Make one pass with File A sorted by client and product, matching against File B and selecting all of the File A records for the clients in File B; this creates a new File C containing the product numbers for all of the matching clients.
Sort File C by product id, discarding duplicates, and sort File A by product id; then match the two, giving you all of the clients who have each product id.
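The two passes can be sketched as follows (Python for brevity; on the mainframe each pass would be a SORT step plus a small match-merge program):

```python
# Pass 1: File A matched against File B -> File C (products of wanted clients).
# Pass 2: File C matched against File A by product -> all owners per product.
file_a = [("231", "56780"), ("232", "56780"), ("233", "56781"),
          ("233", "56782"), ("234", "56783")]
file_b = ["232"]

wanted_clients = set(file_b)

# Pass 1: select File A records for the File B clients, keep product ids only.
file_c = sorted({product for client, product in file_a if client in wanted_clients})

# Pass 2: for each unique product in File C, collect every owning client.
owners = {p: sorted(c for c, prod in file_a if prod == p) for p in file_c}
print(owners)  # {'56780': ['231', '232']}
```

With both inputs pre-sorted on the match key, each pass is a single sequential read of each file, which is the point of the approach.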
TG Murphy
Active User
Joined: 23 Mar 2007 Posts: 148 Location: Ottawa Canada
Here is another approach to consider.
1. Sort the file by Product ID.
56780 231
56780 232
56780 233 (I added this record myself to better illustrate step 2.)
56781 233
56782 233
56782 231 (I added this one too.)
56783 234
2. Create a COBOL program that loads all records for a given product into a table. For the simple data shown above, it would first load these records...
56780 231
56780 232
56780 233
Note 1: In order to do this the program must use standard "control break" processing - sometimes called "set" processing.
Note 2: Assumption here that the data for the biggest product would fit into a table. If not, then this approach won't work.
The program would then use this table to write out the following records:
56780 231 232
56780 231 233
56780 232 231
56780 232 233
56780 233 231
56780 233 232
Alternatively, if volumetrics allow you could write the above records to a variable length file as follows...
56780 231 232 233
56780 232 231 233
56780 233 231 232
However, let's stick with the first suggestion.
You can see that we've cross-indexed the data.
Then the COBOL proceeds to do the same with the remaining records. When the COBOL program terminates, it would have written out the following records:
Prod Cus1 Cus2
56780 231 232
56780 231 233
56780 232 231
56780 232 233
56780 233 231
56780 233 232
56781 233
56782 231 233
56782 233 231
56783 234
3. Sort the new file by Cus1.
Prod Cus1 Cus2
56780 231 232
56780 231 233
56782 231 233
56780 232 231
56780 232 233
56780 233 231
56780 233 232
56781 233
56782 233 231
56783 234
Now you need another program to process this new file along with File B.
I would hazard a guess that this approach will out-perform the KSDS approach, perhaps significantly. However, the work involved is significantly more substantial as well. Under most scenarios the KSDS approach is the best solution, because it is far less costly to build and maintain than the beast I just described. However, if performance is your primary aim, then maybe....
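Step 2's control-break pass can be sketched like this (Python; the post describes a COBOL program, so this is only a model of the logic). For each product group it emits one record per ordered pair of distinct owners; single-owner products keep a record with an empty Cus2.

```python
# Cross-index a file sorted by product id: for each product, emit
# (product, owner, co-owner) for every ordered pair of distinct owners.
from itertools import groupby

# The file after step 1's sort, including TG Murphy's two added records.
sorted_file = [("56780", "231"), ("56780", "232"), ("56780", "233"),
               ("56781", "233"), ("56782", "231"), ("56782", "233"),
               ("56783", "234")]

out = []
for product, group in groupby(sorted_file, key=lambda rec: rec[0]):
    owners = [client for _, client in group]   # one "control break" per product
    if len(owners) == 1:
        out.append((product, owners[0], None))  # sole owner, no Cus2
    else:
        for c1 in owners:
            for c2 in owners:
                if c1 != c2:
                    out.append((product, c1, c2))

for rec in out:
    print(rec)
```

The output matches the cross-indexed listing above: six pairs for 56780, two for 56782, and single records for 56781 and 56783; sorting `out` by Cus1 then gives the step 3 file.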
raveendra_ibm Currently Banned New User
Joined: 07 Jan 2006 Posts: 31
Thank you Pete, William, Dick and Murphy for all your suggestions.
Pete, it's not feasible for our shop to go for DB2, so we are zeroing in on the KSDS approach.
Murphy, thanks indeed for the detailed explanation.
So we are finally going for a KSDS with an AIX.
Thank you all.
Regards,
Raveendra.
dick scherrer
Moderator Emeritus
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
You're welcome - good luck