I need your suggestions and help with the below. Thanks in advance.
File structure
1. The first 4 bytes are the key field.
2. The 5th byte is an indicator field; possible values are X, N and Y.
Requirement
I have to merge two files, File1 and File2, to create File3. File3 should have all records from both File1 and File2 except the first record, which has key value '0000'. A new record with key value '0000' has to be inserted into File3 carrying the total number of records in File3, along with the number of records with indicator X, N and Y. Please note File1 and File2 may contain 400,000 records. The part where I need assistance is building the first record in File3. :-)
Joined: 18 Nov 2006 Posts: 3156 Location: Tucson AZ
I think you can create a trailer record with the totals, but you would need to either re-sort or make another pass to get it back to the front. The Smart DFSORT Tricks paper has an example titled "Display the number of input or output records" which could give you some pointers.
Hi William,
Thanks. Either a header or a trailer record is fine. Please suggest how to get the record in the format below, e.g. a count of records with indicator X.
File3
9999 TOTAL=8 X=3 N=1 Y=4
Joined: 18 Nov 2006 Posts: 3156 Location: Tucson AZ
I'd guess that, like the IFTHEN for the 'SUI', for each type of record you want to sum you'd OVERLAY a unique counter with the 001. With 3 unique keys, that would be 3 counters and 3 TOTs.
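That idea can be sketched with DFSORT control statements. This is an untested sketch: the 80-byte FB record length and the use of positions 81-83 as scratch counter fields are assumptions, not anything from the original job.

```
* Untested sketch: set a one-byte counter per indicator type,
* then TOT each counter in the trailer.
* Assumes 80-byte FB input; 81-83 are assumed scratch positions.
  INREC IFTHEN=(WHEN=INIT,OVERLAY=(81:C'000')),
        IFTHEN=(WHEN=(5,1,CH,EQ,C'X'),OVERLAY=(81:C'1')),
        IFTHEN=(WHEN=(5,1,CH,EQ,C'N'),OVERLAY=(82:C'1')),
        IFTHEN=(WHEN=(5,1,CH,EQ,C'Y'),OVERLAY=(83:C'1'))
  OUTFIL FNAMES=SORTOUT,
    OMIT=(1,4,CH,EQ,C'0000'),
    BUILD=(1,80),
    TRAILER1=(C'9999 TOTAL=',COUNT=(M11,LENGTH=1),
              C' X=',TOT=(81,1,ZD,M11,LENGTH=1),
              C' N=',TOT=(82,1,ZD,M11,LENGTH=1),
              C' Y=',TOT=(83,1,ZD,M11,LENGTH=1))
```

The LENGTH=1 edit widths only mirror the one-digit sample output; for files approaching 400,000 records they would need to be widened (e.g. LENGTH=6 or more) to avoid truncation.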
The input files File1 and File2 are VSAM KSDS files. When I concatenate these two datasets in SORTIN, I get a VSAM open error (168). Please suggest how to resolve this error. File3 is also a VSAM KSDS.
Joined: 15 Feb 2005 Posts: 7129 Location: San Jose, CA
Gee, that's kind of an important fact that you left out.
The system does not allow concatenation of VSAM files. KSDSs have a key of their own and can be treated as either fixed length or variable length depending on what the data looks like. Not knowing anything about your KSDSs, I can't really tell you how to change the job to do what you want. I'd need to know whether the records are fixed or variable length, how long they are, where the key is, and so on.
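Since VSAM data sets can't be concatenated on a single DD, one common way around the open error is to give each cluster its own SORTINnn DD in a MERGE application, which DFSORT accepts for VSAM input. A minimal untested sketch, with placeholder dataset names and assuming the KSDS records should be treated as fixed length:

```
//MERGE1   EXEC PGM=ICEMAN
//SYSOUT   DD SYSOUT=*
//SORTIN01 DD DSN=MY.VSAM.FILE1,DISP=SHR
//SORTIN02 DD DSN=MY.VSAM.FILE2,DISP=SHR
//SORTOUT  DD DSN=MY.VSAM.FILE3,DISP=SHR
//SYSIN    DD *
* Treat the KSDS records as fixed length.
  RECORD TYPE=F
* Inputs are already in key sequence, so a merge on the key suffices.
  MERGE FIELDS=(1,4,CH,A)
/*
```

This relies on each input already being in key order, which a KSDS read in key sequence is.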
Sorry for missing that vital info. Detailed info follows; please let me know if anything is missing.
There are actually 10 files (file0 - file9), each with 40,000 records. All files are VSAM KSDS; the key is a 10-byte ZD field, the records are fixed length, and the record length is 3000 bytes.
The 11th byte is an indicator field; possible values are I, N and Y.
Key range for file0 is 0000000001 - 0000040000
Key range for file1 is 0000040001 - 0000080000
Key range for file2 is 0000080001 - 0000120000
Key range for file3 is 0000120001 - 0000160000
Key range for file4 is 0000160001 - 0000200000
Key range for file5 is 0000200001 - 0000240000
Key range for file6 is 0000240001 - 0000280000
Key range for file7 is 0000280001 - 0000320000
Key range for file8 is 0000320001 - 0000360000
Key range for file9 is 0000360001 - 0000400000
Unlike the previous example I gave, the key value for the header record in each file (file0 - file9) is low values.
The header record in each file (file0 - file9) should not appear in the merged file.
The trailer record should have a key value of high values.
Trailer record structure is
" KEY IS HIGHVALUES" TOTAL=0000400000 I=0000200000 Y=0000100000 N=0000100000
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hi K-O-M,
I guess huge is in the eye of the beholder. My medium-size files are 1-3 million records that are 19,470 bytes long. A couple of our "bigger" files are a bit shorter in length but contain over 100 million records. We do have to do some juggling to get around space issues. When the smoke clears, this will be a 10-12 terabyte warehouse...
My guess is that your system could handle the temporary space as you'd give it back as soon as the process was completed.
Hopefully, you will be able to work with the VSAM directly.
Joined: 15 Feb 2005 Posts: 7129 Location: San Jose, CA
I think this DFSORT job may do what you asked for, but I didn't actually test it with VSAM KSDSs. I assumed that by low values you meant binary zeros and by high values you meant binary ones.
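A sketch of a job along those lines follows. It is untested, and the dataset names, the SORTOUT target, and the use of positions 3001-3003 (beyond the 3000-byte record) as scratch counters are all assumptions:

```
//MRG10    EXEC PGM=ICEMAN
//SYSOUT   DD SYSOUT=*
//SORTIN01 DD DSN=MY.KSDS.FILE0,DISP=SHR
//SORTIN02 DD DSN=MY.KSDS.FILE1,DISP=SHR
//*          ... SORTIN03 through SORTIN10 for file2 - file9 ...
//SORTOUT  DD DSN=MY.KSDS.OUT,DISP=SHR
//SYSIN    DD *
  RECORD TYPE=F
  MERGE FIELDS=(1,10,CH,A)
* Scratch counters at 3001-3003 for indicators I, N and Y.
  INREC IFTHEN=(WHEN=INIT,OVERLAY=(3001:C'000')),
        IFTHEN=(WHEN=(11,1,CH,EQ,C'I'),OVERLAY=(3001:C'1')),
        IFTHEN=(WHEN=(11,1,CH,EQ,C'N'),OVERLAY=(3002:C'1')),
        IFTHEN=(WHEN=(11,1,CH,EQ,C'Y'),OVERLAY=(3003:C'1'))
* Drop the low-value-key header records; append a trailer whose
* key is high values so it sorts to the end of the KSDS.
  OUTFIL FNAMES=SORTOUT,
    OMIT=(1,10,BI,EQ,X'00000000000000000000'),
    BUILD=(1,3000),
    TRAILER1=(X'FFFFFFFFFFFFFFFFFFFF',
              C' TOTAL=',COUNT=(M11,LENGTH=10),
              C' I=',TOT=(3001,1,ZD,M11,LENGTH=10),
              C' Y=',TOT=(3003,1,ZD,M11,LENGTH=10),
              C' N=',TOT=(3002,1,ZD,M11,LENGTH=10))
/*
```

The TRAILER1 TOT and COUNT positions refer to the records as they enter OUTFIL (after INREC), and the omitted header records are not counted, so TOTAL should come out as 400,000 for the volumes described.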
It's very interesting to read your reply. Do you process these files with record length 19,470 in CICS?
We do have larger-volume files like yours, but we don't normally process them in CICS.
I work in an environment where we communicate with all types of third-party systems using a lot of different communication protocols. For my system, response time is very critical, so my general objective is to keep the file sizes as small as possible.
In this design I am trying my best to reduce the downtime, as all these files have to be made available to CICS. The preference is not to scan these files multiple times.
Hi Frank,
Thanks a lot for your response. I will test this out tomorrow. If I remember correctly, the TRAILER command did not work for VSAM files when I tried it. Anyway, I will check that as well. Thanks again.
Joined: 15 Feb 2005 Posts: 7129 Location: San Jose, CA
Quote:
If I remember correctly, TRAILER command did not work for VSAM files when I tried. Anyway I will check that one also.
I don't know of any reason why TRAILER1 wouldn't work for VSAM files unless you didn't generate the "key" in the trailer file that the KSDS expected. In this case, we are generating a high value key for the trailer record so that shouldn't be a problem, given that the key is actually the first 10 bytes of the record. I don't know how you defined the KSDS so I'm assuming when you say the key is in the first 10 bytes, you know that's where it actually is. If not, then you need to figure out where the key really is and change the job appropriately.
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hi K-o-M,
We don't use the "long" records in CICS. They were built many years ago by de-normalizing the online definitions so ad-hoc reporting could be easily done against them in batch. These became the history files, and a new one is created each month. The online has only relatively current data. A lot of the work going on right now is to re-normalize the history files into a data warehouse. The normalized version of the history data will be in the 10-12 terabyte range. In addition to the "rolled-up" cubes, we are providing views so end-users will not have to navigate the underlying structures for their ad-hoc queries.
Joined: 15 Feb 2005 Posts: 7129 Location: San Jose, CA
I can't think of a way to do it in a single step. But you're doing a merge and a copy here, no sort, so it should be pretty fast. Is there a problem with doing it in two steps?
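One plausible shape for the two steps, with placeholder names and the merge-pass details elided, is a merge pass into a temporary sequential file followed by a copy pass that loads the KSDS:

```
//* Step 1: merge pass - ten KSDS inputs to a temporary sequential
//*         file (SORTINnn DDs and the MERGE/INREC/OUTFIL control
//*         statements omitted here).
//STEP1   EXEC PGM=ICEMAN
//SYSOUT  DD SYSOUT=*
//SORTOUT DD DSN=&&MERGED,DISP=(NEW,PASS),UNIT=SYSDA,
//           SPACE=(CYL,(600,100),RLSE)
//*
//* Step 2: copy pass - load the temporary file into the output KSDS.
//STEP2   EXEC PGM=ICEMAN
//SYSOUT  DD SYSOUT=*
//SORTIN  DD DSN=&&MERGED,DISP=(OLD,DELETE)
//SORTOUT DD DSN=MY.KSDS.OUT,DISP=SHR
//SYSIN   DD *
  OPTION COPY
/*
```

Since both passes are merge/copy operations with no sort, the temporary space is returned as soon as the second step completes.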
The job into which I am going to retrofit these 2 steps is already very long, with a lot of other processing. My initial plan was to do this in 1 step.
Since it seems we cannot achieve this in 1 step, that is OK. Thanks a lot for your help.