Recently I joined a new team where we use pipe-delimited files, and we all struggle to verify the data when required. For example, if we have to check a certain field, we download the file and map it in Excel, because there are so many fields that are not populated that we have to split the record on the pipe delimiter in Excel to work out a particular field number.
For Example:
Code:
AAAA|BBB||||||CCCC||||||DDDD
AAAA- 1st Field
BBB - 2nd field
CCCC- 7th Field
DDDD-13th field
Code:
The 3rd, 4th, 5th, 6th, 8th, 9th, 10th, 11th and 12th fields were not populated, so there are only pipes, no default values.
I am thinking of automating this process so that if we have to map any file we can do it on the mainframe itself. I am thinking of writing a Cobol program which will UNSTRING the file into the copybook provided. The only challenge is how I will pick up the copybook for the file at run time. Files we can pass to the JCL through a rexx exec or so, but how will we use the copybook against the file passed for expanding?
Does anyone have any idea on that? Please let me know.
Please let me know if I am not clear in my requirement or need to give more examples.
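To make the manual mapping concrete, here is a small sketch in Python (used purely as an illustration; the real solution discussed below is Cobol/REXX/SORT). It splits a sample record on the pipe delimiter and numbers the pieces, which is exactly what the Excel mapping works out by hand. The pipe counts in the sample are chosen so that CCCC lands in field 7 and DDDD in field 13, as in the example above.

```python
# Split a pipe-delimited record and number each field, the way the
# manual Excel mapping does. Pipe counts chosen to match the example:
# field 1 = AAAA, field 2 = BBB, field 7 = CCCC, field 13 = DDDD.
record = "AAAA|BBB|||||CCCC||||||DDDD"

fields = record.split("|")
for number, value in enumerate(fields, start=1):
    if value:                       # skip the unpopulated fields
        print(f"field {number:2}: {value}")
```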
Joined: 10 May 2007 Posts: 2455 Location: Hampshire, UK
Quote:
I am thinking to automate this process so if we have to map any DATA SET we can do ON the mainframe itself.
Quote:
Thinking to write a Cobol program which will unstring the DATA SET in to the copybook provided.
This translates to..
Quote:
unstring the DATA SET in to the code provided
as a copybook contains code - maybe declarations, maybe executable statements
Quote:
only challenge is that how I will take the copybook for file in run time, DATA SETS we can pass to the JCL thru a rexx exec or so but how we will use the copybook against the DATA SET passed for expanding.
Since the copybook field declarations for any particular file can only be provided in the program's WORKING-STORAGE, I am looking for a way to pass this in, or maybe change it at run time, to make the program generic.
1 - Get the file name and copybook (field declarations of the file) name from the user.
2 - Get the file name overridden in the skeleton JCL.
3 - Override (somehow) the copybook name in the Cobol program's working storage, UNSTRING the record into all the fields mentioned in the copybook, and write it to output (this file can be mapped in File Manager) - this is the challenge for me as of now.
4 - Create a temp copybook (without the pipes) in the user's RACFID.TEMP.DSN to map the file.
Please let me know your thoughts on this. Many thanks.
Joined: 08 May 2006 Posts: 1193 Location: Dublin, Ireland
I don't believe your approach is feasible. You can't dynamically change the copybook in a pre-compiled program. If you re-compile each time, your field names change, so you have major code changes.
What might work - but would involve a lot of work - is to have your Cobol program read in your pipe-delimited dataset and also read in the copybook as a dataset. You then have to match each pipe-delimited field in the first dataset with the appropriate copybook field in the second dataset. The field name from the copybook can then be output with the related value from the pipe-delimited dataset.
Things get complicated when you have nested fields, and in bypassing lines in the copybook such as comments, VALUE clauses, etc.
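The matching logic suggested here - read the copybook as data, pull out the field names, and pair each one positionally with a pipe-delimited value - can be sketched as follows. Python is used only to illustrate the logic; the field names and PIC clauses are invented for the example, and comment/VALUE lines and group levels are assumed to have been filtered out already.

```python
# Sketch of the suggested approach: pair copybook field names with
# pipe-delimited values, positionally. Invented copybook lines; a real
# version must first skip comments, VALUE clauses and group items.
copybook_lines = [
    "05  CUST-ID      PIC X(4).",
    "05  CUST-TYPE    PIC X(3).",
    "05  FILLER-A     PIC X(2).",
]

record = "AAAA|BBB|"

names = [line.split()[1] for line in copybook_lines]   # 2nd word = name
values = record.split("|")

for name, value in zip(names, values):
    print(f"{name:<12} = {value!r}")                   # empty = not populated
```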
Joined: 07 Feb 2009 Posts: 1306 Location: Vilnius, Lithuania
thesumitk wrote:
I have got the step below to change it at run time, but bad luck - it is not working.
Don't spout bovine excrement, you have written some code that does nothing like changing a copybook at runtime, because that is not possible without recompiling the entire program.
In PL/I the problem would be (relatively) easy to solve by just passing your delimited file and a file with offsets of all fields to the program, which would allow you to build the "copybook" on-the-fly in an internal procedure, been there done that.
The comments and complicated tasks I will handle in a rexx exec which will run before this program step. Is there a way to compile my program in the JCL itself instead of ENDEVOR? I am thinking to read my program in a REXX step each time, change the copybook name, then compile in the next step and run it in the step after. Will that be fine?
Quote:
In PL/I the problem would be (relatively) easy to solve by just passing your delimited file and a file with offsets of all fields to the program, which would allow you to build the "copybook" on-the-fly in an internal procedure, been there done that.
Thanks Robert for your suggestion. I don't know that language, but if you show me a way I can try it for sure. Please let me know if you have anything handy. Thanks.
Joined: 08 May 2006 Posts: 1193 Location: Dublin, Ireland
Quote:
I am thinking to read my program in a REXX step each time, change the copybook name, then compile in the next step and run it in the step after. Will that be fine?
No, that's an extremely complex and costly approach. How do you cater for copybooks with different numbers of variables, for example? The approach I suggested does not require a recompile for each copybook. The one version of the program can cater for all copybooks, if coded correctly.
Joined: 07 Feb 2009 Posts: 1306 Location: Vilnius, Lithuania
Written on-the-fly, totally untested, other than that I know I did a similar thing in the past; and the second file contains lengths of fields, not offsets.
Code:
 /* Roberts Q&D PL/I test program */
 rahp: proc(parm) options(main) reorder;
 dcl parm      char(100) var;
 dcl sysprint  file;
 dcl pipein    file record input env(fb recsize(80));
 dcl flenin    file record input;
 dcl flens     char(80);           /* record from the lengths file   */
 dcl flen      pic '99' def flens; /* assumed: length in first 2 pos */
 dcl a_flen(10) fixed bin(15) init((10) 0);
 dcl i         fixed bin(15) init(0);
 dcl eof       bit(1) init('0'b);

 on endfile(flenin) eof = '1'b;
 open file(flenin);
 read file(flenin) into(flens);
 do while(^eof);
    i = i + 1;
    a_flen(i) = flen;
    read file(flenin) into(flens);
 end;
 close file(flenin);

 call flenner();

 flenner: proc;
    dcl rec_80 char(80);
    dcl 1 inrec,                       /* extents taken from a_flen   */
          2 fld_01 char(a_flen(01)),   /* at block activation         */
          2 fld_02 char(a_flen(02)),
          2 fld_03 char(a_flen(03)),
          2 fld_04 char(a_flen(04)),
          2 fld_05 char(a_flen(05)),
          2 fld_06 char(a_flen(06)),
          2 fld_07 char(a_flen(07)),
          2 fld_08 char(a_flen(08)),
          2 fld_09 char(a_flen(09)),
          2 fld_10 char(a_flen(10)),
          2 filler char(80 - sum(a_flen));
    on endfile(pipein) eof = '1'b;
    eof = '0'b;
    open file(pipein);
    read file(pipein) into(rec_80);
    do while(^eof);
       /*
       parse your inrec, splitting it on whatever character
       and put the pieces into fld_01 to fld_nn
       verify the fld_nn fields
       the first record of flenin can be set up to tell you the type
       of file you're receiving, and even the LRECL and RECFM...
       or use the parm passed to the main procedure
       */
       read file(pipein) into(rec_80);
    end;
    close file(pipein);
 end flenner;
 end rahp;
Such tasks are difficult to automate - and why automate? How much time does it take to get these pipe-delimited values into Excel cells? Five minutes or less. Once you make a shell for each data set (you haven't specified how many you have), you can do it once and reuse it.
Please explain in more detail: why do you need to unload it to a copybook, and to achieve what?
Quote:
Such tasks are difficult to automate - and why automate? How much time does it take to get these pipe-delimited values into Excel cells? Five minutes or less. Once you make a shell for each data set (you haven't specified how many you have), you can do it once and reuse it.
Please explain in more detail: why do you need to unload it to a copybook, and to achieve what?
This is to make our daily life easier and save the manual hours our entire team spends each day in scenarios where we have to research an issue/item, go into those big files, and look at the values of certain fields, which requires downloading them and mapping them in Excel.
5 minutes or more - that depends on the size of the file.
AAAA- 1st Field
BBB - 2nd field
CCCC- 7th Field
DDDD-13th field
Code:
The 3rd, 4th, 5th, 6th, 8th, 9th, 10th, 11th and 12th fields were not populated, so there are only pipes, no default values.
The easiest way, by far, is to use your SORT product.
First of all, get the Smart DFSORT Tricks PDF (even if you have the other SORT product).
Next, read paragraph "Deconstruct and reconstruct CSV records"
Last, make your own (hint: drop the INREC and use PARSE in the OUTREC statement).
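As a concrete starting point, such a step might look like the sketch below, adapted from that paragraph to a pipe delimiter. The dataset name, the FIXLEN=8 values and the three-field count are all assumptions you would replace with values taken from your copybook.

```
//REMAP    EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DISP=SHR,DSN=YOUR.PIPE.FILE
//SORTOUT  DD SYSOUT=*
//SYSIN    DD *
  OPTION COPY
  OUTREC PARSE=(%01=(ENDBEFR=C'|',FIXLEN=8),
                %02=(ENDBEFR=C'|',FIXLEN=8),
                %03=(FIXLEN=8)),
         BUILD=(%01,X,%02,X,%03)
/*
```

Each %nn parsed field is extracted up to the next pipe and padded to its FIXLEN, so the output lands in fixed columns.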
I've read all the posts again, from top to bottom and back, and I think I understand what you want to achieve.
1. You have many different CSV files (the fact that the fields are separated by a vertical bar instead of a comma is minor).
2. For each of these files you have a corresponding copybook.
3. You want to reformat these CSV files so each field appears in fixed columns (just like Excel does).
4. You want this process to be as automatic as possible.
Here is what I would do:
1. Take one file and its corresponding copybook.
2. Manually write the sort step as explained in paragraph "Deconstruct and reconstruct CSV records"
After this stage, I have a step looking like this:
8. Add some more code in the REXX program in order to deal with the 1st line (starting with INREC PARSE)
and the last one (ADDPOS instead of ENDBEFR, extra parenthesis).
9. Test again and again.
10. Repeat the write loop to generate the second part of your output. Generate now:
Code:
%01,C'|',
%02,C'|',
%03,C'|',
11. Modify the REXX program again to include the BUILD=( and the last parenthesis.
If you survive this, you will have the INREC PARSE and BUILD that will be tailored according to the copybook.
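For the 13-field sample record earlier in the thread, the generated statement might end up looking something like this; the FIXLEN=8 values are placeholders, as the real lengths would come from the copybook:

```
  INREC PARSE=(%01=(ENDBEFR=C'|',FIXLEN=8),
               %02=(ENDBEFR=C'|',FIXLEN=8),
               %03=(ENDBEFR=C'|',FIXLEN=8),
               %04=(ENDBEFR=C'|',FIXLEN=8),
               %05=(ENDBEFR=C'|',FIXLEN=8),
               %06=(ENDBEFR=C'|',FIXLEN=8),
               %07=(ENDBEFR=C'|',FIXLEN=8),
               %08=(ENDBEFR=C'|',FIXLEN=8),
               %09=(ENDBEFR=C'|',FIXLEN=8),
               %10=(ENDBEFR=C'|',FIXLEN=8),
               %11=(ENDBEFR=C'|',FIXLEN=8),
               %12=(ENDBEFR=C'|',FIXLEN=8),
               %13=(FIXLEN=8)),
        BUILD=(%01,C'|',%02,C'|',%03,C'|',%04,C'|',%05,C'|',%06,C'|',
               %07,C'|',%08,C'|',%09,C'|',%10,C'|',%11,C'|',%12,C'|',
               %13)
```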
The whole job runs in 3 steps:
1. the cobol compile of the copybook,
2. the rexx program used to generate the INREC statement,
3. the sort processing the CSV file
The final output will be a nicely formatted list.
Before submitting the job, you will have to modify the COPY statement in the dummy cobol program and the file name in SORTIN of step 3.
I usually get paid for this kind of job. I will contact you before my next travel to your beautiful country
Quote:
5 minutes or more - that depends on the size of the file.
I thought one would pick up only the bad ones to troubleshoot and download those, instead of the whole massive data set.
Second, when you transfer this pipe-delimited data set (from Notepad) to CSV/Excel, doesn't each value go to its own cell when you tell it to? I am sure it does. Then take your copybook and transpose it horizontally as the first line (header) of the same CSV/Excel file; then you will have the mapping that you want, with copybook names and data. I assume it's a readable data set and not an encrypted pipe-delimited one. To automate further, you can set up an FTP job to get it into Notepad, and then a macro that picks up this Notepad file and puts it into Excel/CSV; the same goes for the header.
Repeat the above process just once for the other files and then you are done. We used to do this, and it doesn't really consume time at all, at least in our case.