Rohit Umarjikar
Global Moderator
Joined: 21 Sep 2010 Posts: 3053 Location: NYC,USA
Is it possible to merge all the generations of a GDG when they have different LRECLs? I have a GDG which had LRECL=1200 until last month, and one of our processes takes the GDG base as input and copies the data of all the generations into one file. Starting this month the LRECL has increased to 1500, so the same processing no longer works. I tried INREC OVERLAY to bring all the old generations up to LRECL=1500, but that fails with:
Code:
CE043A 3 INVALID DATA SET ATTRIBUTES: SORTIN LRECL - REASON CODE IS 05
The last option is to convert the old generations to LRECL=1500 (a one-time job) and then run the new process with LRECL=1500 throughout.
Any advice?
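For context, the failing setup is roughly the following (dataset names are hypothetical; control statements shown in DFSORT syntax, though the CE043A message suggests another sort product, whose syntax may differ slightly). The LRECLs of a fixed-length SORTIN concatenation are validated when the input is opened, before INREC is applied, which is why INREC OVERLAY cannot reconcile the mismatch:

Code:
//COPYALL  EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=MY.GDG.BASE,DISP=SHR        mix of LRECL=1200 and =1500
//SORTOUT  DD DSN=MY.COMBINED.FILE,DISP=(NEW,CATLG),
//            RECFM=FB,LRECL=1500,SPACE=(CYL,(50,10),RLSE)
//SYSIN    DD *
  OPTION COPY
* INREC OVERLAY runs after the input is opened, so it cannot fix
* the mixed fixed-length LRECLs; the job fails at open with
* CE043A ... SORTIN LRECL - REASON CODE IS 05
/*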
Bill Woodger
Moderator Emeritus
Joined: 09 Mar 2011 Posts: 7309 Location: Inside the Matrix
What do you mean by "merge", an actual merge, or copying the datasets one after another?
Presumably old, short-record generations are dropping off one at a time?
Rohit Umarjikar
Copying datasets one after another. We retain up to 30 generations.
Bill Woodger
You have 30 runs, then, before the problem clears itself.
You could use either MERGE or JOINKEYS to simulate a concatenation of different fixed-length LRECLs (padding the short records to 1500 bytes), but you would need to change (or generate) the JCL for each run.
The alternative, as you have mentioned, is to modify the old data. However, that work would only be necessary, in terms of impact on the system, for 30 runs.
We can't have sufficient knowledge of your system, and of how that dataset is used, to advise which of those options suits you best.
For instance, it would be possible to create one dataset of converted "old" generations, rename the originals, and use that. But we can't tell whether something like that would be useful to you.
You need to weigh the amount of work against the need during those 30 runs, and go with what seems best in terms of cost, run-time, or something we can't know.
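As a sketch of the JOINKEYS route (dataset names, key position, and space figures are all hypothetical; shown in DFSORT syntax, assuming a sort product that supports JOINKEYS with the REFORMAT "?" indicator): pad the short side to 1500 bytes in JNF1CNTL, append a one-byte constant "key" to each side that can never match the other, and JOIN UNPAIRED so every record from both sides comes through. The indicator byte then says which side each record came from:

Code:
//JOINCOPY EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//INOLD    DD DSN=MY.GDG.G0001V00,DISP=SHR    old LRECL=1200 generations
//         DD DSN=MY.GDG.G0002V00,DISP=SHR
//INNEW    DD DSN=MY.GDG.G0031V00,DISP=SHR    new LRECL=1500 generations
//SORTOUT  DD DSN=MY.COMBINED.FILE,DISP=(NEW,CATLG),
//            RECFM=FB,LRECL=1500,SPACE=(CYL,(50,10),RLSE)
//JNF1CNTL DD *
  INREC BUILD=(1,1200,300X,C'1')   pad to 1500, key '1' at 1501
//JNF2CNTL DD *
  INREC BUILD=(1,1500,C'2')        key '2' at 1501
//SYSIN    DD *
  JOINKEYS F1=INOLD,FIELDS=(1501,1,A),SORTED,NOSEQCK
  JOINKEYS F2=INNEW,FIELDS=(1501,1,A),SORTED,NOSEQCK
  JOIN UNPAIRED,F1,F2
* keys never match, so every record is unpaired; the ? indicator
* (position 1) is '1' for F1-only records and '2' for F2-only
  REFORMAT FIELDS=(?,F1:1,1500,F2:1,1500)
  OPTION COPY
  OUTREC IFTHEN=(WHEN=(1,1,CH,EQ,C'1'),BUILD=(2,1500)),
         IFTHEN=(WHEN=(1,1,CH,EQ,C'2'),BUILD=(1502,1500))
/*

As noted above, the drawback is that the JCL has to be changed (or generated) each run as generations roll off.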
Rohit Umarjikar
Yes, I don't see any other way to achieve it. Thanks, Bill.
Bill Woodger
Although, if you have SAS, I think it can do it without caring.
You could also write a "subsystem" to use for the I/O, so you handle it yourself, although I've never seen anything but vendor software doing that.
Rohit Umarjikar
Quote:
Although, if you have SAS, I think it can do it without caring.
Yes, we have SAS, but it consumes too much CPU. Also, we already have a SORT in place, and I don't think it is a good idea to change how that SORT works now; it wouldn't get approvals anyway. The SAS option is good, though, if this stays in the testing regions or is used to create a one-time production report. So thanks.
For now we have decided on the one-time job explained above, which looks easy and handy.
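A minimal sketch of that one-time conversion step (dataset names and space figures hypothetical; DFSORT syntax): copy an old generation, padding each 1200-byte record with 300 trailing blanks so the output is LRECL=1500. Repeat (or generate) one step per old generation:

Code:
//PADGEN   EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=MY.GDG.G0001V00,DISP=SHR    one old LRECL=1200 generation
//SORTOUT  DD DSN=MY.NEWGDG.G0001V00,DISP=(NEW,CATLG),
//            RECFM=FB,LRECL=1500,SPACE=(CYL,(50,10),RLSE)
//SYSIN    DD *
  OPTION COPY
  OUTREC BUILD=(1,1200,300X)      copy bytes 1-1200, pad with 300 blanks
/*

Once all 30 generations are converted (and the originals renamed or dropped), the existing GDG-base copy process can run unchanged with LRECL=1500.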