sojivarkey
New User
Joined: 07 Dec 2019 Posts: 1 Location: USA
Hi,
I'm working on a project where we have to move around 300K datasets to a Cloud location. Many of these datasets contain packed decimal data, but identifying the copybooks for all 300K datasets is going to take forever. My question is: is there any tool or utility available that will tell me whether a particular dataset has any COMP-3 data in it? If I can identify the datasets with COMP-3 data, I can focus on finding copybooks for those alone rather than for all 300K datasets.
Any help is much appreciated.
Thank you
Robert Sample
Global Moderator
Joined: 06 Jun 2008 Posts: 8700 Location: Dubuque, Iowa, USA
There is no way to tell whether data is packed decimal without an external reference. X'97994D' could be "pr(" if it is character data, but it could also be the packed decimal value -97994. Which is correct? Only a program or copybook can tell you.
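A minimal illustration of that ambiguity (a Python sketch; the cp037 EBCDIC code page is an assumption, your shop's code page may differ):

Code:
# The same three bytes are both printable text and a valid packed number.
raw = bytes([0x97, 0x99, 0x4D])

# Read as EBCDIC character data (assumed code page cp037):
print(raw.decode("cp037"))                 # -> pr(

# Read as COMP-3 packed decimal: digit nibbles, then a sign nibble.
hexstr = raw.hex().upper()                 # "97994D"
sign = "-" if hexstr[-1] in "BD" else ""   # B/D = negative sign nibble
print(sign + hexstr[:-1])                  # -> -97994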
Phrzby Phil
Senior Member
Joined: 31 Oct 2006 Posts: 1042 Location: Richmond, Virginia
A few ideas here from a retiree who can therefore no longer test them.
Perhaps a (small?) number of datasets could be confirmed as not having packed decimal data: a packed field must end in a byte whose rightmost nibble is a valid sign value, so if no byte position holds a valid sign nibble in the same position across all records, no packed field can end there.
Datasets with variable-length records would be a problem.
I don't remember whether a field of all low-values (so a rightmost byte of X'00') is treated as zero by certain arithmetic operations. If so, that would be a problem.
Spaces (X'40') in what should be packed decimal fields would also complicate the analysis: programs may know that can happen and check for numerics before using such fields, so some records might have X'40' in the rightmost byte of a position where other records show packed decimal.
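A rough Python sketch of that scan, assuming fixed-length records read from a binary pull of the dataset; the record length, the file name, and the accepted sign nibbles (X'C', X'D', X'F' here) are all assumptions to adjust for your shop:

Code:
VALID_SIGNS = {0xC, 0xD, 0xF}   # common packed-decimal sign nibbles (assumed)

def candidate_sign_positions(path, lrecl=80):
    """Return byte offsets that could end a packed field in EVERY record.

    A position survives only if each record has a valid sign nibble there,
    or X'40'/X'00' (the space/low-values caveats above).  An empty result
    suggests the dataset holds no packed decimal data at all.
    """
    candidates = set(range(lrecl))
    with open(path, "rb") as f:
        while (rec := f.read(lrecl)) and len(rec) == lrecl:
            for pos in list(candidates):
                b = rec[pos]
                if b in (0x40, 0x00):          # space / low-values: inconclusive
                    continue
                if (b & 0x0F) not in VALID_SIGNS:
                    candidates.discard(pos)    # no packed field can end here
    return sorted(candidates)

print(candidate_sign_positions("dataset.bin"))   # hypothetical file name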
sergeyken
Senior Member
Joined: 29 Apr 2008 Posts: 2119 Location: USA
sojivarkey wrote:
...we have to move around 300K datasets to a Cloud location...
How do you plan to check "around 300K datasets" in a TSO/ISPF environment (as per the forum you have chosen)?
sergeyken
Senior Member
Joined: 29 Apr 2008 Posts: 2119 Location: USA
In general, it is only possible to detect the "text-only, uppercase-only" datasets.
Any other type of data could be anything; it cannot be identified without a copybook.
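One way to read that check in code (a Python sketch; the byte set below, uppercase EBCDIC cp037 letters, digits, space, and a little punctuation, is an assumption):

Code:
# Assumed "text-only" EBCDIC bytes: A-I, J-R, S-Z, 0-9, space, . , ; - / '
TEXT = (set(range(0xC1, 0xCA)) | set(range(0xD1, 0xDA)) |
        set(range(0xE2, 0xEA)) | set(range(0xF0, 0xFA)) |
        {0x40, 0x4B, 0x6B, 0x5E, 0x60, 0x61, 0x7D})

def looks_text_only(path, chunk=64 * 1024):
    """True only if every byte of the file is in the assumed text set."""
    with open(path, "rb") as f:
        while (buf := f.read(chunk)):
            if any(b not in TEXT for b in buf):
                return False
    return True

print(looks_text_only("dataset.bin"))   # hypothetical file name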