We have a requirement to export mainframe data to a Windows-based system. When the mainframe data is interpreted by the Windows system, certain non-display or non-ASCII (I am assuming) values are causing failures in the insert statements. For example, X'FD' is getting rejected. Since the data is large, even if we remove one hex value we might miss another. Is there any way to write a COBOL program that would scan the mainframe input and generate a list of all the hex values which are non-ASCII and might get rejected by their system, along with their positions in the data? We could then substitute these with values that are legal on Windows.
I am assuming the collating sequence for the mainframe is EBCDIC while that for something like Oracle is ASCII, and hence we are having this problem. So what is legal in the mainframe collating sequence is not legal for Oracle. Kindly let me know if I am wrong in this assumption.
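The scan being asked about is plain byte-bashing and can be prototyped off-mainframe first. Below is a Python sketch (the function name and the choice of x'20'-x'7E' as the "safe" printable range are my assumptions; adjust the range for what the target actually accepts). The same logic in COBOL would be a PERFORM VARYING over each byte of the record, testing it against a 256-entry table.

```python
# Sketch: scan a (translated) record for bytes an ASCII target may reject.
# Assumption: "safe" means printable ASCII, x'20' through x'7E'; tighten or
# widen this range to match what the receiving system tolerates.

def scan_record(record: bytes, safe=range(0x20, 0x7F)):
    """Return (1-based position, hex value) pairs for every suspect byte."""
    return [(i + 1, f"{b:02X}") for i, b in enumerate(record) if b not in safe]

# Each hit names the offending byte and where it sits in the record.
scan_record(b"ABC\xfdDEF")  # -> [(4, "FD")]
```

Writing the report this way (value plus position) gives you the substitution list you describe, rather than hand-picking one bad value at a time.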
"Every" hex value is an ASCII value. One problem is that the hex values on the mainframe do not have the same meaning on ASCII platforms.
In the ASCII world, values "below spaces" are where the ASCII control characters are defined (an ASCII space is x'20'; values from x'00' to x'1F' are "below spaces"). This is not so on the mainframe: mainframe data does not normally use embedded control characters.
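To make the EBCDIC/ASCII mismatch concrete, here is a small sketch using Python's built-in cp037 codec (cp037 is one common EBCDIC code page; your shop may use a different one, which is an assumption to verify):

```python
# The same byte value means different things in EBCDIC (cp037) and ASCII.

ebcdic_space = b"\x40".decode("cp037")  # EBCDIC x'40' is a space...
ascii_space = " ".encode("ascii")       # ...but an ASCII space is x'20'
letter_a = "A".encode("cp037")          # and 'A' (ASCII x'41') is x'C1' in EBCDIC
```

So a straight byte-for-byte transfer, without translation through the correct code page, turns ordinary text into what looks like garbage or control characters on the target.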
I would suggest that when the file is created on the mainframe, all values are converted to delimited text. No binary, packed-decimal, or other field that is just a "bit pattern" should be sent to the target platform. Dealing with these types of fields is much easier on the mainframe (unless you have 3rd-party software that will handle this; the products I've seen do not yet do everything needed).
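To illustrate why raw "bit pattern" fields cause grief if they leak through untranslated: a packed-decimal (COMP-3) field stores two digits per byte with a sign nibble at the end, so its bytes are meaningless as characters on any platform. A Python sketch of the unpacking (for illustration only; on the mainframe a simple MOVE to a display field does this for you):

```python
# Sketch: unpack an IBM packed-decimal (COMP-3) field into an integer.
# Each byte holds two decimal digit nibbles; the final nibble is the sign
# (C or F positive/unsigned, D negative).

def unpack_comp3(data: bytes) -> int:
    digits = "".join(f"{b:02X}" for b in data)  # e.g. b'\x12\x3C' -> "123C"
    sign = -1 if digits[-1] in "DB" else 1      # D (and B) mark negative values
    return sign * int(digits[:-1])

unpack_comp3(b"\x12\x3c")  # -> 123
```

If instead the mainframe writes every field out as delimited display text, the receiving side never has to know any of this.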