I have a set of UTF-8 records that contain foreign characters. Because each foreign character occupies 3 bytes, the fields that follow get misaligned. Also, every space value of X'40' is replaced by X'20' in the UTF-8 record feed.
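A minimal Python sketch of the misalignment being described (the 10-byte field layout and the field contents here are assumptions for illustration, not the actual record format):

```python
# Hypothetical record: a 10-byte fixed-position field followed by a 5-byte field.
name = "RUPEE\u20b9"              # ends in U+20B9, which is 3 bytes in UTF-8
encoded = name.encode("utf-8")
print(len(name), len(encoded))    # 6 characters, but 8 bytes

# Padding by BYTES keeps the next field at byte offset 10:
record = encoded.ljust(10) + b"FIELD"
print(record[10:15])              # b'FIELD' -- offsets still line up

# A tool that pads by CHARACTERS would emit 10 characters = 12 bytes,
# pushing the next field 2 bytes past its expected offset.
```

Whether the fields "collapse" depends on whether the producing program counts characters or bytes when it fills fixed-position fields.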
Your post is one of the most confusing I've read in a LONG time. For example, you reference X'40' as a space -- but that is EBCDIC, not UTF-8; UTF-8 uses X'20' for the space character. Furthermore, a pure UTF-8 implementation supports variable-length characters up to 4 bytes long, so it is not clear why you are having problems with 3-byte characters. Is the data in EBCDIC and you are trying to convert it to UTF-8? Is the data in UTF-8 on another platform and you want to move it to the mainframe and convert it to EBCDIC? Your post is not very explanatory.
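The two points above -- the space character differing between the code sets, and UTF-8 being variable length -- can be checked with a short Python sketch (cp037 is used here as a representative EBCDIC code page; the specific page in use is an assumption):

```python
# EBCDIC space vs. UTF-8 space
print(" ".encode("cp037"))   # b'\x40' -- X'40' is the EBCDIC space
print(" ".encode("utf-8"))   # b'\x20' -- X'20' is the UTF-8/ASCII space

# UTF-8 is variable length: 1 to 4 bytes per character
for ch in ("A", "\u00e9", "\u20b9", "\U0001F600"):
    print(f"U+{ord(ch):04X} -> {len(ch.encode('utf-8'))} byte(s)")
```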
I think you need to go back to square one and explain your issue and the problem again, in more detail.
I am receiving the file in binary format; it contains EBCDIC data (English) and UTF-8 data (a foreign language, in a single field).
When I issue the TSO command DISPLAY UTF8, I can read the English fields; the UTF-8 field is not in readable format, but the actual data is preserved in hexadecimal form.
Now I am trying to align the fields after the UTF-8 field, which have collapsed because of the 3-byte characters. When I try JFY=(SHIFT=LEFT) to put the records into the proper format, it does not work, so I am converting all the X'02' bytes to X'40' in order to apply JFY. Please let me know if there is a simpler way to format these records.
I'm not sure what you are trying to do. If multiple bytes in the input data get you a single byte in the "displayable" data, then you will have no problems lining things up. If you change anything, you will trash your data.
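To illustrate why changing bytes will trash the data: a blind byte-for-byte substitution that happens to hit a byte inside a multi-byte UTF-8 sequence leaves the field undecodable. A Python sketch (the byte values are hypothetical, chosen only to show the effect):

```python
data = "A\u20b9B".encode("utf-8")        # b'A\xe2\x82\xb9B'

# Replace every X'82' with X'40', the way a translate step would.
# X'82' here is a continuation byte of the 3-byte sequence E2 82 B9.
mangled = data.replace(b"\x82", b"\x40")  # b'A\xe2\x40\xb9B'

try:
    mangled.decode("utf-8")
except UnicodeDecodeError as exc:
    print("corrupted:", exc)              # the sequence is no longer valid UTF-8
```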
So, what are you trying to do, including why you are trying to do it?