satheeshkamal
New User

Joined: 09 Jan 2007 Posts: 28 Location: Chennai
We have a JSON file that was created in UTF-8 encoding on a Windows platform and then uploaded to the mainframe in 'binary' (no translation) mode.
Because of this, the end-of-line carriage return/line feed characters are not what we would expect in EBCDIC.
We use the JSON PARSE functionality in COBOL. The file is read with the COBOL READ statement, which does not automatically detect that the data is in UTF-8 encoding, and because of that JSON PARSE does not work as expected.
It works fine when each JSON document appears on a single record.
We are not sure how to make this work.
The reason the file is in UTF-8 encoding is that JSON PARSE expects its input to be in UTF-8.
Any help would be greatly appreciated...
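For reference, this is roughly how we call it today. The field names below are only illustrative; the point is that the content of the item being parsed has to be UTF-8 encoded JSON text.

Code:
       WORKING-STORAGE SECTION.
      * Item being parsed - its content must already be UTF-8
       01  WS-JSON-TEXT            PIC X(32000).
      * Illustrative target layout for the parsed document
       01  WS-PAYLOAD.
           05  CUSTOMER-ID         PIC X(10).
           05  ORDER-AMOUNT        PIC 9(7)V99.

       PROCEDURE DIVISION.
           JSON PARSE WS-JSON-TEXT INTO WS-PAYLOAD
               ON EXCEPTION
                   DISPLAY 'JSON PARSE FAILED, JSON-CODE=' JSON-CODE
               NOT ON EXCEPTION
                   DISPLAY 'JSON PARSE OK'
           END-JSON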
Robert Sample
Global Moderator

Joined: 06 Jun 2008 Posts: 8695 Location: Dubuque, Iowa, USA
When I was doing XML in COBOL, I read each record of the input file and built up a single 10-million-byte variable that contained the entire input; the XML statements then worked against that variable. I suspect you need to do the same -- if the input has more bytes than a single record holds, you don't have a lot of other options for processing it.
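Something along these lines (names and sizes are only illustrative, INFILE and WS-PAYLOAD stand for your file and your target group item, and it assumes fixed-length input records; with variable-length records you would use the actual length returned by the READ instead of LENGTH OF):

Code:
       01  WS-BUFFER               PIC X(1000000).
       01  WS-BUF-LEN              PIC 9(9)  COMP VALUE 0.
       01  WS-REC-LEN              PIC 9(9)  COMP.
       01  IN-REC                  PIC X(80).
       01  WS-EOF-SW               PIC X     VALUE 'N'.
           88  END-OF-FILE                   VALUE 'Y'.

      * Append every record to one large buffer
           PERFORM UNTIL END-OF-FILE
               READ INFILE INTO IN-REC
                   AT END
                       SET END-OF-FILE TO TRUE
                   NOT AT END
                       MOVE LENGTH OF IN-REC TO WS-REC-LEN
                       MOVE IN-REC (1:WS-REC-LEN)
                         TO WS-BUFFER (WS-BUF-LEN + 1:WS-REC-LEN)
                       ADD WS-REC-LEN TO WS-BUF-LEN
               END-READ
           END-PERFORM

      * Parse only the bytes actually filled in, not the whole buffer
           JSON PARSE WS-BUFFER (1:WS-BUF-LEN) INTO WS-PAYLOAD
               ON EXCEPTION
                   DISPLAY 'JSON PARSE FAILED, JSON-CODE=' JSON-CODE
           END-JSON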
satheeshkamal
New User

Joined: 09 Jan 2007 Posts: 28 Location: Chennai
Thanks for the reply. I am thinking of reading each record, inspecting it for CR/LF, and splitting it into logical records before passing them to the JSON PARSE routine.
But I still wonder why IBM would provide the JSON PARSE functionality only for UTF-8 input. They must have foreseen issues like this.
As I understand it, it is not possible to override the CCSID for JSON PARSE to say that the input is EBCDIC. That, at least, would have made things easier.
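Something like this is what I have in mind. Since the upload was binary, the line delimiters are still the ASCII bytes X'0D' and X'0A', so those are what the split has to look for. The sketch assumes one JSON document per line and that a line never spans two records (otherwise the leftover piece would have to be carried over to the next READ); PARSE-ONE-DOCUMENT is just a placeholder for the parse routine.

Code:
       01  IN-REC                  PIC X(32000).
       01  WS-REC-LEN              PIC 9(9) COMP.
       01  WS-PTR                  PIC 9(9) COMP.
       01  WS-LINE                 PIC X(32000).
       01  WS-LINE-LEN             PIC 9(9) COMP.

      * Split one record on CR/LF (still ASCII bytes after the
      * binary upload) and hand each piece to the parse routine
           MOVE 1 TO WS-PTR
           PERFORM UNTIL WS-PTR > WS-REC-LEN
               MOVE SPACES TO WS-LINE
               UNSTRING IN-REC (1:WS-REC-LEN)
                   DELIMITED BY X'0D0A' OR X'0A'
                   INTO WS-LINE
                        COUNT IN WS-LINE-LEN
                   WITH POINTER WS-PTR
               END-UNSTRING
               IF WS-LINE-LEN > 0
      *            WS-LINE (1:WS-LINE-LEN) now holds one complete
      *            UTF-8 JSON document, ready for JSON PARSE
                   PERFORM PARSE-ONE-DOCUMENT
               END-IF
           END-PERFORM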