selvamsrinivasan85
New User

Joined: 09 Aug 2010 Posts: 36 Location: Chennai
Hi,
I have a requirement to change invalid EBCIDIC characters to spaces; there are around 55 EBCIDIC hex values to convert to spaces. I did it like below.
Sample invalid EBCIDIC hex values: 00, 01, 02, 03, 04, 05, 06, 07, 0A, 0B, ...
Code:
01 WS-INV-BINARY00 PIC X(1) VALUE X'00'.
01 WS-INV-BINARY01 PIC X(1) VALUE X'01'.
01 WS-INV-BINARY02 PIC X(1) VALUE X'02'.
Then I wrote the code below:
Code:
INSPECT OUTPUT-REC CONVERTING WS-INV-BINARY00 TO WS-SPACES
INSPECT OUTPUT-REC CONVERTING WS-INV-BINARY01 TO WS-SPACES
INSPECT OUTPUT-REC CONVERTING WS-INV-BINARY02 TO WS-SPACES
INSPECT OUTPUT-REC CONVERTING WS-INV-BINARY03 TO WS-SPACES
INSPECT OUTPUT-REC CONVERTING WS-INV-BINARY04 TO WS-SPACES
OUTPUT-REC has a length of 3130, and the file contains 150,000,000 records.
It took more than 10 hours to complete. Are there any other possibilities to optimize the code? ALTSEQ/FINDREP in SORT is not helpful either; it took around 17 hours to complete.
sergeyken
Senior Member

Joined: 29 Apr 2008 Posts: 2191 Location: USA
The only tool I can think of that is faster than the SORT utility is a fairly simple Assembler program, in which a single machine instruction, TR (or TRL), can fix the whole record at once.
BTW: how long does it take just to COPY the dataset, without any update?
There is a good chance that 99.99% of the whole 10 (or 17) hours is spent performing I/O operations, while only 0.01% is used for the bad-byte correction... If so, no improvement to the byte-replacement operation would help.
You can try increasing the BUFNO parameter for both the input and output datasets (in the // DD statements).
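For example, something like this (the dataset names are placeholders, and BUFNO=50 is only a starting guess - tune it for your system, up to the QSAM limit of 255):
Code:
//INFILE   DD DSN=YOUR.INPUT.DSN,DISP=SHR,DCB=(BUFNO=50)
//OUTFILE  DD DSN=YOUR.OUTPUT.DSN,DISP=OLD,DCB=(BUFNO=50)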
Phrzby Phil
Senior Member
Joined: 31 Oct 2006 Posts: 1052 Location: Richmond, Virginia
Just a minor point: it's EBCDIC.
Also, since so many of your "invalid" values can be described by intervals, is there some way you can compare that way?
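Roughly what I have in mind - only a sketch, with example ranges and a hypothetical index item WS-IX, so fill in your actual 55 values (and note that a byte-at-a-time loop over 3130 bytes times 150M records may well lose to a single INSPECT; this only illustrates the interval idea):
Code:
01  WS-IX                  PIC 9(04) COMP.
01  WS-CHECK-CHAR          PIC X(01).
*>  One condition name covers whole intervals of invalid values
    88  WS-INVALID-CHAR    VALUES X'00' THRU X'07'
                                  X'0A' THRU X'0B'.
...
PERFORM VARYING WS-IX FROM 1 BY 1 UNTIL WS-IX > 3130
    MOVE OUTPUT-REC(WS-IX:1) TO WS-CHECK-CHAR
    IF WS-INVALID-CHAR
        MOVE SPACE TO OUTPUT-REC(WS-IX:1)
    END-IF
END-PERFORM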
selvamsrinivasan85
New User

Joined: 09 Aug 2010 Posts: 36 Location: Chennai
sergeyken wrote:
The only tool I can think of that is faster than the SORT utility is a fairly simple Assembler program, in which a single machine instruction, TR (or TRL), can fix the whole record at once.
BTW: how long does it take just to COPY the dataset, without any update?
There is a good chance that 99.99% of the whole 10 (or 17) hours is spent performing I/O operations, while only 0.01% is used for the bad-byte correction... If so, no improvement to the byte-replacement operation would help.
You can try increasing the BUFNO parameter for both the input and output datasets (in the // DD statements).
Thanks, sergeyken. I will include the BUFNO parameter.
selvamsrinivasan85
New User

Joined: 09 Aug 2010 Posts: 36 Location: Chennai
Phrzby Phil wrote:
Just a minor point: it's EBCDIC.
Also, since so many of your "invalid" values can be described by intervals, is there some way you can compare that way?
Hi Phil,
Could you suggest a better way? Can all these invalid characters be kept in a single variable? I tried placing them in 88-level entries and referring to the 01-level variable, but it didn't work out.
Or would hard-coding all 55 EBCIDIC invalid characters in one INSPECT work out?
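I mean something like the below, keeping all the values in one field - if I understand the manual right, CONVERTING translates each character of the first operand to the corresponding character of the second, and SPACES as a figurative constant stretches to the length of the first operand (the characters in the CONVERTING field must all be distinct). Only the first ten of the 55 values are shown here:
Code:
01  WS-INVALID-CHARS       PIC X(10)
                           VALUE X'00010203040506070A0B'.
...
*>  One pass over the record replaces every listed byte with a space
INSPECT OUTPUT-REC CONVERTING WS-INVALID-CHARS TO SPACES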
Phrzby Phil
Senior Member
Joined: 31 Oct 2006 Posts: 1052 Location: Richmond, Virginia
Sorry - been away too long (decades) - just pointing out something to think about.
Again: EBCDIC.
sergeyken
Senior Member

Joined: 29 Apr 2008 Posts: 2191 Location: USA
selvamsrinivasan85 wrote:
Thanks, sergeyken. I will include the BUFNO parameter.
I did not mention a more obvious issue: BLKSIZE needs to be big enough to reduce the number of required I/O operations (e.g., EXCP counts).
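For a new output dataset, the easiest way is to code BLKSIZE=0, which asks the system to pick a near-optimal block size. A sketch (LRECL=3130 is taken from your record length; the dataset name and space figures are placeholders):
Code:
//OUTFILE  DD DSN=YOUR.OUTPUT.DSN,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(1000,200),RLSE),
//            DCB=(RECFM=FB,LRECL=3130,BLKSIZE=0)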