I'm trying to convert some huge data files to comma-delimited display files
using SORT. What I'm finding is that I will process maybe 200,000 records and then get an S0C7.
In general what I'm doing is converting PD or BI fields using
something like this:
21:15,5,PD,EDIT=(STTTTTTT.TT),SIGNS=(-,)
It works until I get the S0C7.
I had thought to put in edit checks on the numeric fields. I've found examples on this forum that are something like this:
IFTHEN=(WHEN=(4,5,ZD,NE,NUM),
OVERLAY=(4:C'00000'),HIT=NEXT),
So I was thinking something like
IFTHEN=(WHEN=(15,5,PD,NE,NUM),
OVERLAY=(21:C'00000000000'),HIT=NEXT),
and then
IFTHEN=(WHEN=(15,5,PD,EQ,NUM),
OVERLAY=(21:15,5,PD,EDIT=(STTTTTTT.TT),SIGNS=(-,)))
I've played around with it a bit and searched this forum for other examples, but I'm not having any luck getting it to work.
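For reference, here's a hedged sketch of how the two clauses above might be combined in a single OUTREC statement (the positions, the 21-byte output column, and the edit mask are taken from the snippets above; untested):

```
  OUTREC IFTHEN=(WHEN=(15,5,PD,NE,NUM),
           OVERLAY=(21:C'00000000000'),HIT=NEXT),
         IFTHEN=(WHEN=(15,5,PD,EQ,NUM),
           OVERLAY=(21:15,5,PD,EDIT=(STTTTTTT.TT),SIGNS=(-,)))
```

Note each IFTHEN needs its own closing parenthesis around the whole WHEN/OVERLAY group; an unbalanced paren is a common reason these statements fail to parse.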
Joined: 23 Nov 2006 Posts: 19244 Location: Inside the Matrix
Hello,
Suggest you post a few examples of the records that cause the S0C7 and how you want the output from these. Post a few "normal" records as well and the output you want from them.
These do not need to be full-width records - only enough to show what you want to do.
Suggest you use FB/80 for the experiment and change to the "real" values later. If the file has variable length data, mention this as well.
Joined: 07 Dec 2007 Posts: 2205 Location: San Jose
steves,
DFSORT treats a 0, 2, 4, 6, 8, A, C, E and F in the sign nibble as a positive sign, and 1, 3, 5, 7, 9, B and D as a negative sign. You'd need a bad digit (not 0-9) to get an S0C7. You can run VERIFY, which will identify values with bad digits.
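The sign-nibble rule described above can be illustrated with a small Python sketch (the function name is mine; it mimics how a packed-decimal field is unpacked, and why only a bad digit nibble, not an unusual sign nibble, triggers the data exception behind an S0C7):

```python
def unpack_pd(data: bytes) -> int:
    """Decode a packed-decimal (COMP-3) field.

    Every half-byte except the last must be a digit 0-9; the last
    half-byte is the sign. Per the rule above, sign values
    0,2,4,6,8,A,C,E,F are treated as positive and 1,3,5,7,9,B,D
    as negative.
    """
    # Split each byte into its two half-bytes (nibbles)
    nibbles = [n for b in data for n in (b >> 4, b & 0x0F)]
    *digits, sign = nibbles
    bad = [d for d in digits if d > 9]
    if bad:
        # On the mainframe this is the data exception behind an S0C7
        raise ValueError(f"invalid digit nibble(s): {bad}")
    value = int("".join(map(str, digits)))
    return -value if sign in (0x1, 0x3, 0x5, 0x7, 0x9, 0xB, 0xD) else value

print(unpack_pd(bytes.fromhex("12345C")))   # 12345
print(unpack_pd(bytes.fromhex("12345D")))   # -12345
```

A value like X'1A345C' would raise here, just as it would abend a PD conversion in SORT, which is why VERIFY (which checks digits without converting) is the safe way to find it.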
Check this link for a detailed explanation of the VERIFY operand, with examples.
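Since VERIFY keeps coming up in this thread, here is a hedged ICETOOL sketch (the dataset name is a placeholder, and the field positions are assumptions matching the three packed fields described later in the thread):

```
//TOOLRUN EXEC PGM=ICETOOL
//TOOLMSG  DD SYSOUT=*
//DFSMSG   DD SYSOUT=*
//IN       DD DSN=your.input.file,DISP=SHR
//TOOLIN   DD *
  VERIFY FROM(IN) ON(15,5,PD) ON(20,5,PD) ON(25,4,PD)
/*
```

VERIFY reports the record number and hex value of every field containing an invalid digit, so it pinpoints the exact record that would S0C7, without abending itself.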
I'll try again, thanks
here's an example
In the screen print, SORT processes the record starting with 618832 successfully and then abends with an S0C7 on the next record, starting with 618833. There are 3 packed fields starting in column 15. If I isolate these and only include the field starting in column 15 for 5, it gets the S0C7 on record 618833. If my JCL sort only includes fields in columns 1 to 14, it completes successfully.
The data in these packed fields (15 for 5, 20 for 5, 25 for 4) all looks the same to me, so I don't understand why I get an S0C7.
1. Yes, too many T's on the 3rd field definition.
2. The data I posted shows the record right before the S0C7 (618832) and the record it abends on (618833). The entire record is displayed.
It still gets the S0C7.
If I remove the offending record, I get an S0C7 on the next record; remove another and, again, another S0C7.
For this client, so far I've converted 3 other systems successfully using SORT. One file had 27 million records and SORT was fine.
For this system (i.e. this set of input files), every file I've converted gets an S0C7 after processing a number of records. I would assume that this system allowed users to enter bad data and the others didn't, but I don't see the bad data. Furthermore, once it reaches the S0C7 record, no matter how many downstream records I delete from the input file, it still abends at the same point, like it's reached the number of records it can process. But it's not getting a B37 space error, and it doesn't matter if I change the file size.
Hello,
Have you tried VERIFY? Unless the incorrect record has been identified, I suspect it will run with no errors.
You might talk with your tech support and see if you are running the current version/PTF level, as well as any fixes that specifically address an incorrect S0C7 abend.
steves wrote:
It still gets the S0C7.
If I remove the offending record, I get an S0C7 on the next record; remove another and, again, another S0C7.
Steves,
I am not sure how you arrived at the magical number of the bad record, but I can tell you that is NOT the record of concern, as the hex values you show are perfectly valid. Please run VERIFY, which will point you to the actual invalid record.