Why do we have two different characters representing the positive values?
Because 50 years ago the IBM "builders" decided that there would be three signs: two positive (C and F) and one negative (D). This was a carry-over from the way signs were represented on punched cards (as an "overpunch").
DFSORT allows either of the positive signs to be specified as the default.
Strictly speaking, a X'F' (all 1's) in the sign nybble is not a positive sign; it is the lack of a sign. In pretty much every case that I can think of, it is treated as equivalent to a positive sign, but you can detect the different bit pattern and, of course, a bitwise comparison will correctly detect X'123C' as different from X'123F'.
Further, there are actually more sign possibilities.
A, B, C, D, E and F are all valid signs.
IBM splits these into two groups, "preferred signs", which are C, D, and F and "non-preferred signs" which are A, B and E.
You end up with three for positive, two for negative, and one for unsigned (which is always treated as positive when a value of that sign is used).
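To make the grouping concrete, here is a small Python sketch (Python purely for illustration; on the machine this classification is done by the decimal hardware). The function name and return shape are my own invention, not an IBM API:

```python
# Sign-nibble classification for packed decimal, per the grouping above.
# Preferred: C (+), D (-), F (unsigned, treated as +); non-preferred: A, B, E.

PREFERRED = {0xC: "+", 0xD: "-", 0xF: "unsigned (+)"}
NON_PREFERRED = {0xA: "+", 0xB: "-", 0xE: "+"}

def classify_sign(last_byte):
    """Classify the low nibble of a packed field's last byte.

    Returns (meaning, is_preferred).  Illustrative helper, not an IBM API.
    """
    nibble = last_byte & 0x0F
    if nibble in PREFERRED:
        return PREFERRED[nibble], True
    if nibble in NON_PREFERRED:
        return NON_PREFERRED[nibble], False
    raise ValueError(f"X'{nibble:X}' is not a valid sign")

# X'123C' ends in byte X'3C': sign nibble C, a preferred positive.
print(classify_sign(0x3C))  # ('+', True)
# X'123F' ends in byte X'3F': unsigned, still treated as positive.
print(classify_sign(0x3F))  # ('unsigned (+)', True)
```

This also shows why a bitwise comparison distinguishes X'123C' from X'123F' even though arithmetic treats both as positive: the sign nibbles differ.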
At the level of the op-codes, all decimal arithmetic operations accept any of the valid signs, but will only generate results with a sign of C (positive) or D (negative). You (or a compiler) have to set the F sign yourself, with an OI (Or Immediate) using a mask of X'0F' for packed data or X'F0' for zoned data.
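The OI step can be mimicked in any language with a bitwise OR. A sketch in Python (hypothetical helper names; real mainframe code would use the OI instruction itself):

```python
def force_unsigned_packed(last_byte):
    """Mimic OI last_byte,X'0F': packed sign is the LOW nibble of the last byte."""
    return last_byte | 0x0F

def force_unsigned_zoned(last_byte):
    """Mimic OI last_byte,X'F0': zoned sign is the HIGH (zone) nibble of the last byte."""
    return last_byte | 0xF0

# Packed X'123C' ends in byte X'3C'; OR with X'0F' gives X'3F', i.e. X'123F'.
print(hex(force_unsigned_packed(0x3C)))  # 0x3f
# Zoned positive 3 is X'C3'; OR with X'F0' gives X'F3', the EBCDIC digit '3'.
print(hex(force_unsigned_zoned(0xC3)))  # 0xf3
```

Note the OR can only turn a C or D sign into F; it cannot restore a sign afterwards, which is one reason compilers apply it only where an unsigned result is wanted.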
Thus, the non-preferred signs are accepted, but their existence can be somewhat ephemeral.
If you ever see a non-preferred sign (you are unlikely to), start worrying and check it out, as something upstream may be messing things up.
Don't ever let an F get into the sign of a field which should actually be signed (i.e. should contain a C or a D), as this can also cause problems.