Posted: Thu Aug 13, 2009 1:06 pm Post subject: COMP-1 & COMP-2 variables memory representation

Hi,

I am trying to read floating-point variables (COMP-1 & COMP-2) from a file generated by a mainframe COBOL program.

My question is: How can I evaluate the precision of the floating point variable?
For example, in the program that populated the value, we have:
MOVE +0.099 TO N18. (N18 is defined as COMP-1.)
This information is of course not available at the time of reading.

The (hexadecimal) value stored was (4 bytes): 40195810
Since the first byte X'40' means the exponent is 0 (excess-64 notation: 64 - 64 = 0), I evaluate the decimal value of the field as 1660944 × 16^-6, i.e. +0.098999977111816...
How can I know that +0.098999977111816... was originally +0.099 and not +0.0989999771 or some other rounded value? In other words, how can I evaluate the precision of the floating-point variable?
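For reference, the hand decoding above can be sketched in Python. This is just an illustration of the System/360 hexadecimal-float layout (1 sign bit, 7-bit excess-64 exponent of 16, 24-bit fraction); the function name is my own, not from any library:

```python
def decode_ibm_single(word: int) -> float:
    """Decode a 32-bit IBM hexadecimal float (COBOL COMP-1)."""
    sign = -1.0 if word >> 31 else 1.0
    exponent = ((word >> 24) & 0x7F) - 64   # excess-64 power of 16
    fraction = (word & 0xFFFFFF) / 16**6    # 24-bit fraction = 6 hex digits
    return sign * fraction * 16.0 ** exponent

print(decode_ibm_single(0x40195810))   # ≈ +0.098999977111816, as computed above
```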


Posted: Thu Aug 13, 2009 5:15 pm Post subject:

Terminology note: you appear to be more concerned about accuracy than precision. Google is your friend. Googling "accuracy and precision" gives you many hits on the differences between them.

The precision of a COMP-1 field is 24 bits, or no more than 9 decimal digits. From the COBOL Programming Guide manual (link at the top of the page):

Quote:

1.3.5.1.1 Conversions that lose precision

When a USAGE COMP-1 data item is moved to a fixed-point data item that has more than nine digits, the fixed-point data item will receive only nine significant digits, and the remaining digits will be zero.

When a USAGE COMP-2 data item is moved to a fixed-point data item that has more than 18 digits, the fixed-point data item will receive only 18 significant digits, and the remaining digits will be zero.

The accuracy of the value is how close you can get to the true value, and in COBOL pretty much what you see is what you get. Unless you go to more precision (COMP-2 instead of COMP-1), the value presented is as close as COBOL can get.

No, the value is not +0.099, it will never be +0.099, and you are not going to make it exactly +0.099 no matter what you do. Computers do not support infinite numbers of digits in their numeric representations, and some values require infinitely many digits to be represented exactly: 0.099 has no finite representation in base 16, just as 1/3 has no finite representation in base 10.
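To the original question of recovering the intended value: one heuristic (an illustration, not anything from the manual) is to round the decoded value to roughly the 6-7 significant decimal digits a 24-bit fraction can carry. But note that several nearby decimal literals truncate to the very same bits, so the original literal is genuinely unrecoverable from the file. A Python sketch, where `to_comp1_bits` is a hypothetical encoder restricted to positive values in [1/16, 1) with exponent 0:

```python
def to_comp1_bits(x: float) -> int:
    """Hypothetical encoder: positive x in [1/16, 1), exponent 0, truncated."""
    return 0x40000000 | int(x * 16**6)

stored = 1660944 / 16**6                 # decoded value of X'40195810'

# Rounding to 6 significant digits happens to recover the literal here...
print(float(f"{stored:.6g}"))            # 0.099

# ...but nearby decimals map to the same bits, so +0.099 is only one of
# many candidate source values:
print(hex(to_comp1_bits(0.099)))         # 0x40195810
print(hex(to_comp1_bits(0.09899998)))    # 0x40195810 as well
```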