Date   : Fri, 17 Feb 1984 18:05:00 MST
From   : Kevin Kenny <Kenny%his-phoenix-multics.arpa@BRL.ARPA>
Subject: Turbo Pascal--first impressions

The problem that you're describing (where frac (1.23 * 100) isn't zero)
is the usual truncation error in binary arithmetic.  If they say that
they'll fix it Real Soon Now, they either are lying, or mean that they
intend to foul things up further; to someone who's doing numerical
analysis, the result is CORRECT (if it's very close to 0 or 1; you
didn't say what the result is, just what it isn't).

[flame on] I am getting awfully tired of people who say that decimal
arithmetic is "inherently more accurate" than binary.  This claim is
absolute rubbish. [blowtorch valve off again].

The problem, of course, is that there is no exact binary representation
for 1.23; the expansion is a repeating string beginning
1.0011101011100001010001 with the last twenty digits repeating.  The
fact that 1.23 can be represented as a finite-length string in decimal
leads people to claim that "decimal is more accurate." But, try
representing 1/3 in either system.  It doesn't go, does it?  Does this
say that we should all switch to the ancient Babylonian (base sixty)
system, where 1/3 can be represented exactly as <00>.<20>? I don't think
so.
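The repeating expansion can be checked mechanically: repeatedly double the fractional part, read off the integer bits, and stop when a fractional value recurs. A sketch using exact rational arithmetic (my own illustration, not from the original post):

```python
from fractions import Fraction

# Binary expansion of 0.23 by repeated doubling; a repeated
# fractional value marks the start of the repeating cycle.
x = Fraction(23, 100)   # exactly 0.23
seen = {}
digits = []
while x not in seen:
    seen[x] = len(digits)
    x *= 2
    digits.append(int(x >= 1))
    if x >= 1:
        x -= 1

period = len(digits) - seen[x]
print("".join(map(str, digits)))  # 0011101011100001010001
print(period)                     # 20 repeating digits
```

This reproduces the expansion quoted above: twenty-two fraction bits before the cycle closes, with the last twenty repeating forever.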

The point is that any number can be approximated to any level of
precision in any radix.  But no radix can represent every number
exactly in a finite string of digits; Georg Cantor's uncountability
argument settled that a long time ago.

I concede that there is a problem in dealing with bankers and other
people who expect dollars and cents to come out even.  But a dollar
amount isn't a floating point number at all: it's an integer number of
cents!  In COBOL and PL/I, there are facilities to deal with the idea
that an integer might have a "decimal point" in its displayed
representation.  In most other languages, you just have to remember that
a variable contains an amount in cents and convert it before
displaying. It's not that tough.  Really it isn't.
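The cents-as-integer idea takes only a few lines in any language. A sketch in Python, with `format_cents` being a hypothetical helper of my own, not a standard routine:

```python
# Keep a dollar amount as an integer count of cents; the "decimal
# point" exists only in the displayed representation.
def format_cents(cents):
    dollars, rem = divmod(cents, 100)
    return "$%d.%02d" % (dollars, rem)

price = 123          # $1.23, stored exactly -- no rounding possible
total = price * 100  # integer arithmetic, so 12300 exactly
print(format_cents(total))  # $123.00
```

Since every operation is integer arithmetic, the bankers' sums come out even without any fuzziness at all.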

The floating point implementations that "don't have this problem" use
"fuzzy comparisons".  What this means is that if the difference between
two numbers is less than some arbitrary constant times the smaller one,
they are considered equal.  This keeps the bankers happy, but drives the
engineers up a wall; there's an implicit loss of (real) precision to
gain the (perceived) accuracy.
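What such a comparison looks like in code — a sketch, with the name `fuzzy_eq` and the particular tolerance being my own illustrative choices:

```python
# "Fuzzy" equality: call a and b equal when their difference is
# below some arbitrary constant times the smaller magnitude.
def fuzzy_eq(a, b, rel_tol=1e-9):
    return abs(a - b) <= rel_tol * min(abs(a), abs(b))

print(fuzzy_eq(1.23 * 100, 123.0))  # True: the banker is happy
print(fuzzy_eq(1.0, 1.000001))      # False at this tolerance
```

Note that the result now depends on an arbitrary tolerance: two numbers the engineer can legitimately distinguish get declared equal, which is exactly the loss of real precision described above.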

Enough said.  Just a one sentence summary:

COMPARING TWO FLOATING POINT NUMBERS FOR EXACT EQUALITY IS NEARLY ALWAYS
A MISTAKE, WHATEVER BASE THE MACHINE USES.

/k**2