## Money is not a float

One suggestion I took to heart is that in order to be great, you need to work on fundamentals. It’s no good to be up to date with the latest and greatest, be they Agile techniques or new technologies, if you’re weak on fundamentals.

So I’m starting a collection of fundamentals. It’s certainly not going to be comprehensive; rather, it’s a random collection of things that I think are fundamental, yet that many experienced developers get wrong.

Let us start with a surprising discovery: did you know that the number 1/10 *cannot be represented in a finite way* in base 2? Yep, it turns out that in base 2 the number 1/10 is periodical, much like the number 1/3 has no finite decimal representation in base 10. But what is the implication for us?
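A quick way to see this for yourself in Java (the class name here is mine): the `BigDecimal(double)` constructor exposes the exact binary value that a double actually stores, rather than the rounded form that `println` shows.

```java
import java.math.BigDecimal;

public class ExactValueOfTenth {
    public static void main(String[] args) {
        // new BigDecimal(double) preserves the double bit for bit,
        // revealing the exact value of the binary number nearest to 0.1.
        System.out.println(new BigDecimal(0.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625
    }
}
```

So the double you get is not 0.1 at all, but the closest value representable with 53 bits of binary mantissa.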

The implication comes when we make the mistake of representing money as a floating-point number. Suppose you encode the amount “ten cents” as the floating-point number 0.1. Now look at this program, and guess what happens when it runs.

```java
public class MoneyIsNotAFloat {
    public static void main(String[] args) {
        double tenCents = 0.1;
        double sum = 0.0;
        for (int i = 0; i < 10; i++) {
            sum += tenCents;
            System.out.println("0.1 * " + (i + 1) + " = " + sum);
        }
    }
}
```

(Hint: 0.1 times 10 equals… 0.9999999999999999.)

And this is not a Java problem. The same happens in any language, because it’s inherent to binary floating-point arithmetic.

The simple fact is that floating-point arithmetic is **not exact**, therefore it *should not be used for representing money*!

What to use then? One simple solution is to use a plain int to represent an amount of cents. Integer arithmetic **is** exact, and a 32-bit int should be enough for most applications. If you’re worried about overflow, use a BigDecimal type: Java has one, and most modern languages do too. (Just a note: if you use a Java BigDecimal, remember that you should not compare values with “equals”; you must use “compareTo”. Go figure.)
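To make both points concrete, here is a small sketch (the class name is mine) showing exact integer-cents arithmetic, and the BigDecimal equals/compareTo pitfall: two BigDecimals with the same numeric value but different scales are not “equals”.

```java
import java.math.BigDecimal;

public class MoneyAsCents {
    public static void main(String[] args) {
        // Integer cents: exact arithmetic, no rounding surprises.
        int tenCents = 10;   // ten cents, stored as an integer count of cents
        int sum = 0;
        for (int i = 0; i < 10; i++) {
            sum += tenCents;
        }
        System.out.println(sum); // prints 100, i.e. exactly one whole unit

        // The BigDecimal pitfall: 2.0 and 2.00 have different scales,
        // so equals() returns false even though the values are equal.
        BigDecimal a = new BigDecimal("2.0");
        BigDecimal b = new BigDecimal("2.00");
        System.out.println(a.equals(b));         // false -- scale differs
        System.out.println(a.compareTo(b) == 0); // true  -- numerically equal
    }
}
```

Note also that BigDecimal should be constructed from a String, as above: `new BigDecimal(0.1)` would faithfully carry over the inexact double value.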

November 18th, 2009 at 02:40

A few simpler examples…

```c
#include <stdio.h>

int main()
{
    printf( "%g\n", 0.3 - 0.2 - 0.1 );
    printf( "%g\n", 0.3 - ( 0.2 + 0.1 ) );
    printf( "%g\n", 0.4 - 0.3 - 0.1 );
    printf( "%g\n", 0.4 - ( 0.3 + 0.1 ) );
    return 0;
}
```

December 11th, 2009 at 16:59

It’s interesting to note this behaviour with (j)ruby:

```ruby
ten_cent = 0.10
sum = 0.0
10.times { sum += ten_cent }
puts "sum is #{sum}"
```

That results in “1.0” with “native” ruby, and 0.9999 with JRuby.

May 3rd, 2010 at 20:51

An update: I found this very good article in Stephan Schmidt’s blog: http://codemonkeyism.com/once-and-for-all-do-not-use-double-for-money/

September 6th, 2010 at 23:37

Some notes on other popular JVM languages:

1) Groovy does the “right thing” by default:

```groovy
this.a = 0.3
===> 0.3
this.a * 3
===> 0.9
this.a.class.name
===> java.math.BigDecimal
```

2) Scala behaves exactly like Java:

```scala
scala> val cent = 0.3
cent: Double = 0.3

scala> cent * 3
res0: Double = 0.8999999999999999
```

However, using BigDecimal is straightforward thanks to the implicit conversion:

```scala
scala> val cent : BigDecimal = 0.3
cent: BigDecimal = 0.3

scala> cent * 3
res1: scala.math.BigDecimal = 0.9
```