Somebody told me the other day that since money is made from paper, and paper comes from trees, money actually *does* grow on trees! Anyway, I digress.
There's also a software adage that says "money does not float". What it means is that for currency calculations you should never use a 'float', or floating point arithmetic in general. I bet we've all done it. We automatically assume that a double or float will be fine for working with amounts of money. If the amounts are all whole numbers then a long or an int would suffice, but for anything with a fractional part, floating point arithmetic is probably the worst option you can take. And I bet there's some of your code out there, in production, happily purring along, using floats. Well, be afraid. Be very afraid.
An Example of the Problem
For starters, consider the following. What do you think the output of this line of code would be?
System.out.println(1.20f - 1.00f == 0.20f);
Go on, take a guess. 'true', or 'false'? That's right, of course it's a trick question, and of course it outputs a big, fat, ugly: false
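If you print the two sides instead of comparing them, you can see why. Here's a quick check you can run yourself:
System.out.println(1.20f - 1.00f); // prints 0.20000005
System.out.println(0.20f);         // prints 0.2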
That's the problem with floating point arithmetic. It's not precise. If you've got financial code using floats, and you're making decisions in your code based on the type of logic above, you're screwed. :-)
Therein Lies The Rub
So if you're anything like me, and you're not a mathematician, you're thinking "but why?" Glad you asked. It turns out that floating point arithmetic is quirky in computers, and it stems from the fact that computers store numbers in binary format. Makes sense, I guess. On the face of it this doesn't seem problematic, but take for instance the number 1/3. How do we write that in decimal? To be totally accurate, we would have to write 0.3333333333333 and so on and so forth into eternity. But since we have limited patience and limited paper, we approximate the number. We write as many 3's as we think are important, and anything beyond that, who cares, we'll suck it up. The result is therefore not a precise number, it's simply an approximation, but in our estimation an acceptable one. If I want 1/3 + 1/3, I'm happy with 0.66, even though the correct answer is 0.6666666 into eternity.
Computers have the same problem when converting decimal numbers to their binary representation. For example, there's no exact binary representation for the decimal 0.1. What you get is 0.000110011001100110011... with the pattern 0011 repeating forever, cut off at some point depending on the bit size. Most computers use a standard called IEEE 754 for representing floating point numbers in binary. With this decimal to binary conversion, rounding errors creep in, and depending on the circumstances they can come back to bite you. I won't go much further into this. Joel On Software gives a great explanation of it in his article on the Excel bug that was caused by this decimal to binary oddity. Read it here. The point I want to bring home is that using floating point numbers for currency calculations can cause rounding errors, depending on what the calculation is and what it translates to in binary.
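You can see this for yourself in Java. The BigDecimal(double) constructor (more on BigDecimal further down) reveals the exact value a double actually holds when you write 0.1:
// 0.1 can't be stored exactly as a double; this prints the value that is actually stored
System.out.println(new java.math.BigDecimal(0.1));
// prints 0.1000000000000000055511151231257827021181583404541015625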
Let's look at an example:
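Suppose you ring up ten items at 10 cents each using a double. A rough sketch (the values are just for illustration):
double total = 0.0;
for (int i = 0; i < 10; i++) {
    total += 0.10; // add ten 10-cent items
}
System.out.println(total);        // prints 0.9999999999999999, not 1.0
System.out.println(total == 1.0); // prints false
Ten lots of 10 cents should be exactly one dollar, but the tiny binary rounding error in each 0.10 adds up, and the equality check fails.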
So why use the darn thing then? Why floating point arithmetic?
Well, for one thing, it's fast. Floating point performance is apparently a big deal in computing. When you hear computer speed benchmarks referred to as FLOPS, or megaflops, or teraflops, guess what? It's a measure of how many floating point operations a machine can do per second. Somehow that's a big deal to us geeks. :-)
Big Brother, I mean, Decimal To The Rescue
You've read this far, and you're about to throw your hands up in despair. "What the heck do I do then?" you retort. Indeed. What do we use? Well, it seems the best thing to do when working with something like currency is to work in the lowest unit of that currency, and do the conversion magic in the presentation, be that a user interface or a paper report. So you do all your calculations in cents, but display the result in $'s. Java also gives us a class for this (and I believe some other languages have equivalents): BigDecimal. BigDecimal doesn't use floating point arithmetic, it works with exact decimal values, so you don't get these binary rounding surprises. You may be wondering why we don't just always use it. Well, it's a lot slower than floating point arithmetic. So it's a trade-off.
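Here's a minimal sketch of both approaches; the amounts and variable names are just illustrative. The first keeps everything in cents as longs and only formats as dollars for display; the second uses java.math.BigDecimal, built from Strings so no double ever gets involved:
// Work in the lowest unit (cents) and format as dollars only for display
long priceInCents = 1999; // $19.99
long taxInCents = 150;    // $1.50
long totalInCents = priceInCents + taxInCents;
System.out.println(String.format("Total: $%d.%02d", totalInCents / 100, totalInCents % 100)); // Total: $21.49

// Or let BigDecimal do exact decimal arithmetic
java.math.BigDecimal price = new java.math.BigDecimal("1.20");
java.math.BigDecimal paid = new java.math.BigDecimal("1.00");
System.out.println(price.subtract(paid)); // prints 0.20, exact this time
System.out.println(price.subtract(paid).compareTo(new java.math.BigDecimal("0.20")) == 0); // prints true
Note the String constructor: new BigDecimal(0.1) would drag the double's rounding error in with it. And the comparison uses compareTo rather than equals, because equals also compares the scale (0.2 versus 0.20).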