The float and double types in Java are approximations: they use binary floating point, so most decimal values (like 0.10) can't be stored exactly. This is by design, trading exact decimal accuracy for fast hardware arithmetic.
If you are new to Java you will run into this at some point, or a colleague will tell you never to use floats and doubles to represent money, perhaps without explaining why. The reason is exactly this: floats and doubles only hold approximate values.
If you’ve never come across this before, try this experiment:
double result = 0.10 + 0.10 + 0.10;
You would expect result to equal 0.30, but if you compare it with 0.30 you’ll find this snippet of code unexpectedly prints false:
if (result == 0.30) {
    System.out.println("true");
} else {
    System.out.println("false");
}
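Printing the value makes the problem visible. Below is a minimal, self-contained version of the experiment (the class name is just illustrative): each 0.10 is stored as the nearest binary fraction, the rounding errors accumulate, and the sum lands slightly above 0.30, so the equality check fails.
public class MoneyPrecisionDemo {
    public static void main(String[] args) {
        double result = 0.10 + 0.10 + 0.10;

        // Each 0.10 is stored as the nearest binary fraction, and the
        // rounding errors add up to a value just above 0.30.
        System.out.println(result);          // prints 0.30000000000000004
        System.out.println(result == 0.30);  // prints false
    }
}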
To represent exact decimal values, use BigDecimal, and construct it from a String or an integer rather than from a double so you don’t capture the approximation. Alternatively, money can be stored as an integer number of the smallest unit (cents or pennies, for example) in a long, which avoids the approximation issues entirely.
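Here is a short sketch of both approaches (the class and variable names are just illustrative). Note that compareTo is used instead of equals, because BigDecimal.equals also compares the scale, so 0.30 and 0.3 would not be considered equal.
import java.math.BigDecimal;

public class MoneyAlternativesDemo {
    public static void main(String[] args) {
        // BigDecimal built from Strings holds exact decimal values.
        BigDecimal total = new BigDecimal("0.10")
                .add(new BigDecimal("0.10"))
                .add(new BigDecimal("0.10"));
        System.out.println(total);                                    // 0.30
        System.out.println(total.compareTo(new BigDecimal("0.30")));  // 0, meaning equal

        // Integer cents: a long counts whole cents, so arithmetic is exact.
        long totalCents = 10 + 10 + 10;
        System.out.println(totalCents + " cents");                    // 30 cents
    }
}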
tl;dr: Don’t use float and double to represent money values in Java.