Yep. Open your browser’s console and do
.1 + .2
and you get 0.30000000000000004. One of the reasons not to use floating point when working with money.
What’s the right way to do money math without floats?
Fixed-point arithmetic
There is also decimal floating-point arithmetic, which has a larger range than fixed-point and represents decimal fractions exactly. Java (BigDecimal), C# (decimal), Python (decimal.Decimal), Ruby (BigDecimal), etc. have built-in support for it.
Banks and big companies have to worry about round-off and fractions of a penny, so Decimal is a better solution for them. But the great unwashed like you and me will never have to worry about that, so either works.
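For example, a quick Python sketch (the price, tax rate, and banker's-rounding choice are just illustrative assumptions):

from decimal import Decimal, ROUND_HALF_EVEN

price = Decimal("19.99")
rate = Decimal("0.0825")  # hypothetical 8.25% tax rate
# Round the tax to whole cents; banker's rounding is a common choice for money
tax = (price * rate).quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)
print(price + tax)  # 21.64 -- exact decimal math, no binary round-off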
Also known as “if you ain’t storing cents, you ain’t making sense.”
As other people mentioned, things like the decimal structure work well, but you can also just use an int to store how many pennies something costs and convert it to dollars for display.
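A minimal sketch of that approach in Python (the variable and helper names are my own, not from the thread):

price_cents = 1999  # store $19.99 as an integer count of pennies

def fmt_dollars(cents: int) -> str:
    """Convert an integer cent amount to a dollar string, for display only."""
    sign = "-" if cents < 0 else ""
    dollars, rem = divmod(abs(cents), 100)
    return f"{sign}${dollars}.{rem:02d}"

total = price_cents * 3       # all the math happens on exact integers
print(fmt_dollars(total))     # $59.97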
Use a dedicated data type or library. Some languages also have something like Python's Decimal type:
>>> from decimal import Decimal
>>> .1 + .2
0.30000000000000004
>>> Decimal(".1") + Decimal(".2")
Decimal('0.3')
IEEE 754, the same spec that defines base-2 floating point, was revised in 2008 to add base-10 (decimal) formats that eliminate these issues. Many languages already support them natively, as do most database engines. Otherwise, you can probably find third-party library support.
If you don’t have access to an IEEE decimal implementation, or if you just wanna be a rulebreaker, the common strategy is to store plain integers along with the precision level you want. So, say, if you’re just dealing with simple American dollars, you’d make sure to always interpret the integer value as “cents”. If you need more precision than that, you might do “millicents”.
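A sketch of that scale-factor idea in Python (the unit constant, the fee rate, and the round-half-up choice are my own illustration):

# Interpret stored integers as millicents: 1/1000 of a cent
UNITS_PER_DOLLAR = 100_000

price = 19 * UNITS_PER_DOLLAR + 99_000  # $19.99 in millicents
fee_rate_ppm = 2_900                    # hypothetical 0.29% fee, in parts per million

# Integer math throughout; round once, at the end, explicitly (half up)
fee = (price * fee_rate_ppm + 500_000) // 1_000_000
total = price + fee

dollars, rem = divmod(total, UNITS_PER_DOLLAR)
print(f"${dollars}.{rem:05d}")          # $20.04797 -- display only, still no floats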