I am having some problems when performing sums of decimal values in my application. The variations that occur are not large, but since I am dealing with total amounts of money, I need the totals not to come out with broken, repeating decimal places.
I went looking for information about why this occurs, and in some articles in English I found the term Floating-Point Arithmetic, with an explanation that I translated below:
Why don't my numbers, like 0.1 + 0.2, add up to a nice round 0.3, and instead I get a weird result like 0.30000000000000004?
Because internally, computers use a format (binary floating point) that cannot accurately represent numbers like 0.1, 0.2 or 0.3 at all.
When the code is compiled or interpreted, "0.1" has already been rounded to the nearest number in that format, which results in a small rounding error even before the calculation happens.
PS: The quote was summarized for easier understanding; the original explanation is more detailed.
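To make the quoted behaviour concrete, here is a minimal sketch (written in Java only as an illustration; my real code is not shown here) that reproduces exactly what the quote describes:

```java
public class FloatingPointDemo {
    public static void main(String[] args) {
        // 0.1 and 0.2 are each rounded to the nearest binary double before the
        // addition runs, so a tiny error exists before any arithmetic happens.
        double sum = 0.1 + 0.2;
        System.out.println(sum);          // prints 0.30000000000000004
        System.out.println(sum == 0.3);   // prints false
    }
}
```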
I used the double type for summing my values and ran into several rounding errors; after that I switched to float, which seemed to give me better precision. With these events in mind, I would like to confirm whether there is any type for working with money that is more accurate than float.
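As an illustration of what I observed (the amounts below are hypothetical, not my real data), the double sum shows the broken decimals directly, while the float version happens to print cleanly, even though float actually keeps fewer significant digits:

```java
public class MoneySumDemo {
    public static void main(String[] args) {
        // Hypothetical monetary amounts, only to reproduce the drift I described.
        double totalDouble = 0.10 + 0.20;
        float totalFloat = 0.10f + 0.20f;

        System.out.println(totalDouble); // 0.30000000000000004 -> the "broken" total
        System.out.println(totalFloat);  // 0.3 -> looks exact, but float stores
                                         //        fewer significant digits than double
    }
}
```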
For those who want to delve deeper into the issue, here is a link to an article that explains it in a very complete (and complex) way:
http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html