Although the question has already been answered, I believe it is worth explaining why these inaccuracies occur.
The IEEE 754 standard (here and here), which defines floating-point numbers such as float or double (Java, C, C++, C#), or JavaScript's numbers in general, treats numbers internally in a scientific notation whose base is always 2.
Since the base is 2, and the normalized mantissa must always be greater than or equal to 1 and smaller than the base, its integer part is always 1.
Thus:
- 4 is treated as 1.0 × 2²
- 10 is treated as 1.25 × 2³
- 6.25 is treated as 1.5625 × 2²
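If you want to reproduce this decomposition in code, here is a minimal Java sketch (Java is just one of the languages mentioned above; the class and variable names are illustrative) using Math.getExponent and Math.scalb:

```java
public class Normalize {
    public static void main(String[] args) {
        for (double x : new double[] { 4.0, 10.0, 6.25 }) {
            int exponent = Math.getExponent(x);          // unbiased power-of-two exponent
            double mantissa = Math.scalb(x, -exponent);  // x / 2^exponent, always in [1, 2)
            System.out.println(x + " = " + mantissa + " x 2^" + exponent);
        }
    }
}
```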
To store this value, however, the accuracy is not infinite; it is limited to a specific number of bits. float uses 32 bits and double (or a JavaScript number) uses 64 bits, divided as follows:
32 bits: s|eeeeeeee|23 × m (1 bit for the sign, 8 for the exponent and 23 for the mantissa)
64 bits: s|eeeeeeeeeee|52 × m (1 bit for the sign, 11 for the exponent and 52 for the mantissa)
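To see these fields in practice, a small illustrative Java sketch can pull them out of the raw bits returned by Float.floatToIntBits (one detail glossed over above: the stored exponent carries a bias of 127 in the 32-bit format):

```java
public class BitFields {
    public static void main(String[] args) {
        float f = 6.25f;
        int bits = Float.floatToIntBits(f);

        int sign     = bits >>> 31;              // 1 bit
        int exponent = (bits >>> 23) & 0xFF;     // 8 bits, stored with a bias of 127
        int mantissa = bits & 0x7FFFFF;          // 23 bits (fractional part only)

        System.out.println("sign     = " + sign);
        System.out.println("exponent = " + (exponent - 127));            // 2 for 6.25
        System.out.println("mantissa = " + Integer.toBinaryString(mantissa));
    }
}
```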
Since the integer part of the mantissa is always 1, the mantissa field stores only its fractional part.
To convert this binary notation to a decimal representation, each bit of the mantissa must be multiplied by a negative power of 2: the first bit by 2⁻¹, the second by 2⁻² and so on.
With that, a mantissa like 10010000000000000000000 (using 23 bits for simplicity), when converted to decimal, becomes:
2⁻¹ + 2⁻⁴ = 0.5 + 0.0625 = 0.5625
Since the leading 1 is implicit, this mantissa is actually worth 1.5625.
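A small sketch of that decoding, using the 23-bit mantissa string from the example (the method name is just illustrative):

```java
public class DecodeMantissa {
    // Converts a 23-bit mantissa string into its decimal value, adding the implicit 1.
    static double decode(String mantissaBits) {
        double value = 1.0;                      // the implicit leading 1
        for (int i = 0; i < mantissaBits.length(); i++) {
            if (mantissaBits.charAt(i) == '1') {
                value += Math.pow(2, -(i + 1));  // first bit is 2^-1, second is 2^-2, ...
            }
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(decode("10010000000000000000000")); // 1.5625
    }
}
```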
This is where several problems with seemingly simple numbers arise.
Take the number 3.2, for example. In base-2 scientific notation, it becomes 1.6 × 2¹.
So far, so good. The problem arises when converting this mantissa to binary: its fractional part has to be expressed as a sum of negative powers of 2, and it turns out that 0.6 cannot be represented by a finite sum of negative powers of 2. Note its binary representation (using 23 bits for simplicity):
10011001100110011001100
If grouped correctly, one can see that it is a repetition of 1001:
1001 1001 1001 1001 1001 100
The last repetition, however, is cut off at 100. Even if it were not cut off, the sum of these powers would still not reach exactly 0.6, similar to what happens when dividing 1 by 3 in decimal.
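You can generate this repeating pattern yourself with a sketch like the one below (the method name is illustrative; the 0.6 passed in is itself already a double approximation, which does not matter for the first few dozen digits):

```java
public class FractionBits {
    // Prints the first `bits` binary digits of a fraction in [0, 1).
    static String toBinary(double fraction, int bits) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < bits; i++) {
            fraction *= 2;
            if (fraction >= 1) {
                sb.append('1');
                fraction -= 1;
            } else {
                sb.append('0');
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toBinary(0.6, 23)); // 10011001100110011001100
    }
}
```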
To a computer using the data type float, 3.2 therefore cannot be stored exactly: with the mantissa truncated as above, the stored value would be approximately 3.1999998092651367 (the IEEE standard actually rounds the last bit to the nearest representable value, giving approximately 3.2000000476837158, but either way it is not exactly 3.2).
With double, the only difference is that there are many more bits of precision, so the stored value gets much closer to 3.2, but it is still "wrong".
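If you want to inspect the exact stored values yourself, a quick Java sketch can use the BigDecimal(double) constructor, which preserves the binary value exactly instead of rounding it for printing:

```java
import java.math.BigDecimal;

public class ExactlyStored {
    public static void main(String[] args) {
        // The BigDecimal(double) constructor keeps the exact binary value,
        // so it shows what the float and the double really contain for 3.2.
        System.out.println(new BigDecimal(3.2f)); // 3.2000000476837158203125
        System.out.println(new BigDecimal(3.2));  // 3.20000000000000017763568394002504646778106689453125
    }
}
```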
Now, occasionally, numbers that seem "strange" to us can be stored exactly. For example, 5.13671875.
Transformed into scientific notation: 1.2841796875 × 2².
Despite the seemingly large number of decimal places, this number is perfectly storable in a float or double variable, because 0.2841796875 is a sum of four negative powers of 2:
2⁻² + 2⁻⁵ + 2⁻⁹ + 2⁻¹⁰
In the 23-bit binary format: 01001000110000000000000
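To confirm that this number really is stored without any rounding, a short illustrative Java sketch prints its exact stored value and its mantissa bits:

```java
import java.math.BigDecimal;

public class ExactNumber {
    public static void main(String[] args) {
        float f = 5.13671875f;

        // The exact value held by the float: no extra digits appear.
        System.out.println(new BigDecimal(f)); // 5.13671875

        // The 23 mantissa bits, padded with leading zeros: 01001000110000000000000.
        int mantissa = Float.floatToIntBits(f) & 0x7FFFFF;
        System.out.println(String.format("%23s", Integer.toBinaryString(mantissa)).replace(' ', '0'));
    }
}
```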
If you want to test other numbers, you can use online interactive material that I make available to my students: IEEE 754 Floating Point.
And this page, http://carlosrafaelgn.com.br/Aula/Flutuante2.html, has four tutorials teaching how to do this process manually.
There, I use the 32-bit format, but you can already get an idea of how the 64-bit format would work.
Related question: Inaccurate result when calculating with fractional numbers – bfavaretto