Numbers in JavaScript are doubles, that is, floating point with "double" (64-bit) precision. This means that the largest accurately representable integer is 2^53 = 9007199254740992 (larger numbers can be represented, but with "holes" between one and the next).
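You can see these "holes" directly in the console; a quick sketch:

```javascript
// 2^53 is the last integer that is accurately representable;
// above it, consecutive integers start to collide.
var max = Math.pow(2, 53);     // 9007199254740992
console.log(max === max + 1);  // true: 2^53 + 1 cannot be represented
console.log(max + 2);          // 9007199254740994, the next representable integer
```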
If you need to represent numbers bigger than 2^53, then I suggest looking for a "big number" (or "big integer", or "big decimal") library. I don't know of one in particular to recommend, but there are several; just check exactly what you need and whether the library supports it (in general, at least the basic arithmetic operations will be supported).
These libraries usually represent a large number not with a single variable, but with an array of numbers. That way, the supported precision is virtually limitless, bounded only by the amount of memory on your computer (or the maximum number of elements in an array supported by the language/implementation). Of course, their performance is lower than using a single variable, so these libraries should only be used when really needed.
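To illustrate the idea (this is a toy sketch, not how any real library is implemented), a number can be stored as an array of decimal digits, least significant first, and addition done digit by digit with a carry:

```javascript
// Toy sketch: each number is an array of decimal digits, least
// significant digit first, so precision is bounded only by array length.
function bigAdd(a, b) {
    var result = [];
    var carry = 0;
    for (var i = 0; i < Math.max(a.length, b.length) || carry; i++) {
        var sum = (a[i] || 0) + (b[i] || 0) + carry;
        result.push(sum % 10);
        carry = Math.floor(sum / 10);
    }
    return result;
}

// 9007199254740993 + 1, a sum a double could not do exactly
var a = [3,9,9,0,4,7,4,5,2,9,9,1,7,0,0,9]; // digits of 9007199254740993, reversed
var b = [1];
console.log(bigAdd(a, b).reverse().join("")); // "9007199254740994"
```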
Update: Although JavaScript represents numbers as doubles, bitwise operations such as <<, >>> and & treat their operands as 32-bit integers. For this reason, the largest number these operators can handle is not 2^53, but 2^31 - 1 = 2147483647 (the largest positive integer representable in 32 bits). Trying to use them with larger numbers will cause unexpected behavior.
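A quick demonstration of this truncation:

```javascript
// Bitwise operators first convert their operands to 32-bit integers,
// so any bits above the 31st are silently lost.
var big = Math.pow(2, 32) + 5;     // 4294967301
console.log(big | 0);              // 5: only the lowest 32 bits survive
console.log(1 << 31);              // -2147483648: bit 31 is the sign bit
console.log(Math.pow(2, 31) | 0);  // -2147483648: wraps around to negative
```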
I modified the code to use simple multiplication/division by 2 instead of bitwise operations, and it now works as expected:
var n = 562949953421312;
var power = 1;     // current power of 2
var fatores = [];  // powers of 2 that sum to the original n
while (n != 0) {
    if (n % 2 !== 0) {                    // n is odd: this power of 2 is a factor
        fatores[fatores.length] = power;
        n--;                              // make n even, so the division below is exact
    }
    power *= 2;
    n /= 2;
}
Example in jsFiddle (write a number in the text box and press Enter).
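For reference, the same loop wrapped in a function (the name fatoresDeDois is just illustrative, it is not part of the original code) with a couple of sample calls:

```javascript
// Decomposes n into the powers of 2 that sum to it,
// using only multiplication/division, never bitwise operators.
function fatoresDeDois(n) {
    var power = 1;
    var fatores = [];
    while (n != 0) {
        if (n % 2 !== 0) {   // n is odd: record the current power of 2
            fatores.push(power);
            n--;             // make n even so n / 2 stays exact
        }
        power *= 2;
        n /= 2;
    }
    return fatores;
}

console.log(fatoresDeDois(10));               // [2, 8]
console.log(fatoresDeDois(562949953421312));  // [562949953421312], i.e. 2^49
```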
TL;DR: JavaScript represents numbers as 64-bit floating point, but bitwise operations treat them as if they were 32-bit integers. See my answer below for more details.
– mgibsonbr