Difference in performance and usage of numeric types

I'd like to know the difference between the types Long, Double, Float, Decimal and Int, taking into account when each is best used in real cases. E.g.: "float is used for interest rates because it is...". And also the difference in performance between them. E.g.: "int performs better than long because..."

I think the question is quite valid, considering that we often use these types incorrectly in our day-to-day work, and however small the performance difference may be, it is always interesting to think about it.

2 answers

In .NET (as in many other languages) there is a separation between integer types and rational types, so I will split my explanation of these types into those two categories.

Integer Types

There are several integer types, and they in turn can be separated into two groups: those that accept negative values and those that do not.

Accept negatives (signed): Int32 is the most common, but there are others: Int16, Int64 (there is no Long type, but there is the C# alias long, which is the same as Int64) and SByte.

│ Type  ║ Bits ║          Minimum           ║          Maximum          ║ C# alias ║ Literal │
╞═══════╬══════╬════════════════════════════╬═══════════════════════════╬══════════╬═════════╡
│ SByte ║  8   ║ -128                       ║ 127                       ║  sbyte   ║         │
│ Int16 ║  16  ║ -32,768                    ║ 32,767                    ║  short   ║         │
│ Int32 ║  32  ║ -2,147,483,648             ║ 2,147,483,647             ║  int     ║    0    │
│ Int64 ║  64  ║ -9,223,372,036,854,775,808 ║ 9,223,372,036,854,775,807 ║  long    ║    0L   │
└───────╨──────╨────────────────────────────╨───────────────────────────╨──────────╨─────────┘

Letters that appear in literals may be uppercase or lowercase: 0L is the same as 0l.
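
For illustration, a quick sketch of how these limits and literal suffixes appear in code (MinValue/MaxValue are constants that every numeric type exposes):

    sbyte a = sbyte.MinValue;          // -128
    short b = short.MaxValue;          // 32767
    int   c = int.MaxValue;            // 2147483647
    long  d = 9223372036854775807L;    // the L suffix makes the literal an Int64
    Console.WriteLine(long.MinValue);  // -9223372036854775808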

Accept only positive values (unsigned):

│ Type   ║ Bits ║ Minimum║          Maximum           ║ C# alias ║ Literal │
╞════════╬══════╬════════╬════════════════════════════╬══════════╬═════════╡
│ Byte   ║  8   ║   0    ║ 255                        ║  byte    ║         │
│ UInt16 ║  16  ║   0    ║ 65,535                     ║  ushort  ║         │
│ UInt32 ║  32  ║   0    ║ 4,294,967,295              ║  uint    ║   0U    │
│ UInt64 ║  64  ║   0    ║ 18,446,744,073,709,551,615 ║  ulong   ║   0UL   │
└────────╨──────╨────────╨────────────────────────────╨──────────╨─────────┘

Letters that appear in literals may be uppercase or lowercase: 0UL is the same as 0ul.
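
And a similar sketch for the unsigned types:

    byte   b  = byte.MaxValue;              // 255
    ushort us = 65535;                      // UInt16
    uint   u  = 4294967295U;                // the U suffix makes the literal a UInt32
    ulong  ul = 18446744073709551615UL;     // UL -> UInt64
    Console.WriteLine(uint.MinValue);       // 0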

In terms of usage, there is no great difference in performance between these types when they are used as local variables or method parameters. Generally Int32 (int) is used for those cases, unless the expected values exceed the limits of int, in which case Int64 (long) is used.
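
A hypothetical example of that choice: a simple item counter fits in an int, while a file size in bytes can easily exceed int.MaxValue and therefore asks for a long:

    int  itemCount       = 150000;         // well within the int range
    long fileSizeInBytes = 5368709120L;    // 5 GB, larger than int.MaxValue (~2.1 billion)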

As for Byte (byte), it is really only used to work with binary data; I have never seen it used in code for any other purpose.
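
For example, reading a file as raw binary data naturally produces a byte[] (the file name below is just an assumption for illustration):

    using System.IO;

    byte[] data = File.ReadAllBytes("picture.png");   // raw binary content
    Console.WriteLine(data.Length + " bytes read");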

The other types are usually only used in very large data structures (struct or class), or in arrays, so that they occupy less memory... but even that only makes sense for heavily used structures, or for very large arrays, on the order of millions or even billions of items.
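
A rough sketch of that memory argument, assuming arrays of ten million small values:

    Console.WriteLine(sizeof(short));             // 2 bytes per element
    Console.WriteLine(sizeof(int));               // 4 bytes per element

    short[] compact = new short[10000000];        // ~20 MB
    int[]   regular = new int[10000000];          // ~40 MB for the same element count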

Besides all this, I see the unsigned types (the ones that only accept positive values) being used in bit-by-bit operations (commonly called bitwise), because of the ease of working with all of their bits, which is harder when there is a sign bit.
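
A small example with uint, where masks and shifts involve no sign bit (the values are arbitrary):

    uint flags   = 0xF0F0F0F0;
    uint low16   = flags & 0x0000FFFF;           // keep only the 16 low bits
    uint shifted = flags >> 4;                   // logical shift: zeros enter from the left
    Console.WriteLine(low16.ToString("X8"));     // 0000F0F0
    Console.WriteLine(shifted.ToString("X8"));   // 0F0F0F0F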

Rational Types

There are only 3 of them in .NET: Single, Double and Decimal.

│   Type   ║     Single    ║      Double               ║      Decimal                            │
╞══════════╬═══════════════╬═══════════════════════════╬═════════════════════════════════════════╡
│ C# alias ║     float     ║      double               ║      decimal                            │
│ Minimum  ║ -3.402823e+38 ║  -1.7976931348623157e+308 ║ -79,228,162,514,264,337,593,543,950,335 │
│ Maximum  ║  3.402823e+38 ║   1.7976931348623157e+308 ║  79,228,162,514,264,337,593,543,950,335 │
│ Literal  ║    0f         ║   0.0  or  0d             ║     0m                                  │
│ Exp. base║    2          ║    2                      ║     10                                  │
└──────────╨───────────────╨───────────────────────────╨─────────────────────────────────────────┘

Letters that appear in literals may be uppercase or lowercase: 0M is the same as 0m.
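
In code, those literals look like this:

    float   f = 1.5f;     // Single
    double  d = 1.5;      // Double (the default for a literal with a decimal point; 1.5d also works)
    decimal m = 1.5m;     // Decimal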

The base-2 types (Single and Double) are operated on by instructions of the processor itself, in a unit called the FPU (Floating-Point Unit)... which in current processors is so optimized that mathematical operations with floating-point numbers can be as fast as with integer types.

The types Single and Double are used when there is no need to match decimal fractions exactly. Examples: calculations involving physics, used in engineering or in simulations made by games, use these types.
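
A tiny sketch of that kind of use, with made-up values for a projectile height calculation:

    double g  = 9.81;    // gravity (m/s^2)
    double v0 = 20.0;    // initial vertical speed (m/s)
    double t  = 1.5;     // elapsed time (s)

    double height = v0 * t - 0.5 * g * t * t;    // small rounding errors are acceptable here
    Console.WriteLine(height);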

In terms of performance, the types Single and Double are the same on current machines, since the FPU converts both internally to 80 bits. So the only real advantage of using Single is in terms of memory usage.

The type Decimal exists to support operations that must match the decimal fractions of the real world, as when working with monetary values... incidentally, I think the M of the literal comes from money (but here I am speculating).
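
The classic illustration of that difference (only the comparisons themselves matter here):

    Console.WriteLine(0.1m + 0.2m == 0.3m);   // True:  decimal stores decimal fractions exactly
    Console.WriteLine(0.1  + 0.2  == 0.3);    // False: 0.1 and 0.2 have no exact binary form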

The type Decimal has 128 bits, of which 96 are used to represent the internal value, called the mantissa, and the rest are used to indicate a base-10 divisor... it is practically an exponent, like the one that exists for base 2, only in base 10 and only negative. Because of this, Decimal cannot represent numbers as large as Single and Double can (since those two accept positive exponents). On the other hand, the type Decimal has an absurdly greater precision.
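
That internal layout can be inspected with decimal.GetBits, which returns the 96-bit mantissa in three ints plus a fourth int holding the sign and the base-10 scale:

    int[] bits  = decimal.GetBits(1.23m);
    int   scale = (bits[3] >> 16) & 0xFF;     // how many times the mantissa is divided by 10

    Console.WriteLine(bits[0]);   // 123  (low part of the 96-bit mantissa)
    Console.WriteLine(scale);     // 2    -> value = 123 / 10^2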

In terms of performance, the type Decimal is very bad compared to the base-2 types, since all of its mathematical operations are performed in the ALU (Arithmetic Logic Unit) and are therefore broken down into several calculation steps, whatever the operation being performed.
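
A rough benchmark sketch of that cost (the exact numbers depend on the machine, but decimal usually loses by a large factor):

    using System.Diagnostics;

    var sw = Stopwatch.StartNew();
    double dSum = 0;
    for (int i = 1; i <= 10000000; i++) dSum += 1.0 / i;
    Console.WriteLine("double:  " + sw.ElapsedMilliseconds + " ms");

    sw.Restart();
    decimal mSum = 0;
    for (int i = 1; i <= 10000000; i++) mSum += 1.0m / i;
    Console.WriteLine("decimal: " + sw.ElapsedMilliseconds + " ms");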

Basically, precision is the big difference between them:

Float: 7 digits (32 bit)

Double: 15-16 digits (64 bit)

Decimal: 28-29 digits (128 bit)

Decimal (decimal) is much more precise than the others and is used in almost all financial applications that require a high degree of precision. On the other hand, decimal values are much slower (up to 20x) than double/float.
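
A simple way to visualize those digit counts (the literal below is arbitrary; the point is how much of it survives in each type):

    float   f = 1.234567891234567890f;
    double  d = 1.234567891234567890;
    decimal m = 1.234567891234567890m;

    Console.WriteLine(f);   // about 7 significant digits survive
    Console.WriteLine(d);   // about 15-16 significant digits survive
    Console.WriteLine(m);   // 1.234567891234567890 (all digits kept)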

  • I believe Decimal is more precise not only because of the extra bits, but because it is not implemented as floating point, which by definition has problems representing certain numbers. That is probably also why Decimal is slower.

  • This is an interesting reference to see the difference between decimal and float: http://gregs-blog.com/2007/12/10/dot-net-decimal-type-vs-float-type/

  • @bfavaretto: the Decimal type in .NET is floating point; only the exponent is in base 10 instead of base 2 - see: Decimal Structure.

  • Interesting, @Miguelangelo, I didn't know that! Actually, I know very little about .NET, which is why I used "believe" and "probably"... Thank you for the information.

  • @bfavaretto: No worries... my intention was really just to inform, not to criticize or anything. Even though you said "I believe", I imagined that some people might take it as fact. =D

  • No problem, @Miguel. I didn't think you were criticizing, and I'm always happy to learn something new :) In fact, what you added will be useful to many people.
