Use of data type modifiers


0

I was curious to know why data-type modifiers are not more widely used. It is said that the modifiers are used to make the program more "efficient", since you specify exactly the kind of information that will be stored in the variable; however, I don't see them being used in several applications where they could be.

Is there some specific reason for this?

If I were going to create a loop helper variable, one that wouldn't carry any important data but would just implicitly help the program along, why not use the modifiers?

Example:

unsigned short int ajudanteVariavel = 0;

while (ajudanteVariavel < 3)
{
    switch (ajudanteVariavel)
    {
        case 0:
            cout << "First print";
            break;
        case 1:
            cout << "Second print";
            break;
        case 2:
            cout << "Third print";
            break;
    }

    ajudanteVariavel++;
}

While in practice the plain int type is used almost all of the time:

int ajudanteVariavel = 0;

while (ajudanteVariavel < 3)
{
    switch (ajudanteVariavel)
    {
        case 0:
            cout << "First print";
            break;
        case 1:
            cout << "Second print";
            break;
        case 2:
            cout << "Third print";
            break;
    }

    ajudanteVariavel++;
}

Is there a specific reason why they aren't used as much? Or is my idea of how they should be applied mistaken?


3 answers

3

What gain do you expect from this? If you think there is a gain, you need to be able to justify it.

Normally unsigned is used when you really need something without a sign, when it is critical that it be so, or when you need the roughly 4 billion values it can hold. Another, more questionable, use is when the numbers involved clearly can never be negative.

Not everyone understands all the care that using unsigned types requires (so much so that warnings are generated in some cases, though not in all). Why use something different?

Using an unsigned type on its own is not a big problem (though it can be). The problems begin when you mix signed and unsigned types. And since libraries mostly call for signed types, the two will get mixed at some point. Before you start mixing them you have to know how the conversions work, and you have to stay alert at all times to avoid problems of the kind shown in this question. There is no shortage of examples of such problems.
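To make the mixing problem concrete, here is a minimal sketch of the classic comparison trap (the value in the comment assumes a typical platform with a 32-bit int):

#include <iostream>

int main()
{
    int a = -1;
    unsigned int b = 1;

    // In a mixed comparison the signed value is converted to unsigned,
    // so -1 becomes 4294967295 (with 32-bit int) and the condition
    // below is false, which surprises many people.
    if (a < b)
        std::cout << "-1 < 1, as expected\n";
    else
        std::cout << "-1 >= 1?! (usual arithmetic conversions at work)\n";
}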

Optimization

Processors are optimized to work with int. The most obvious numeric semantics are exactly what int offers.

Actually, if you think about it, it would make even more sense to use a char in the example shown (unsigned, it goes up to 255, which is sufficient, and it occupies only 1 byte). Why do you need a type that can reach 2 billion if you only need 3, or a little more than that? It is surely a small number, since you are using it in a switch. On the other hand, the same question applies again: what is the gain?

On some architectures a char may even be slower than an int.

There are those who think they will save memory by using a char in place of an int, but automatic padding to align memory is common, and in the end consumption is often the same.
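A small sketch of that padding effect (the sizes in the comments assume a common platform with a 4-byte, 4-byte-aligned int; the standard does not guarantee them):

#include <iostream>

struct WithChar {
    char counter; // 1 byte...
    int  value;   // ...but the compiler typically inserts 3 bytes
};                // of padding so that 'value' stays aligned

struct WithInt {
    int counter;
    int value;
};

int main()
{
    // On typical platforms both print 8: the char saved nothing.
    std::cout << sizeof(WithChar) << ' ' << sizeof(WithInt) << '\n';
}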

In some cases there may be certain optimizations when the type is unsigned. If you need that, you have to understand when it occurs and know when its use is helpful and will not cause other problems.

Unsigned types are widely used

Bit operations (masks, sizes and some counters, combining data, specific segmented representations such as date/time, etc.) tend to work best with unsigned types. But most problems don't need to deal with them.
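As an illustration of where unsigned shines, a minimal sketch of packing a date into bit fields (the 15/4/5-bit layout is just an assumption for the example):

#include <cstdint>
#include <iostream>

int main()
{
    // Shifts and masks behave predictably on unsigned values
    // because there is no sign bit to worry about.
    std::uint32_t year = 2016, month = 9, day = 14;
    std::uint32_t packed = (year << 9) | (month << 5) | day;

    std::cout << ((packed >> 9) & 0x7FFFu) << '-'  // year  (15 bits)
              << ((packed >> 5) & 0x0Fu)  << '-'  // month ( 4 bits)
              << (packed & 0x1Fu) << '\n';        // day   ( 5 bits)
}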

In fact, experienced programmers often prefer an int32_t in many situations, since it has a fixed size on all platforms and gives more control over what is being done, though it may bring some minimal loss of performance. int is meant to be the platform's most performant type with a guaranteed minimum size (16 bits), but it may be larger, and on modern platforms it usually is (typically 32 bits). Even so, some style guides "forbid" the use of these fixed-size types. There must be some reason.
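A quick way to see the difference between the two on your own machine (the output depends on the platform, which is exactly the point):

#include <cstdint>
#include <iostream>

int main()
{
    // int only guarantees at least 16 bits; int32_t is exactly
    // 32 bits on every platform that provides it.
    std::cout << "int:     " << sizeof(int) * 8          << " bits\n";
    std::cout << "int32_t: " << sizeof(std::int32_t) * 8 << " bits\n";
}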

In other cases they prefer size_t, which is an unsigned type. Note that this type is widely used in real code. If you look at most of the answers here about C or C++, int is used because it works, but in real code the unsigned type (with a modifier, as the question calls it) is indeed used when it makes sense. Where you see:

for (int i = 0; i < array.length(); i++) {
    //something here
}

in professional code, when it makes sense, it is usually written like this:

for (size_t i = 0; i < array.length(); i++) {
    //something here
}

I put it on GitHub for future reference.

When you see code by beginners or laymen, you may be suspicious. But when you see experienced programmers doing something a certain way, they know it is for the best. I'm not saying you shouldn't question it; on the contrary, you need to understand why.

What I see a lot is naive programmers who do not understand all the nuances of the types and use the easiest, most obvious one without thinking about all the implications. That is what happens in most simple examples and exercises. Producing real C/C++ code usually requires thinking a little more about the types.

Conclusion

I believe (opinion territory) that many people, especially in exercises, use int without thinking, either not knowing it could be different or simply because the type without a modifier seems shorter to write. The rest use it because they know it is the best option in that situation.

One of the reasons may be the lack of unanimity: some people see only problems with it.

See also: type documentation, and lesser-known extra types.

• It’s not just a question of performance: typing variables correctly can make code more readable and easier to maintain. Another interesting point is that many computational problems can be solved using only int variables.

1

One reason is that it is not very practical, because the use of more specific types ends up generating many warnings during compilation. For example:

short x;
short y, z;

// the compiler usually shows a warning here, because the result
// of the sum y + z is an int, which can potentially be larger
// than the largest value a short can hold
x = y + z;

Moreover, memory gains are generally negligible, and in general there is no efficiency gain either.

1

The difference with an unsigned type, in summary, is how the first bit of the variable's storage in memory is treated. For a typical 32-bit int, that would be this leftmost 1:

10000000 00000000 00000000 00000000

That bit would be treated as a sign (conventionally 0 = positive, 1 = negative). This set of bits supports storing 4294967296 (2^32) possibilities, which can be interpreted in two main ways:

0 ~ 4294967295 (unsigned)

-2147483648 ~ 2147483647 (signed)

In practice, which interpretation you use only matters for what your implementation needs. To the machine it makes no difference.
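A tiny sketch of those two readings of the same bit pattern (the signed result assumes the common two's-complement representation):

#include <cstdint>
#include <iostream>

int main()
{
    // Only the leftmost (sign) bit of a 32-bit value is set.
    std::uint32_t bits = 0x80000000u;

    std::cout << bits << '\n';                            // 2147483648
    std::cout << static_cast<std::int32_t>(bits) << '\n'; // -2147483648
}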

Roughly speaking, that's it.
