In most programming languages, when you want to create a bit mask you usually use an integer type and bitwise operations (and, or, xor, not, shift left, shift right...). However, although nothing prevents the programmer from assigning a specific value to the mask (say, 6, binary `110`), it is common practice to create constants representing each bit and to insist on using those constants instead of "magic values", both as good practice and to avoid compatibility problems in the future. This is not usually enforced, however.
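The usual integer-based approach can be sketched as follows; the constant names and the `IntMask` class are hypothetical, chosen just to illustrate the technique:

```java
public class IntMask {
    // One named constant per bit, as the "good practice" above suggests
    static final int READ    = 1 << 0; // 001
    static final int WRITE   = 1 << 1; // 010
    static final int EXECUTE = 1 << 2; // 100

    public static void main(String[] args) {
        int mask = 0;
        mask |= READ | WRITE;        // set bits
        mask &= ~WRITE;              // clear a bit
        System.out.println((mask & READ) != 0); // test a bit: prints "true"

        // Nothing stops a "magic value", though: the compiler accepts any int
        int magic = 6; // 110
    }
}
```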
Would there be any harm in creating an abstract "bit mask" type whose subtypes were particular applications of this technique, and having the compiler enforce the use of this type? For example, some languages that support enumerations (enums), like Java, allow you to create methods whose parameters must be of the enum type, so that the programmer has no choice but to use its members, even when each of them has one or more [unique] values associated with it. An enumeration may or may not be used to implement bit masks, but it also has other purposes[1].
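Java's `EnumSet` is an existing example of such a typed bit mask: it is backed internally by a bit vector, but the compiler only accepts members of the declared enum. A minimal sketch (the `Permission` enum and `canRead` method are hypothetical):

```java
import java.util.EnumSet;

public class TypedMask {
    enum Permission { READ, WRITE, EXECUTE }

    // Only Permission values are accepted; a raw integer cannot slip in
    static boolean canRead(EnumSet<Permission> mask) {
        return mask.contains(Permission.READ);
    }

    public static void main(String[] args) {
        EnumSet<Permission> mask = EnumSet.of(Permission.READ, Permission.WRITE);
        mask.remove(Permission.WRITE);     // clear a bit
        System.out.println(canRead(mask)); // prints "true"
    }
}
```

A call like `canRead(6)` simply does not compile, which is exactly the enforcement the question asks about.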
My question is, specifically: is there any use case for bit masks in which the freedom to use integers instead of defined constants brings a significant advantage, and its loss could compromise the expressiveness of the code? I think only those with experience working with bit masks can answer this, but an external reference on the subject would also be quite useful. In my limited experience, the main uses of a bit mask are:
- Set multiple bits, or set (or clear) a particular bit;
- Check whether a particular bit (or set of bits) is set;
- Serialize/deserialize (i.e. save the data structure containing the bit mask to a file or another binary/textual format).
I can’t think of any others.
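The third use case, serialization, is where a typed mask still has to meet the raw integer. One possible sketch packs an `EnumSet` into a `long` using each constant's ordinal as its bit index (the `Permission` enum and method names are hypothetical):

```java
import java.util.EnumSet;

public class MaskSerialization {
    enum Permission { READ, WRITE, EXECUTE }

    // Pack the set into a long, one bit per constant, by ordinal
    static long toBits(EnumSet<Permission> set) {
        long bits = 0;
        for (Permission p : set) bits |= 1L << p.ordinal();
        return bits;
    }

    // Rebuild the typed set from the raw bits
    static EnumSet<Permission> fromBits(long bits) {
        EnumSet<Permission> set = EnumSet.noneOf(Permission.class);
        for (Permission p : Permission.values())
            if ((bits & (1L << p.ordinal())) != 0) set.add(p);
        return set;
    }

    public static void main(String[] args) {
        EnumSet<Permission> mask = EnumSet.of(Permission.READ, Permission.EXECUTE);
        long bits = toBits(mask);
        System.out.println(bits);                        // prints "5" (binary 101)
        System.out.println(fromBits(bits).equals(mask)); // prints "true"
    }
}
```

Note that relying on `ordinal()` ties the serialized form to the declaration order of the constants; assigning an explicit value to each constant is safer when the set of elements evolves across versions, which is precisely the concern raised in the footnote below.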
[1]: By the way, contrary to the premise of that related question, I have good reasons to want to change the bit mask throughout a product's evolution, both its individual values and its set of elements, but always with versioning, so as not to break old code. This restricts my particular case, but does not invalidate the question (I remain interested in knowing what is lost when using a specific type for a bit mask instead of a "generic integer").
If you have a good reason and you know what you are doing, everything is valid... :) I hope you get very good answers, since the question is a good one.
– Maniero
I believe typing matters here in only one respect: improving the maintainability and readability of the code. In computational terms, what matters to the processor is the value of the byte itself, not the name of the variable, etc. The advantage/disadvantage would only be in terms of code maintenance and ease of use of the API. You could sometimes use IntelliSense, make testing easier, things like that, but I don't know if that is a big advantage. If it doesn't generate a lot of maintenance on that part, good documentation would supply the need for a type.
– EProgrammerNotFound
The C# language allows you to combine enums with the `|` operator. That is, it can be typed. Now, whether it should? I don't know how to answer that.
– EProgrammerNotFound
This article on MSDN can give you an idea of the arguments the engineers used for adding this feature to the language: link
– EProgrammerNotFound