My question is: how does the compiler know I'm working in binary? Is there any way to write the number so the compiler knows it really is a binary number?
The compiler doesn't know, and indeed there is nothing in the code to indicate it.
The most important concept to learn here is that the idea of binary, decimal, or any other notation is something that serves human beings. For the computer, none of these abstractions exist. Under the hood everything is binary; everything else is a way for us to understand it better.
When we write a number in decimal in the code, we are only using a representation that is intuitive to us. In the source code the number is text, until it is compiled. When a number is printed on the screen or anywhere else, what gets printed is text that represents the number held in memory. The way the number is organized in memory is already binary.
So what you see on the screen is not the number, it's just text; neither the compiler, nor the computer, nor anything else in the process knows what that text is about, but a human who sees it does.
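As an illustration, the same value in memory can be printed as several different texts; a minimal sketch of this idea (nothing here beyond standard printf):

#include <stdio.h>

int main(void) {
    unsigned n = 25;    /* written in decimal in the source, stored as bits */
    printf("%u\n", n);  /* the text "25": decimal representation */
    printf("%x\n", n);  /* the text "19": hexadecimal representation */
    printf("%o\n", n);  /* the text "31": octal representation */
    /* in memory it is always the same bit pattern: ...00011001 */
    return 0;
}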
So the function doesn't convert decimal to binary; it only seems to. It takes a number that exists by itself (on most architectures it will have 64 bits to hold its value in memory) and is not as decimal as it seems. Some calculations extract the individual bits (in a rather inefficient way), and each cycle of the algorithm reduces the number and yields a 0 or a 1. Each of those will probably be stored in 32 bits, although 1 bit would be enough.
At the end, each of these numbers, stored individually in an array, is printed. There's nothing binary about it. There's an illusion of binary notation going on, but it's only a few characters (yes, they're characters), 0 and 1, printed one after the other.
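A sketch of the kind of function being described (the original code isn't shown here, so the names are illustrative), assuming the usual divide-by-2 approach:

#include <stdio.h>

/* Each cycle reduces the number and yields a 0 or a 1; each result is
   stored in a whole int even though a single bit would be enough. */
void to_binary_digits(unsigned long long n, int digits[], int *count) {
    *count = 0;
    do {
        digits[(*count)++] = (int)(n % 2); /* the reduction: a 0 or a 1 */
        n /= 2;
    } while (n > 0);
}

int main(void) {
    int digits[64];
    int count;
    to_binary_digits(25, digits, &count);
    for (int i = count - 1; i >= 0; i--)
        printf("%d", digits[i]);  /* prints the characters '1','1','0','0','1' */
    printf("\n");                 /* the text "11001", not a binary number */
    return 0;
}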
I have this function that converts decimal to binary, but then how do I sum bits, use & (AND), etc.?
If you want to operate on the number you don't have to do anything special; operate on it the way you need to, there is nothing to convert. Bit operators work on any number because every number is already made of bits.
To use the & operator do we need two decimal numbers? E.g.:
25 & 25
Or can we do:
11001 & 11001
You operate on numbers; whether the literal in your source text happens to be written in the decimal form we know is irrelevant. But if you want binary you need to use binary notation in the code. What you're doing there is applying AND to the number eleven thousand and one with itself, which will obviously yield itself and is usually pointless. If you want the literal written in the code to be in binary notation, it has to be:
0b11001
Which you know as 25.
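A small demonstration that the two notations name the same number (note that 0b literals only became standard in C with C23; before that they are a common compiler extension, e.g. in GCC and Clang):

#include <stdio.h>

int main(void) {
    int a = 25;
    int b = 0b11001;           /* same value, written in binary notation */
    printf("%d\n", a == b);    /* prints 1: they are the same number */
    printf("%d\n", a & b);     /* prints 25: x & x is always x */
    printf("%d\n", 25 & 7);    /* prints 1: 11001 & 00111 = 00001 */
    return 0;
}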
You probably want to end up with:
int num = 0b101;
I put it on GitHub for future reference.
If you want someone to type zeros and ones and have the program understand them, you have to do the reverse operation: validate that each typed character is one of these two (they can be converted to numbers automatically in some cases) and keep adding the exponentiations to reach the desired number, which can be simplified with the shift operator (<<). Again we are talking about the difference between the representation for humans and how the number is represented internally by the computer.
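A minimal sketch of that reverse operation, with a hypothetical helper name parse_binary, validating the characters and using << instead of explicit exponentiation:

#include <stdio.h>

/* Builds the number from a string of '0'/'1' characters typed by the
   user. Returns 1 on success, 0 if any other character appears. */
int parse_binary(const char *text, unsigned *out) {
    unsigned value = 0;
    for (; *text != '\0' && *text != '\n'; text++) {
        if (*text != '0' && *text != '1')
            return 0;                         /* validate: only these two characters */
        value = (value << 1) | (*text - '0'); /* shift left and add the new bit */
    }
    *out = value;
    return 1;
}

int main(void) {
    unsigned n;
    if (parse_binary("11001", &n))
        printf("%u\n", n); /* prints 25 */
    return 0;
}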
That's what I understood and was able to answer.
Wouldn't it be more interesting to use char num = 0b101;? That way only 1 byte, i.e. 8 bits, is used. But to have a good range of numbers is it better to use int num = 0b101; for 4 bytes, that is, 32 bits? – Fábio Morais
For such a small number you could; a char is a 1-byte numeric type. An int probably has 32 bits, but that depends on where it's running. – Maniero
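As the comment says, the sizes depend on the platform; a quick way to check them where the code runs (sizeof(char) is 1 by definition, the others vary):

#include <stdio.h>

int main(void) {
    printf("char: %zu byte(s)\n", sizeof(char)); /* always 1 by definition */
    printf("int:  %zu byte(s)\n", sizeof(int));  /* commonly 4 (32 bits) */
    return 0;
}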
Thank you very much. I'm not going to accept an answer right away, to try to attract more answers and get information from different points of view. Thank you. – Fábio Morais