How does a computer understand binary code?

53

How does a computer understand binary code? How was binary code created, and who created it?

  • 10

    I liked the question, it's very interesting, but I think it will be closed for being too broad, unless an answer clarifies it a little, in which case I'd like to see it. +1 ;)

2 answers

54


Concrete part

As the name says, binary code is just a sequence of bits, i.e., state indicators that are either on or off. At execution time these are just electrical pulses of low and high voltage ("high" is a manner of speaking: it is actually quite low, just a bit higher than "low", which is almost zero).

All of this passes through logic gates (the electronic equivalent of relays) that perform some operation, in general:

  • inverting the 0 or 1 signal (NOT),
  • maintaining the state (buffer),
  • producing 1 when at least one of two signals is 1 (OR),
  • producing 1 only when both signals are 1 (AND),
  • plus some variations of these.
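As a sketch, these gates can be modeled as boolean functions on bits, and the "variations" are just compositions of the basic ones (the function names here are illustrative, not any standard library):

```python
# Basic logic gates modeled as functions on bits (0 or 1).
def NOT(a):     return 1 - a
def AND(a, b):  return a & b
def OR(a, b):   return a | b

# Variations built by composing the basic gates.
def NAND(a, b): return NOT(AND(a, b))
def XOR(a, b):  return AND(OR(a, b), NAND(a, b))
```

XOR, for instance, is not a primitive here: it results in 1 exactly when the two inputs differ, built purely from OR, AND, and NOT.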

As the bits flow, they trigger these "relays" according to the logic gates, which determine where these and other bits go. Decisions in the processor are gates opening or closing according to the electrical pulses (data, information) passing through them.

That is all the processor knows how to do. A set of gates interconnected in a specific order produces specific actions.

Think of the gates as very simple (but very concrete) instructions that, put in a certain order, form a program, an algorithm, to do the most basic things (an addition, which is not even the simplest operation there is, but the simplest to understand), up to some quite complex ones on modern or more specialized processors (for example, encryption or vectorization). This varies with the architecture (see below).

The processor has an instruction set.

Bits pass through those gates once per clock tick (Hertz). In a 3.0 GHz processor they pass 3 billion times per second (a light bulb typically "flashes" 60 times per second). Some instructions need several passes to execute completely; even an addition cannot be performed in a single pass (cycle).

A processor is a huge state machine.
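The "maintaining the state" gate behavior above is where that state lives. As a sketch, an SR latch (two cross-coupled NOR gates) holds a bit even after its set/reset inputs return to 0; the loop below just iterates the two gates until they settle, standing in for real propagation delay:

```python
def NOR(a, b):
    return 1 - (a | b)

def sr_latch(s, r, q, nq):
    # Two cross-coupled NOR gates: each gate's output feeds the other's
    # input. Iterate until the pair reaches a stable state.
    for _ in range(4):
        q, nq = NOR(r, nq), NOR(s, q)
    return q, nq

q, nq = sr_latch(1, 0, 0, 1)   # pulse "set": q becomes 1
q, nq = sr_latch(0, 0, q, nq)  # inputs released: the bit is held
```

The stored bit survives with both inputs at 0, which is exactly the memory a state machine needs between clock ticks.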

Modern processors have billions of transistors (forming the logic gates).

Example gate schematic (obviously this is a representation to help a human understand):

ALU gates

Abstract part

So every processor, underneath, has "programs" inside it made with electrical mechanisms, in some cases called microcode. Part of what is there controls how these programs should work. One part makes memory contents flow into the processor, figures out what they mean, and does something. That content is the binary code: some bits enter a specific place and trigger the part of the processor that must do something.

There was a time when the hardware itself was programmed, that is, gates or sets of gates were organized to do exactly what was desired. That was how it worked in the '40s when modern computing started, and it was still common until the '70s; today it is rare, but it still happens in niches.

Creating a processor, put simply, is a mixture of finding and manipulating the right materials to obtain the desired properties (speed, heat dissipation, etc.) and creating "programs" inside them that will execute the most basic things.

Some bits indicate that the processor should hand control to the "internal program" that performs an addition, for example. The bits that follow are the values to be summed (simply some gates, with extra logic to handle the carry), and the result is placed in an area of the processor for another instruction to do some other operation with it, possibly sending it to memory.
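That carry handling can be sketched as a ripple-carry adder: a full adder built only from AND/OR/XOR gates, chained bit by bit (an illustration of the principle, not how any real ALU is wired):

```python
def full_adder(a, b, cin):
    # One column of the addition: sum bit plus carry-out,
    # expressed purely with AND (&), OR (|) and XOR (^) gates.
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def add4(x, y):
    # Add two 4-bit numbers, rippling the carry from bit to bit.
    carry, result = 0, 0
    for i in range(4):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result
```

`add4(0b0101, 0b0011)` yields 8; note that `add4(15, 1)` wraps to 0, because the final carry has nowhere to go in a 4-bit register.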

Some architectures have variable-size instructions (from 8 to 120 bits on Intel x86, for example), typical of CISC; in others every instruction has the same size no matter what it is (the word size, 32 or 64 bits, on ARM), typical of RISC.

Put simply, the first has the advantage of saving space and the second has the advantage of performance and efficiency, although there are techniques to compensate in either direction, so it is more complicated than that, and today each approach borrows improvements from the other.

Now, how does the processor take these bits and know what to do? There is "programming" controlling it: the instruction cycle (fetch, decode, execute). Here is a slightly more detailed breakdown, from Wikipedia (typical of CISC; in RISC the instructions have a simpler cycle):

  1. Calculate the memory address containing the instruction
  2. Fetch the instruction (Instruction Fetch)
  3. Decode the instruction
  4. Calculate the address of the operands
  5. Fetch the operands (Operand Fetch)
  6. Execute the operation
  7. Store the result in a memory address or register

A number of specific components make all this work, typically: PC, MAR, MDR, IR, CU, ALU, MMU, just to stay with the basics. Modern, general-purpose processors have parts that do much more than that to optimize operations and provide other functionality.

More general-purpose processors have some additional parts to help the operating system work, such as process control, memory management, interrupts (signaling), protection, etc. Others are specific to certain types of application, especially on more modern and powerful processors; one example is virtualization.

Algorithm for executing an instruction

I will stop here because the question is a bit broad and almost out of scope (though far from deserving to be closed).

Obviously I made some simplifications, so don't take everything literally, and I didn't cover the processor's communication with memory. You can ask more specific questions about whatever a developer needs to understand about what they are doing.

In fact, it is interesting to see how a language virtual machine works: the CLR, JVM, Parrot, NekoVM, and the VMs of specific languages such as Lua, PHP, Python, Harbour, etc., because they simulate a processor for this entire abstract part (in interpreted languages, with a huge switch-case :) ).
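A sketch of what that "huge switch-case" looks like inside an interpreter (a toy stack machine with invented opcodes; Python has no `switch`, so an `if`/`elif` chain plays that role):

```python
def execute(bytecode):
    # The dispatch loop: each iteration decodes one opcode --
    # this if/elif chain is the interpreter's "huge switch-case".
    stack, pc = [], 0
    while pc < len(bytecode):
        op = bytecode[pc]; pc += 1
        if op == "PUSH":
            stack.append(bytecode[pc]); pc += 1
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "DUP":
            stack.append(stack[-1])
    return stack

execute(["PUSH", 6, "PUSH", 7, "MUL"])  # leaves [42] on the stack
```

The same fetch/decode/execute shape as the hardware, only the "gates" are now branches in software.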

Further reading:

Want to play binary programming?

  • 6

    I was hoping you would answer :D

  • 1

    I don't understand much of this, but I'd like to ask: is this phrase right: "resulting in 1 when at least two signals are 1 (or), or in 1 if the two signals are 1 (and)"?

  • 1

    @jbueno That was to see if you were paying attention :D Thank you.

  • 3

    Great answer. Summarized processor architecture in a few lines.

  • "There was a time when the hardware was programmed, that is, organizing gates or sets of gates ready to do what was desired" — that reminded me of Fortran, where programming was done with punched cards.

  • @Julioborges Actually there is no such direct relation. I programmed in Fortran long ago without punched cards, and I've seen many languages that used punched cards. Anyway, what I described is lower-level still: I'm talking about chips and transistors (or valves).

  • @bigown That's true; it's amazing how technology constantly evolves. All of this amazes me.

  • In fact, it's just an illustration for those who have never seen a logic gate schematic, with no intention of analyzing what it does. The answer doesn't set out to go deep into how the hardware works; the first part is only there so the reader isn't left in a vacuum. It caught your interest because you already know the subject, but the intention was to focus on decoding the instruction and its execution. I think I will still improve it; I didn't expect it to be so well received.

  • I really liked the answer, but I have a question about the last paragraph, where you say "they simulate a processor for this entire abstract part (in the interpreted ones, in a huge switch-case)". How does that work? Is there any question about it here on SOPT?

  • @Francisco As far as I know there is nothing about it on the site. I'm not sure I understand exactly what you want to know about how it works, but it would be too long to explain in a comment here.


13

To supplement the subject:

In summary, for those who are lost, the logic of binary code is
0 -> off
1 -> on.
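For example, the byte 01000001 is just eight such on/off states; it is the surrounding convention that decides whether those bits mean the number 65 or the letter "A":

```python
bits = "01000001"
value = int(bits, 2)   # interpret the bits as an unsigned integer: 65
letter = chr(value)    # the same bits under the ASCII convention: 'A'
```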

Modern computing

The first "modern computers" organized data on punched cards; anyone who has taken a basic computing class usually learns about them. Punched cards were created in 1832 and later, at the end of the 19th century, were improved by a company we now know as IBM.

In 1946 came memory cells, composed of integrated circuits formed by transistors and capacitors. These gave rise to RAM (DRAM, SRAM) and several other devices such as the flip-flop.


Origins

The creation of modern binary code is attributed to Gottfried Leibniz.

His inspiration was the Chinese I Ching; the entire creation of the code is based on its diagrams (hexagrams).

