What is the "word" of a CPU?

In my Operating Systems class the teacher used a term that left me a little confused: the "word" of a CPU (Central Processing Unit). He did not elaborate on it, saying only that it can have different sizes in bits.

Question

I’d like to know what a word is and what relationship it has with the CPU.

2 answers



Initial definition

A word is the natural unit of data of an architecture (processor).

Just as in natural human language the letter is the smallest unit, the syllable is the first grouping of letters, and the word comes next in size, in the computer the bit is the smallest unit, the byte is the smallest grouping (well, it may not always be), and then we have the word. But while in language words vary in size, in current computer architectures all words have the same number of syllables (bytes), and since syllables are also fixed in size, the same number of letters (bits).

When we speak of a word we are talking about data with a fixed size/length/bit width that the architecture works best with.

In general we are talking about the size of the processor's registers, at least the main ones. There may be other, secondary registers for specific tasks, such as floating-point calculation, vectors, cryptography, etc.

Sizes

It can range from 1 bit (rare) to 512 bits (also rare; it may grow larger in the future). The most common size today is 64 bits; 32 bits is also quite common. In small devices, 16 or 8 bits still have their place. Nothing prevents odd sizes; the word need not be a power of 2, although that is the most common.

It is common, but not mandatory, for the word to also determine the maximum theoretical memory addressing. If the largest possible address has 32 bits, it is better for the processor register to have a 32-bit word, so that a pointer fits in a register and can be accessed quickly and easily. Older architectures and some very simple ones (embedded devices) may require more than one register to handle an address. An architecture that needs precise calculations can have a register larger than the largest possible address (e.g., a 64-bit word and 32-bit addresses).
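A hedged illustration of the "pointer fits in a word" point: in Python, the `struct` format character `"P"` reports the size of a native pointer on the current build, and `ctypes` agrees. The values below depend on the platform (8 bytes on a typical 64-bit build, 4 on a 32-bit one), so this is only a sketch, not a universal rule:

```python
import ctypes
import struct

# Size in bytes of a native pointer ("P" = void *) on this interpreter build.
pointer_bytes = struct.calcsize("P")
print(f"native pointer: {pointer_bytes * 8} bits")  # 64 on a typical 64-bit build

# ctypes reports the same size for c_void_p.
assert ctypes.sizeof(ctypes.c_void_p) == pointer_bytes
```

On a machine whose word matches its address size, this number is also the word size in bytes.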

In general this is the size with which the processor handles numbers best. Occasionally a smaller number can be just as efficient, but there are cases where alignment adds overhead. A larger number needs more than one register, which is more complicated for the processor to handle: it is slower and usually loses the atomicity of the operation.

There are architectures that use the word as the unit for data transfers, and again this is no coincidence, since it can simplify some operations.

Another point is that the instruction size tends to be the word size, at least on RISC architectures. This was more common in the past; today instructions tend to be smaller, at least on architectures with large words.

Memory allocations usually occupy multiples of the word size.
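That rounding can be sketched in a few lines. The 8-byte word below is an assumption (a 64-bit architecture); the arithmetic rounds a requested size up to the next word multiple:

```python
WORD = 8  # assumed word size in bytes (64-bit architecture)

def round_up_to_word(n: int, word: int = WORD) -> int:
    """Round a requested allocation size up to the next multiple of the word."""
    return (n + word - 1) // word * word

print(round_up_to_word(1))   # -> 8
print(round_up_to_word(8))   # -> 8
print(round_up_to_word(13))  # -> 16
```

Real allocators round further (to alignment guarantees, size classes, etc.), but the word is the baseline granularity.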

There are architectures whose word size has varied over time. Intel x86, for example, started with 16 bits, then moved to 32 bits, and is now 64 bits; it can handle these 3 sizes, called respectively WORD, DWORD and QWORD.
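These three x86 sizes can be checked with Python's `struct` module, using the fixed-width format characters `H` (16-bit), `I` (32-bit) and `Q` (64-bit); the `<` prefix disables padding so the sizes are exact:

```python
import struct

# x86 assembler names and the corresponding fixed-width struct formats.
sizes = {
    "WORD":  struct.calcsize("<H"),  # 16-bit unsigned
    "DWORD": struct.calcsize("<I"),  # 32-bit unsigned
    "QWORD": struct.calcsize("<Q"),  # 64-bit unsigned
}
print(sizes)  # {'WORD': 2, 'DWORD': 4, 'QWORD': 8}
```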

In the past the word tried to match the character size, but that no longer makes sense.

Table of several known architectures.

  • One explanation better than the other

  • @Diegofarias I’m glad you liked it. If you want to see everything I have answered, organized by votes: https://answall.com/users/101/bigown?tab=answers&sort=votes. Not everything will interest you, but there are some cool things. If you only want C#: https://answall.com/search?tab=votes&q=user%3a101%20%5bc%23%5d

  • So, I’m very interested in learning more. I saw that your posts lead to more posts and references with a lot of good content. I am new to the area, but I intend to master the subject and, as you say in some posts, not buy into myths. As my knowledge grows, I am realizing that some programmers I thought were gods are not what I thought =). Thanks for the tip, I will follow it and try to access the platform regularly.

  • There are no gods; everyone has flaws, even the best. Today our industry lives on a lot of myths.

  • Yes, I believe even the best make mistakes, but far less than people who don’t know what they’re really doing. I saw in the link you passed that there are more than 940 C# posts to follow. I will take some time to go through them; this content will be useful to learn more and mature the knowledge I already have. Thank you.

1

Processors respond to program commands (or, by extension, the programmer’s) through machine language, in the form of binary numbers, representing, for example, 0 = 0 volts and 1 = 5 volts. This language is nothing more than the interpretation of a "table" of instructions, where each instruction ("opcode") has a task to perform within the processor.

These "opcodes" or instructions are stored in program memory (ROM or RAM) and the processor reads, decodes and runs sequentially one by one.

The entire sequence of events occurring within the microprocessor "chip", from the moment the system is powered on, is controlled by the clock, which sends pulses to the electronic components, arranged in such a way as to constitute a complex state machine. Each 0 and 1 stored electronically in program memory initializes and advances this state machine, providing the instructions for the next state.

Typically several clock cycles are required for the system to completely settle (or stabilize), depending on the type of "instruction" it has been fed.

The number of instructions the system designer wants determines the minimum number of bits (zeros and ones) needed to encode the instruction set. So with 1 bit we have only 2 possible states or instructions; with 2 bits, 4 instructions (00, 01, 10, 11); with 4 bits, 16 instructions, and so on.
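The counts above are just powers of two, which a quick check confirms:

```python
# Number of distinct encodings possible with n bits is 2 ** n.
for bits in (1, 2, 4, 8):
    print(bits, "bits ->", 2 ** bits, "instructions")

# With 64 bits, more than 18 quintillion encodings are possible.
print(2 ** 64)  # -> 18446744073709551616
```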

That number of bits is the processor’s word.

But does that mean that with 64 bits, more than 18,000,000,000,000,000,000 instructions are possible?

Yes, but to better understand why the word is so big, let’s proceed...

The processing of each instruction is usually done in two steps: the fetch, where the instruction is transferred from memory to the instruction decoder circuit; and the execution proper. See Instruction Cycle.

Taking the 8-bit 8085 microprocessor as an example: the fastest instructions, usually only one byte long, are executed in four clock cycles; the slowest, those in which the processor needs to fetch two more bytes of data from memory, take up to 16 cycles. In all, this processor has 74 instructions, and its clock reached a maximum of 5 MHz.

As we can see, the old processors were inefficient in the time they took to process instructions. Higher performance can be achieved:

  • by increasing the clock frequency, which is limited by the physical (electrical and magnetic) constraints of the buses (interconnections);

  • by increasing the number of external bits, which is also limited by physical space;

  • by reducing the number of cycles needed to perform each instruction, which today is done by overlapping the instruction fetch cycles with decoding and/or by using cache memory;

  • by executing instructions in parallel, or by multiprocessing;

  • or, finally, by increasing the number of bits processed internally, that is, giving the ALU (Arithmetic and Logic Unit), the registers and the accumulator(s) a larger capacity: 16 bits, 32 bits, 64 bits...

Reviewing the history of microprocessors, the first, Intel’s 4004, had a 4-bit word. Instructions were divided into two "nibbles", that is, 4 bits or half a byte each: the first was the opcode, the second the modifier. Two more nibbles could compose an address or larger instruction data. See the PDF manual of this chip, the 4004 datasheet. Although it had instructions equivalent to an 8-bit processor, it could only perform calculations (it was designed for a calculator!) directly with no more than 4 bits.

Nowadays processors no longer decode instructions solely with hard-wired logic, but through microprograms, and they use ever more advanced and complex architectures.

Within each "opcode" is embedded much more information than those old instructions. In addition, the processor, incidentally, each of the various processors is able to handle and perform calculations with numbers with much more digits and decimals, in the interests of greater efficiency.
