The first programming language

Back when computers were purely mechanical and were programmed with punched cards, I understand how it worked. Later, when the first "digital" computers built with valves and relays appeared, it becomes harder to understand how programs were made. I imagine there was something like a compartment with a pre-wired relay array (similar to punched cards) that contained a fixed, single program for the machine to run. When the program had to change, the programmers simply rearranged the valves and the machine carried on working.

How did we get to the notion of programming we have today, where the programmer types code in a text editor and compiles it? I can't imagine how it came to this. Take Assembly, for example: it is just translated from text to binary by an assembler, but that would seem to require another programming language to create the assembler in the first place.

Suppose there were only computers controlled by relays and valves. How could anyone write Assembly code when there was no text editor or anything of the sort?

Is the result of an assembly run another text-format file containing only 0s and 1s? Or are electrical impulses sent to the valve machine?

  • I didn't quite understand your question; you need to be more specific. I would suggest you start by studying the generations of computers to understand how they work. One detail: the concept of the digital computer (binary code) arose with the transistor; that was when machine code appeared, the binary language that is the basis of digital computing.

  • I understood it perfectly; it is an interesting question, but not one for this site's format, as it will end up generating very broad answers. I still went and read the answer to see if it could be saved, and honestly I think not. I gave +1 but voted to close.

  • Maybe learning a little digital electronics will help. All the developments on the software side were just ways of making life easier.

  • @Denercarvalho actually, transistors came well after that.

  • I think it will depend on what you would accept as a language. For example, a switch is a language: it just says ON/OFF. Look at the walls of your house; there are several of them, and you use them every day to turn lamps on and off. Now try to look at it from another angle, and think about how you could improve such a simple mechanism. Most people would say there is nothing to improve or nothing to be done. A few would say: "I think I can add a gear or slider structure to slide/rotate over a resistor circuit and thus control the energy level."

2 answers

TL;DR

Some statements in the question rest on contemporary premises. You have to abandon all of that to understand how the "old" computers worked. And I doubt anything succinct will give a good idea of how they functioned; that only really happens, concretely, with a lot of study. There was a lot of variation.

0s and 1s can be stored and processed in various ways. So can their input and output.

it would take another programming language to create the Assembler

You don't have to. You can do it by hand. It's not a pleasant task for a human being, but it was what there was. And that, I think, resolves much of the doubt.

Under the premise of the question, there was no first programming language running on computers. It existed only on paper. Someone would write down what the program was supposed to do, turn it right there into numeric code, and then convert that to binary, all by hand; the same thing the computer does, except for the conversion to binary, since internally everything is already binary anyway. Internally, the language was binary code.

So Assembly, or a rudimentary form of it, existed only on paper. The assembler was a human. The automated assembler came later, since it made no sense to use the computer to automate everyone else's problems but not its own.
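To make the "human assembler" concrete, here is a minimal sketch in C (the mnemonics and opcodes are invented for illustration, not taken from any real machine) of the purely mechanical step a person performed on paper: look each mnemonic up in a table, write down its numeric code, then its binary form.

#include <stdio.h>
#include <string.h>

/* Hypothetical toy instruction set; every real machine had its own codes. */
struct op { const char *mnemonic; int code; };

static const struct op table[] = {
    { "LOAD",  1 },   /* load a value into the accumulator */
    { "ADD",   2 },   /* add a value to the accumulator    */
    { "STORE", 3 },   /* store the accumulator in memory   */
    { "HALT",  0 },   /* stop the machine                  */
};

/* The step the human assembler did by hand: mnemonic -> numeric code. */
static int assemble(const char *mnemonic)
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(table[i].mnemonic, mnemonic) == 0)
            return table[i].code;
    return -1;  /* unknown mnemonic: the human had to catch this too */
}

int main(void)
{
    const char *program[] = { "LOAD", "ADD", "STORE", "HALT" };

    for (size_t i = 0; i < 4; i++) {
        int code = assemble(program[i]);
        printf("%-5s -> %d -> ", program[i], code);
        for (int b = 7; b >= 0; b--)  /* the binary form, as written on paper */
            putchar((code >> b) & 1 ? '1' : '0');
        putchar('\n');
    }
    return 0;
}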

High-level languages were born the same way. People got tired of writing out something understandable and then also writing the Assembly to do it.

is the result of an assembly run another text-format file with only 0s and 1s?

In a way we can say yes, but it is not a precise definition.

Or are electrical impulses sent to the valve machine?

Everything in computers is electrical pulses. But it's not easy for a human being to generate these pulses on their own, so an electromechanical device is used to generate them: switches. To this day we use keys to enter this data; a panel of keys is a keyboard. These keys are simply organized in a way that is easier for us to use than the switches of the early days were.

Binary codes were entered through mechanical on/off switches. Yes, bit by bit.

[Image: panel]
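As a sketch of that process, here is a toy front panel in C (a hypothetical 16-word machine, not any particular model): each switch position is one bit, and a "deposit" action stores the assembled word at the current address and advances to the next one.

#include <stdio.h>

/* Hypothetical toy front panel: 8 data switches, a 16-word memory. */
static unsigned char memory[16];

/* "Deposit": copy the switch positions into the current address,
   then advance to the next address; exactly one word per press. */
static void deposit(unsigned *addr, const int switches[8])
{
    unsigned char word = 0;
    for (int b = 0; b < 8; b++)
        word = (word << 1) | (switches[b] & 1);  /* switch up = 1 */
    memory[(*addr)++ % 16] = word;
}

int main(void)
{
    /* Entering two words bit by bit: 01100001, then 00000011. */
    int word1[8] = { 0, 1, 1, 0, 0, 0, 0, 1 };
    int word2[8] = { 0, 0, 0, 0, 0, 0, 1, 1 };
    unsigned addr = 0;

    deposit(&addr, word1);
    deposit(&addr, word2);

    printf("memory[0] = 0x%02X, memory[1] = 0x%02X\n", memory[0], memory[1]);
    return 0;
}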

Introduction

Digital computers, i.e., electronic computers based on 0s and 1s, were created in the 1930s/40s, more or less together with programmable computers, that is, machines that were universal Turing machines. At least that's what the official story tells; there are always those who say it happened a little differently.

[Image: Z3]

Initially, all information was stored in rudimentary forms of memory. In fact, each computer tried a slightly different, or a very different, approach, until one form came to dominate. But memory as we know it today only appeared in the 1970s.

There is dispute over which was the first computer. For a long time the ENIAC was officially considered it. Even if that is not true, it is important for understanding the evolution. The Wikipedia article has many details that help you understand how it worked.

The problem with the question is that it assumes there was a single event that structured everything. Actually, there was a lot of experimentation, not all of it as famous. Several paths were taken.

The Mark I (in one of its variations) was an important computer linked to John von Neumann, who is credited as the creator of the basic "modern" computer architecture. Another variation/evolution is also important. If you want details, follow the links; it's hard to understand everything without going through all the details.

There is a list of computers that started this story and which, as a rule, were programmed by hand.

Forget this idea of files, of texts. It was binary. Think concretely.

How it worked

There was even a point where someone "programmed" the computer by rearranging valves. But this cannot be considered programming as we know it today. It quickly became apparent that a more flexible form was needed: the program had to be entered in some way (it could even be by punched card), stored, and then followed, as dictated by Alan Turing's theory.

You can imagine there wasn't much room for error. The programmer had to make sure his code was error-free; everything had to be checked by eye. Of course, at that time only very capable people worked with this: they were careful because they knew they could not make mistakes, they mastered the whole process (after all, they had invented all of it) and, of course, they only built things that were very simple by current standards.

Data entry

Data entry has nothing to do with programming itself. Just as today, programs are entered for a specific purpose. So whether you were entering numbers to be calculated or numbers that would instruct the computer on what to do, it didn't matter; just as today it doesn't matter whether you're using the keyboard to send a message or to create a program.

One of the problems with flipping the switches was that the moment you entered another program, the current one was obviously lost. So punched cards (or paper tape) started to be used for the programs. The cards were punched according to the bits that had previously been worked out on paper. That way the program was never lost (except through deterioration of the material, and even then it could be reproduced).

[Image: card with binary code]

Of course, the card was only used as a means of input: it flipped the keys according to whether or not there were holes. The specific mechanical process changed, but the basic concept and operation stayed the same.
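A small sketch of that equivalence in C (using an invented card format: 'O' for a hole, '.' for no hole, eight positions per row): reading a card row produces exactly the same word the switches would have produced.

#include <stdio.h>

/* Toy card format (hypothetical): one row of 8 positions per word,
   'O' = punched hole = bit 1, '.' = no hole = bit 0. */
static unsigned char read_card_row(const char *row)
{
    unsigned char word = 0;
    for (int i = 0; i < 8; i++)
        word = (word << 1) | (row[i] == 'O');
    return word;
}

int main(void)
{
    /* Each row is a durable record of one word of switch settings. */
    const char *card[] = { ".OO....O", "......OO" };
    for (int r = 0; r < 2; r++)
        printf("row %d -> 0x%02X\n", r, read_card_row(card[r]));
    return 0;
}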

Note that this doesn't matter much in itself. The abstractions created are what gave programmers productivity. That is why higher-level languages were created: they were conveniences for the human.

Productivity was also needed in data entry. The gain there is not as great, but it matters. Just as there were keyboards to punch the cards, at least in the more modern versions of the punching machines, people realized they could make the keyboard flip the programming keys directly. Of course, those keys were no longer mechanical, at least not purely.

Punched cards had already been used in mechanical computers, but purely for data entry, not for programs, which did not exist.

Modern input

The image below starts to show the computer as we know it today; that was in the mid-1960s. There was keyboard input, output through a printer or video, and storage on magnetic media (now becoming obsolete), still sequential, although perhaps there was a random-access medium somewhere not visible. There was little of it because it was too expensive.

[Image: "modern" computer]

See how what is considered the first microcomputer, in 1975, was so simple that it worked like the first modern computers of the 1940s. It could have an optional keyboard; I don't know if that was available in the first version.

[Image: Altair]

Look at its simulator. Try to program it.

The first language

In the sense of the question, I believe binary code was the first programming language, although it's highly likely that there was something more abstract before it.

[Image: binary code]

I have no evidence, but it is possible that some form of Assembly was the first language used for these modern computers, though not used on the computers themselves, only on paper. If you are willing to consider what was done on paper for earlier computational machines, then you have to look at what Ada Lovelace used, Note G, shown below.

[Image: Note G]

High-level languages were initially created to give programmers more cognitive power. But they were also a way of not depending on the specific instructions of one computer, of being understood by any computer given a transformation, manual or otherwise (by that time assemblers were already in use, and creating a compiler was the natural next step).

Conclusion

Everything is evolution. You have to take it one step at a time, and there are many. You have to create the egg first and then the chicken. Or is it the other way around? Fortunately this dilemma does not exist in computing: first came binary (which is already an abstract concept), then numeric codes (a higher level), later text with Assembly, and only in the 1950s did the more abstract languages, closer to what we humans understand, appear.

To show this better I would have to go deeper and show each step, which would be too long. The links already help with further research. Don't be too lazy to click on them all, here and on Wikipedia. Yes, it gets tiring, but it's the only way to learn and understand it all.

There is a question about the functioning of modern processors. They are not so different from the early ones, although there were "several beginnings".

Also useful: How is a programming language developed?

  • Bigown and Onosendai, thank you very much for the answers; they clarified a lot. Both answers were great, and unfortunately you can only mark one as accepted. I marked @bigown's because I found it more enlightening, but both were great. Just doing a recap to see if I got it: what really evolved was the input of data, wasn't it? Assembly was (initially) just a way to make it easier to write programs, and data entry was still done manually. Only after keyboards appeared did the computer start to be programmed directly, right?

  • @Michaelpacheco Data entry (of the program) is done manually to this day; it's just that the data is at a level closer to what a human being understands. The difference is that initially bits were entered instead of characters, and the keys were more rudimentary. I'd even say we still use keys today, only now each key is easier to use and already carries more complete information than a single bit. Two different things evolved: the input of data, though it changed little; and the abstraction of the code, which I think is what you wanted to know about: binary -> mnemonic -> Assembly.

  • Two weeks ago I had to put together a presentation on programming languages. It would have been great to have this answer at hand then; it would have been very useful. +1

How could anyone write Assembly code when there was no text editor or anything of the sort?

The answer is a bit long, but it will be worth it. Let's take the opportunity to clarify some concepts:

Any and all values in an electronic computer, whether valve-based or transistorized, boil down to the presence of a signal (usually 1, or true) or its absence (0, or false).

When the program had to change, the programmers simply rearranged the valves and the machine carried on working [...]

In fact, valves worked like the transistors that succeeded them: their state could be altered by signals.

Programs, even on the most modern computers, are nothing more than values expressed in binary format that instruct the processor on how to manipulate other values (data) according to predetermined instructions.
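A minimal sketch in C of that idea, using a hypothetical accumulator machine invented for illustration: the "program" is nothing but an array of numbers, and only the processor's fetch-decode-execute cycle gives them meaning.

#include <stdio.h>

/* Hypothetical toy machine: (opcode, operand) pairs acting on one accumulator. */
enum { HALT = 0, LOAD = 1, ADD = 2, PRINT = 3 };

static void run(const int *program)
{
    int acc = 0;
    for (int pc = 0; ; pc += 2) {               /* fetch */
        int op = program[pc], arg = program[pc + 1];
        switch (op) {                           /* decode + execute */
        case LOAD:  acc = arg;           break;
        case ADD:   acc += arg;          break;
        case PRINT: printf("%d\n", acc); break;
        case HALT:  return;
        }
    }
}

int main(void)
{
    /* The "program" is only numbers; the machine gives them meaning. */
    int program[] = { LOAD, 2, ADD, 3, PRINT, 0, HALT, 0 };
    run(program);  /* prints 5 */
    return 0;
}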


From the Wikipedia article on programming, in English (free translation):

[...] programs [for the first generation of computers] had to be meticulously informed using the instructions (elementary operations) of the particular machine, often in binary notation.

Each computer model used different instructions (machine language) to perform the same task.

Computers like the ENIAC used function tables - literally large maps of bits - to enter the instructions:


Cpl. Irwin Goldstein (foreground) sets the switches on one of ENIAC’s Function Tables at the Moore School of Electrical Engineering. (U.S. Army photo)

Operators programmed the computer by turning hundreds of physical switches on and off on these panels.

Cards succeeded the function tables using the same mechanism: each card position corresponded to a true/false value.

Later, assembly languages were developed, allowing the programmer to specify each instruction in text format, giving abbreviations for each operation code instead of a binary number and specifying addresses symbolically (e.g., ADD X, TOTAL).

The first version of Assembly was not really text, just a set of single-character mnemonics covering the available instructions. Used by the EDSAC, it was called Initial Orders.

These libraries kept expanding, with various mnemonics covering ever larger sets of operations.

The first computer with an assembly-style instruction set was the IBM 650. Using valves and punched cards, it worked with 10-digit instruction words: a 2-digit combination for the operation, 4 digits for the memory address and 4 for the address of the next instruction. Note the similarity with modern Assembly languages:

#      op|data|next
#        |addr|instruction

0001 - 00 0001 0000  
0002 - 00 0000 0000
0003 - 10 0001 8003
0004 - 61 0008 0007
0005 - 24 0000 8003
0006 - 01 0000 8000
0007 - 69 0006 0005
0008 - 20 1999 0003

Rendered in mnemonic format, they looked like this:

0004  RSU  61 0008 0007  Resets the accumulator, subtracts the value 2019990003 from
                         the upper accumulator (8003)
0007  LD   69 0006 0005  Loads the value 0100008000 into the distributor
0005  STD  24 0000 8003  Stores the distributor at address 0000: next instruction at
                         position 8003

Writing a program in assembly language is usually more convenient, faster and less prone to human error than direct use of machine language; but because an assembly language is little more than a different notation for a machine language, any two machines with different instruction sets also have different assembly languages.

The answer to your question, then, would be: initially directly in memory, followed by bit-for-bit copies from external media (cards, for example).


The steps described so far explain how computers became sophisticated enough to process command sets. We now come to your second question:

[...] Is the result of an assembly run another text file with only 0s and 1s? Or are electrical impulses sent to the valve machine?

Everything in a computer is binary signals carried by electrical impulses, or representations of those states. Even a text file, where letters and symbols are represented by... binary codes.

In the beginning, monitors were rows of luminescent valves used to display the state of particular bits inside the computer. But since the technology behind the televisions of the time used valves to display images, it was soon migrated to the universe of computers: thus was born the VDU (video display unit); the first commercial model was the Datapoint 3300, launched in 1967.

Another technology was absorbed from the teletypes and typewriters of the time: the keyboard. The ENIAC was the first computer to make use of one, both for data input and, together with a printer, for output.

The marriage of the two technologies - mnemonic instructions that could be viewed on a monitor, and a keyboard that allowed agile input of those instructions - enabled the advancement of programming methods.

Remember how text files are actually sequences of binary codes? One of the first widely used encodings was ASCII, created in 1960 and derived from telegraph codes.


ASCII Chart from a 1972 Printer manual (B1 is the least significant bit).

In this format the letter A is represented by 65, B by 66 and so on.
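A quick sketch in C showing that characters really are these small numbers, printing a few letters alongside their decimal codes and bit patterns:

#include <stdio.h>

int main(void)
{
    /* In ASCII, 'A' is 65, 'B' is 66, and so on. */
    for (char c = 'A'; c <= 'D'; c++) {
        printf("%c = %3d = ", c, c);
        for (int b = 7; b >= 0; b--)
            putchar((c >> b) & 1 ? '1' : '0');
        putchar('\n');
    }
    return 0;
}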

A first program, written in machine language (most likely designed using the operators' mnemonics), was created so that keyboard input could be stored in a memory area as source code. Let's take an example in C:

 volatile int x, y;

 /* Stub so the example compiles; the original assumes foo is defined elsewhere. */
 int foo (int v) { return v; }

 int
 main ()
 {
   x = foo (y);
   return 0;
 }

Such a program copied the keyboard input, character by character, into memory - without interpreting its content, just displaying it on the screen. For example, the word 'volatile' would be stored like this:

v 01110110 
o 01101111 
l 01101100 
a 01100001 
t 01110100 
i 01101001 
l 01101100 
e 01100101
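A sketch of that first editor-like program, written here in C for convenience (the original would have been machine code): copy the keyboard input into memory character by character and echo it, without interpreting anything.

#include <stdio.h>

int main(void)
{
    /* Source-code buffer: keyboard input is stored uninterpreted. */
    char source[1024];
    size_t n = 0;
    int c;

    while ((c = getchar()) != EOF && n < sizeof source - 1) {
        source[n++] = (char)c;  /* store the raw character code */
        putchar(c);             /* echo it to the screen        */
    }
    source[n] = '\0';
    /* Later, another program (the compiler) reads this buffer. */
    return 0;
}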

Extremely inefficient, right? Another part of the program then read those memory addresses and compiled their contents, transforming them into the machine instructions used by the processor.

Our example in C would look like this:

 5  {
 6    x = foo (y);
    0x0000000000400400 <+0>:    mov    0x200c2e(%rip),%eax # 0x601034 <y>
    0x0000000000400417 <+23>:   mov    %eax,0x200c13(%rip) # 0x601030 <x>
 7    return 0;
 8  }
    0x000000000040041d <+29>:   xor    %eax,%eax
    0x000000000040041f <+31>:   retq
    0x0000000000400420 <+32>:   add    %eax,%eax
    0x0000000000400422 <+34>:   jmp    0x400417 <main+23>

The 0x... values are hexadecimal representations of what the processor actually sees: the addresses and binary encodings behind the mnemonic operations (such as mov, xor and retq).

The operator then moved the execution pointer to the compiled result, running the binary code.
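"Moving the execution pointer to the compiled result" still happens today: a JIT compiler does exactly that. A hedged sketch for x86-64 Linux (it assumes POSIX mmap with MAP_ANONYMOUS, and the bytes are the x86-64 encoding of mov eax, 42 followed by ret):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* x86-64 machine code for: mov eax, 42; ret */
    unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    /* Get a page of memory we are allowed to execute. */
    void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) return 1;
    memcpy(buf, code, sizeof code);

    /* "Move the execution pointer to the compiled result". */
    int (*fn)(void) = (int (*)(void))buf;
    printf("%d\n", fn());  /* prints 42 */

    munmap(buf, 4096);
    return 0;
}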

The rest, as they say, is history.

  • Onosendai, I think the most important part, the last one, should go deeper.

  • @Jorgeb. How about now?

  • @Onosendai thank you so much for the answer! I appreciate your dedication; I imagine it must have been quite hard to organize so much information.

  • @Michaelpacheco it was nothing, it's always a pleasure to help. =)
