Well, my second flag rejected (of two) :-)
I understand that this site has rules different from SO's - and I would like to take this opportunity to say that this site is, by a wide margin, my favorite among the SE sites I have visited (more of them than show in my profile) - and it may not be such a good thing to comment from a peripheral position. That's a matter of opinion too :-)
Okay, how do we settle this question? First, take the four older languages that were mentioned and identify why they used "1" as a starting point (and, along the way, establish which of them, if any, even has "arrays" in the sense most people understand the word).
Then take a handful of languages that use zero, preferably popular ones, and find out why they do. Trace each back to its parent language wherever there is an obvious lineage.
See whether either of the two sets of answers points to a "clustering" of reasons which, in turn, supports the idea of a specific reason for this trait.
Also find out for Lua and Mathematica (this may be the easy part) why they use one.
Put all the results together. Consider. Speculate. Theorize. Publish.
That's work for a computer historian who specializes in language development.
Dijkstra
A man of opinions, every one of them correct to his own absolute, certain knowledge. To him, his knowledge is absolute, and his opinions are facts.
Some quotes from the note cited in that answer.
I mention this experimental evidence - for what it is worth - because some people feel uncomfortable with conclusions that have not been confirmed in practice.

Many programming languages have been designed without due attention to this detail [that we had better regard zero as a most natural number].

[A mathematical colleague of mine] accused a number of younger computing scientists of "pedantry" because - as they do by habit - they started numbering at zero. He took consciously adopting the most sensible convention as a provocation.
Dijkstra writes very well. It is a pleasure to read his texts, not only for the content but also for the hints of good humor.
But to say that starting at zero is the only way to do things, and that any other choice would be neither natural nor rigorous, is... quite a statement. It assumes that everything in society (or at work), especially in computing, must proceed with rigor and lead to some natural conclusion that cannot be disputed.
Things don’t work that way.
I think Dijkstra would love to get into an elevator and see a Piso 0 ("Floor 0") button. He might refuse to enter an elevator that used T for the ground floor, or 1 for the first floor above the garage.
In fact, I like this idea. Imagine Dijkstra pressing zero and ending up in the garage. He goes back into the elevator and presses zero again. The garage again. He would write well about it, saying that the problem is the design of the building. OK, that's just my opinion and my imagination.
Open a file drawer. Look at the folders inside. If someone says "the first", "the last", "the third from the bottom", "the one next to the red one" or something like that, we can find what we're looking for, whether we count from zero or from one. We may even have to look at the contents of two folders to find it, but it will work if the information we were given is correct.
Does it matter?
The language processor handles everything anyway. Whatever we write (as long as it is valid for that language), the language processor will do what it has to do. In many cases the question no longer even has a meaning in itself. An associative array doesn't start at zero, or at one. Today there are many "loop constructs" that don't require an initial value at all, because, when you think about it, it is reasonable to expect of a higher-level language that "I would like to visit all the values of this thing; compiler, please start at the right place" just works.
No, I'm not spending any more effort on this question than it's worth, sorry. I marked this as CW, since it is a set of reactions to opinions, on an opinion-based question, which can only be answered with uncertainty; I certainly don't want any reputation for it, even if someone deems it worthy.
It’s a question for a bar on the beach, for when you’re stuck in an elevator with some programmers, or for your dreams, when you win the debate and the whole (programming) world agrees with you.
Starting Position 1 = Offset 0
COBOL, that language that has been around for a while and that perhaps few here know, does not have "arrays", but it does have "tables", defined with OCCURS:
    01  A-TABLE.
        05  FILLER OCCURS 100 TIMES
                INDEXED BY AN-INDEX-NAME.
            10  SOME-DATA PIC XX.
To reference an entry in the table by subscripting, you can use the index AN-INDEX-NAME, or a literal, or a data-name (integers only, although we don't have "ints" as such).
The fastest way to access an element is with a literal, because the compiler calculates the offset at compile time. Literal 1 gets offset 0; literal 2 gets ((2 - 1) * element size); and so on.
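A minimal sketch of that, using the table above (WS-ONE-ENTRY is a hypothetical receiving field I've added for illustration; SOME-DATA is 2 bytes, PIC XX):

    77  WS-ONE-ENTRY  PIC XX.   *> hypothetical receiving field
    *> In the PROCEDURE DIVISION - each literal subscript is resolved
    *> to a fixed offset at compile time:
    MOVE SOME-DATA (1)   TO WS-ONE-ENTRY   *> offset (1 - 1) * 2 = 0
    MOVE SOME-DATA (2)   TO WS-ONE-ENTRY   *> offset (2 - 1) * 2 = 2
    MOVE SOME-DATA (100) TO WS-ONE-ENTRY   *> offset (100 - 1) * 2 = 198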
You give the index a value with SET. SET ... TO 1 and the compiler stores an offset of zero; SET ... TO 2 and it stores ((2 - 1) * element size); and so on. You can SET ... UP BY or SET ... DOWN BY and the compiler adds or subtracts the element size as appropriate.
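Again as a sketch, with the same hypothetical WS-ONE-ENTRY:

    SET AN-INDEX-NAME TO 1        *> the index now holds offset 0
    MOVE SOME-DATA (AN-INDEX-NAME) TO WS-ONE-ENTRY
    SET AN-INDEX-NAME UP BY 1     *> compiler adds the element size: offset 2
    SET AN-INDEX-NAME DOWN BY 1   *> and subtracts it again: offset 0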
Using the index is the second-fastest way. Most of the time.
Using a data-name as a subscript, the compiler has to generate code to "convert" the value into an offset. A little more work. A data-name subscript with a value of 1 will always be resolved to an offset of 0.
Except in one special case: when the element size is one byte. Then the compiler uses the data-name's value directly and simply "pretends" that the table starts one byte earlier. In this special case, the data-name-as-subscript is faster than using the index.
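A sketch of both cases; WS-SUB, WS-ONE-BYTE and the single-byte-element table A-BYTE-TABLE are all hypothetical names added for illustration:

    77  WS-SUB       PIC 9(4) COMP.
    77  WS-ONE-BYTE  PIC X.
    01  A-BYTE-TABLE.
        05  A-BYTE PIC X OCCURS 100 TIMES.
    *> In the PROCEDURE DIVISION:
    MOVE 3 TO WS-SUB
    *> 2-byte elements: code is generated to compute (WS-SUB - 1) * 2
    MOVE SOME-DATA (WS-SUB) TO WS-ONE-ENTRY
    *> 1-byte elements: no arithmetic at all - the value 3 is applied
    *> against a base address "pretended" to be one byte before the table
    MOVE A-BYTE (WS-SUB) TO WS-ONE-BYTE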
We humans always use 1 for the first entry of a table. The compiler uses 0, or, in the special case of the single-byte-element table, adjusts its implementation of 0 and uses 1 for the first entry.
Now, the thing is, in COBOL our tables are of fixed size. We need to know how many entries in the table are actually in use; the compiler cannot tell us. And if we have to track how many there are, we have to accept that the count is 0 when there are no entries. If on top of that we had to subscript from zero, we would have an interesting test on our hands to check whether we had exceeded the number of entries in the table.
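For instance, a sketch assuming a hypothetical WS-ENTRY-COUNT field that the program maintains itself (since the compiler cannot):

    77  WS-ENTRY-COUNT  PIC 9(4) COMP VALUE 0.   *> 0 means "table is empty"
    *> In the PROCEDURE DIVISION:
    PERFORM VARYING WS-SUB FROM 1 BY 1
            UNTIL WS-SUB > WS-ENTRY-COUNT
        DISPLAY SOME-DATA (WS-SUB)
    END-PERFORM
    *> With 1-based subscripts the bound test is simply "> count", and
    *> an empty table (count 0) never enters the loop. Subscripting
    *> from zero, the test would become ">= count" - the "interesting"
    *> mismatch mentioned above.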
My guess would be that the above is not the reason the subscripts "start" at one in the design, but rather that the designers decided, since the purpose of the language was to make programming easier for humans, that starting at one was the easier concept to grasp (at that stage in the history of computing).
To get a basis for one point or the other, you would probably have to take a look at the FLOW-MATIC language, from which much of COBOL's design was taken.
Maybe, who knows, it was not even a conscious decision, but something so "obvious" that it was never much discussed.
However, it is unlikely that anyone alive knows the answer for COBOL. Or FORTRAN. Or ALGOL. Smalltalk is a little younger. Perhaps an answer could be found by going through all of COBOL's design notes. Maybe not.
It’s just a question of speculation and opinion, and the way I see it, there’s no "right" or "wrong" answer to why those languages of that time did it this way.
Dijkstra, who seems to think everyone should start at 0, certainly did not know.
Answer freely translated from the English original, available in the revision history.
Perhaps, and this is my opinion (an opinion-flagged question), what was originally considered more important was that a subscript should mean something analogous to the human reader. Putting something into slot zero when explaining it is less easy to understand than putting something into slot one. The compiler/interpreter does some "translation" work, but that is not a big problem.
– Bill Woodger
I could almost understand :D
– Maniero
@Bigown For the fastest things, I use English. When time matters less, my own weak Portuguese. For answers, a mixture of time, Google Translate, and my son's help. This one: only the weak method :-)
– Bill Woodger
@BillWoodger don't worry, there are times when your text is much easier to understand than that of some of our "natives" :-)
– Bacco