Threads are concurrent lines of execution within the same process (in your case, your Python program). They are concurrent in the sense that they run "at the same time", but each with its own flow of execution, almost as if they were separate programs. The difference is that separate programs really are separate: each has its own memory area, and communication between them is not so simple (although it is possible through several mechanisms, such as sockets, pipes, shared memory, etc.). Threads, on the other hand, run within the same process, so they can share memory directly (and then you need to take certain precautions, such as using semaphores or locks to enforce the order of access to the data and prevent corruption, for example one thread writing while another reads).
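Just to illustrate that last point, here is a minimal sketch of my own (it is not taken from your program; the names counter and increment are made up for the example) of two threads sharing a variable and using a threading.Lock to serialize access to it:

import threading

counter = 0                # memory shared by all threads of the process
lock = threading.Lock()    # the "semaphore" that serializes access to counter

def increment(times):
    global counter
    for _ in range(times):
        with lock:         # only one thread at a time enters this block
            counter += 1   # the read-modify-write is now safe

t1 = threading.Thread(target=increment, args=(100000,))
t2 = threading.Thread(target=increment, args=(100000,))
t1.start()
t2.start()
t1.join()
t2.join()
print(counter)             # always 200000; without the lock it could end up smaller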
On a computer with only one CPU there is no magic: the threads do not really execute at the same time. The operating system (OS) is responsible for "scheduling" the threads so that each one runs a little at a time, producing the feeling (or the practical result) that they are simultaneous. So the OS lets thread A run a little, then hands control to thread B to run a little, then goes back to thread A, and so on. That is also why, when you share memory and use semaphores (note that this is not the case in your example program, which is quite simple), you need to watch out for blocking situations: thread A may be stopped waiting for thread B to release a resource, while thread B is stopped waiting for thread A to release another resource, which produces a deadlock that freezes your entire program. On the other hand, on a computer with more than one CPU, threads can really run simultaneously, one on each available and free CPU. The concurrency problems mentioned above may still exist, but performance is better because the OS has an easier time scheduling (which still needs to happen).
Well, in your program there are two threads that run the same function. Therefore they do exactly the same thing and tend to take the same amount of time in each scheduling slice. Assuming a computer with only one CPU, the OS knows it can hand control of the single processor to another thread whenever the current one stops processing to do something involving the hardware, or simply to wait. So, when one of your threads calls print, during the time the printing takes to happen on the hardware the thread is doing nothing, and the OS passes control of the single processor to the other thread. The same happens when it calls sleep.
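Just to keep the discussion concrete, here is a minimal sketch along the lines of what your program seems to do (the function name corre and its details are assumptions of mine, reconstructed from your output; only the overall idea matters): two threads running the same function, each one printing and sleeping on every lap.

import threading
import time

def corre(nome, passo):
    distancia = 0
    while distancia < 1000:
        print("Carrinho :", nome, distancia)
        distancia += passo    # fixed increment per lap (see the final note below)
        time.sleep(0.3)       # while this thread sleeps, the OS runs the other one

ed = threading.Thread(target=corre, args=("Ed", 1.1))
paulo = threading.Thread(target=corre, args=("Paulo", 1.2))
ed.start()
paulo.start()
ed.join()
paulo.join()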
Since your two threads do the same thing, the result tends to alternate fairly regularly (sometimes the order flips, but the two threads always get "even" turns), as you experienced:
Carrinho : Ed 0
Carrinho : Paulo 0
Carrinho : Paulo 1.2
Carrinho : Ed 1.1
Carrinho : Ed 2.2
Carrinho : Paulo 2.4
Carrinho : Ed 3.3000000000000003
Carrinho : Paulo 3.5999999999999996
Carrinho : Paulo 4.8
Carrinho : Ed 4.4
Carrinho : Ed 5.5
Carrinho : Paulo 6.0
Carrinho : Ed 6.6
Carrinho : Paulo 7.2
Carrinho : Ed 7.699999999999999
Carrinho : Paulo 8.4
Carrinho : Ed 8.799999999999999
Carrinho : Paulo 9.6
Carrinho : Ed 9.899999999999999
Carrinho : Paulo 10.799999999999999
Carrinho : Ed 10.999999999999998
Carrinho : Paulo 11.999999999999998
Carrinho : Ed 12.099999999999998
Carrinho : Paulo 13.199999999999998
Carrinho : Ed 13.199999999999998
Carrinho : Paulo 14.399999999999997
Carrinho : Ed 14.299999999999997
Carrinho : Paulo 15.599999999999996
Carrinho : Ed 15.399999999999997
Carrinho : Paulo 16.799999999999997
Carrinho : Ed 16.499999999999996
Carrinho : Paulo 17.999999999999996
Carrinho : Ed 17.599999999999998
Carrinho : Paulo 19.199999999999996
Carrinho : Ed 18.7
Carrinho : Paulo 20.399999999999995
Carrinho : Ed 19.8
Carrinho : Paulo 21.599999999999994
Carrinho : Ed 20.900000000000002
However, try completely removing the line with the time.sleep(0.3) call and redirecting the output to a text file (running programa.py > teste.txt, for example). Then look at the final result (it will run very quickly and you will not see anything on screen, since the output will be inside the text file). It will be something like:
Carrinho : Ed 0
Carrinho : Ed 1.1
Carrinho : Ed 2.2
Carrinho : Ed 3.3000000000000003
Carrinho : Ed 4.4
Carrinho : Ed 5.5
Carrinho : Ed 6.6
Carrinho : Ed 7.699999999999999
Carrinho : Ed 8.799999999999999
Carrinho : Ed 9.899999999999999
Carrinho : Ed 10.999999999999998
Carrinho : Ed 12.099999999999998
Carrinho : Ed 13.199999999999998
Carrinho : Ed 14.299999999999997
Carrinho : Ed 15.399999999999997
Carrinho : Ed 16.499999999999996
Carrinho : Ed 17.599999999999998
Carrinho : Ed 18.7
Carrinho : Ed 19.8
Carrinho : Ed 20.900000000000002
. . .
Carrinho : Paulo 0
Carrinho : Paulo 1.2
Carrinho : Paulo 2.4
Carrinho : Paulo 3.5999999999999996
Carrinho : Paulo 4.8
Carrinho : Paulo 6.0
Carrinho : Paulo 7.2
Carrinho : Paulo 8.4
Carrinho : Paulo 9.6
Carrinho : Paulo 10.799999999999999
Carrinho : Paulo 11.999999999999998
Carrinho : Paulo 13.199999999999998
Carrinho : Paulo 14.399999999999997
Carrinho : Paulo 15.599999999999996
Carrinho : Paulo 16.799999999999997
Carrinho : Paulo 17.999999999999996
Carrinho : Paulo 19.199999999999996
Carrinho : Paulo 20.399999999999995
Note how the first thread printed virtually all of its work before the second thread produced its results. This happens because writing to a file is considerably faster than writing to the screen, so the thread that started first (Ed) gives the operating system fewer opportunities to switch between it and the other one. This result came from running the program on my computer, which has 8 processing cores (8 CPUs). Nothing else was running, so the other CPUs were certainly idle. Our colleague @Caffé commented that this may be due to some difficulty Python has in distributing threads across processors, which may well be true. But perhaps the threads' processing was simply so fast that the OS did not have time to perform any scheduling.
The fact is that, regardless of whether one or more CPUs are available, there is no way to guarantee the exact order in which the threads will be scheduled, since that is up to the OS. Some OSes and languages allow priorities to be defined, but even so the choice remains up to the OS.
So, if you really want to simulate the cars running within the same time frame, it is more common to use a single thread and handle the scheduling of the "tasks" yourself.
That is what games do, for example. You iterate at a previously established time interval (controlling the duration of each "frame" of the animation, as in the cinema) and execute the movement step of each car instance, which can be implemented with object orientation, for example.
The timing works as follows. Suppose you want each iteration step to last approximately 30 milliseconds (this is merely an arbitrary choice for the sake of the example; I am not saying it is the best or most correct choice, since that depends on other issues that do not matter now). In each step of the loop (the while), you do the following (see the sketch after this list):
- Execute Ed->move() and measure how long that call takes to process.
- Subtract the measured time from the 30 milliseconds and save the result in a variable t.
- Execute Paulo->move() and measure how long that call takes to process.
- Subtract that measured time from the variable t.
- If there is any time left in t, it is a sign that the calls were very fast, so you call sleep with what is left in t in order to wait out the rest of the 30 milliseconds. If there is nothing left, you do not call sleep, because this "frame" has to end immediately to try to compensate for the time the cars' movement took.
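Here is a minimal sketch of that loop in Python (the Carrinho class, the move method and all the names here are my own invention, just to illustrate the fixed time step; this is not your original code):

import time

FRAME = 0.030    # target duration of each step: 30 milliseconds

class Carrinho:
    def __init__(self, nome, passo):
        self.nome = nome
        self.passo = passo
        self.distancia = 0.0

    def move(self):
        self.distancia += self.passo
        print("Carrinho :", self.nome, self.distancia)

ed = Carrinho("Ed", 1.1)
paulo = Carrinho("Paulo", 1.2)

while ed.distancia < 1000 or paulo.distancia < 1000:
    inicio = time.perf_counter()
    if ed.distancia < 1000:
        ed.move()                   # movement step of the first car
    if paulo.distancia < 1000:
        paulo.move()                # movement step of the second car
    t = FRAME - (time.perf_counter() - inicio)
    if t > 0:
        time.sleep(t)               # wait out the rest of the 30 ms
    # if t <= 0, the frame already took too long: start the next one right away

This way each iteration lasts roughly the same 30 ms no matter how long the two move() calls take, so both cars advance over equal time intervals.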
Final note: regardless of what was discussed above, your cars move at different speeds. The first car increases its distance per unit of "time" by 10% at each iteration, while the second car increases it by 20%*. The key point of this explanation is that, to make this movement closer to the real thing, you need to guarantee that the iteration time intervals are always the same for each car. Maybe this already strays from your concern with understanding "threads", but I thought it was useful to explain. :)
* Actually the speed increment is fixed at 1.1 or 1.2, because you use +=. The increment would be a percentage, as described, if you used *=.
It works! I ran it here!
– Ed S
The time.sleep(0.3) is there to slow down the alternation between the "cars" and thus make it easier to see.
– Ed S
It’s hard to understand what’s being asked here. It would be more interesting for you to say what problem you want to solve so that we can evaluate your solution or suggest another.
– Pablo Almeida
I made a program that tries to simulate a two-car race so that the cars seem to run together. They need to cover 1000 meters. Is there a better way to do it? A simpler one?
– Ed S
"Better" way will be subjective (and you run the risk of having the question closed as based on opinion). Simpler will depend on the point of view (in terms of lines of code? in terms of understanding the functioning? in terms of runtime?). I would suggest you take advantage of the visibility of your reward by editing the question to make your doubt clear. It seems to be more in the sense of "Why do cars seem to run together if they are in separate threads?". That is, focus on that doubt.
– Luiz Vieira
@Luiz Vieira, I would like to see another way to do the same thing, using threads
– Ed S
@Luiz Vieira, thanks for the suggestion. I edited the question!
– Ed S
From my point of view, the code is already quite simple. What might help would perhaps be other examples with threads.
– Brumazzi DB
I replied, trying to take advantage of your example to explain the concept of threads. Maybe it will help.
– Luiz Vieira