There is no restriction on using multiprocessing from within a process that was itself created by multiprocessing. But your system is doing exactly what you tell it to do:
Inside calc_cube it sleeps for one second, prints the cube once, starts the calc_square process, and then waits for calc_square to finish. Since that other process sits in an infinite loop and never ends, only it will keep printing answers. Where is the surprise?
If you reorder the instructions inside calc_cube so they make sense: create a single calc_square process (instead of trying to create a new process on every loop iteration), and keep printing your results without waiting for the other process, you will see the two outputs happening simultaneously:
```python
import multiprocessing
import time

def calc_cube():
    p2 = multiprocessing.Process(target=calc_square)
    p2.start()  # starts a single instance of the other process
    while True:
        time.sleep(1)
        print("cube: %i" % (2 * 2 * 2))
    p2.join()  # this line will never be reached, since it is outside the while.
               # If calc_cube is changed so it can end, then it makes sense.
```
Done: this will work and you can test it on your PC; there is no need to bother putting it on the Raspberry Pi.
Now, keep in mind that you will not gain many advantages from strategies of this kind: multiprocessing pays off when you have a multi-core CPU, and even then only up to roughly 1 process per core for computation-intensive algorithms, or 2 processes per core at the very most. The operating system resources that processes consume are quite large, and you will do better, especially on a single-core machine, with asyncio: a single thread quickly switching between pieces of code that always have something to do, while other parts of the code await answers from I/O.
I wrote an answer here that covers code parallelization in Python at length: What is Global Interpreter Lock (GIL)?