What is the difference between async, multithreading, parallelism and concurrency?
Async
It is short for asynchronous. The word can be a bit confusing, but it means that a task will not necessarily be executed immediately: it is scheduled and runs when resources are available and it is convenient.
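To make "scheduled now, executed later" concrete, here is a minimal sketch in Python (Python rather than the VB/C# mentioned later, purely for illustration) using asyncio; the task names are made up:

```python
import asyncio

async def task(name: str) -> str:
    # The coroutine body only runs when the event loop gets around to it.
    await asyncio.sleep(0)  # yield control back to the scheduler
    return f"done: {name}"

async def main() -> list:
    # create_task() only *schedules* the coroutine; nothing runs here yet.
    t1 = asyncio.create_task(task("a"))
    t2 = asyncio.create_task(task("b"))
    # Execution actually happens when the scheduled tasks are awaited.
    return [await t1, await t2]

results = asyncio.run(main())
print(results)  # ['done: a', 'done: b']
```

The point is the separation between scheduling (`create_task`) and execution (`await`): the call site does not block while waiting.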
Multithreading
It is the act of executing multiple threads, which are, fundamentally, blocks of code. Multithreading is only possible with parallelism.
Parallelism
It is the act of executing processes "at the same time".
Parallelism can be:
By time division: a processor runs a process for a "quantum" of time and then switches to another process, and this repeats for all the processes scheduled by the operating system. This kind of parallelism gives the illusion that the processes are being executed at the same time. When a process is not using all of its processor time, it is possible to parallelize tasks this way.
With multiple processors (real parallelism): each processor runs a process, all of them at the same time, usually with shared memory. This can create a memory bottleneck if memory use is intense. There are solutions such as multi-channel memory, where memory can also be accessed in parallel, reducing the bottleneck. Each processor still uses time division.
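A minimal sketch of the "multiple processors" case, using Python's standard multiprocessing module (the pool size of 4 is an arbitrary choice for illustration; whether the workers really land on separate cores is up to the operating system):

```python
import multiprocessing as mp

def square(n: int) -> int:
    # Each worker process runs on whichever core the OS schedules it on.
    return n * n

def parallel_squares(values) -> list:
    # A pool of worker processes: real parallelism when multiple cores
    # exist, time-division scheduling when they do not.
    with mp.Pool(processes=4) as pool:
        return pool.map(square, list(values))

if __name__ == "__main__":
    print(parallel_squares(range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Note the `__main__` guard: on platforms where child processes are spawned by re-importing the module, it prevents the pool from being created recursively.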
Concurrency
Concurrency exists whenever there is parallelism; the word is an analogy to a "dispute over resources".
It is the role of the supervisor (the operating system) to manage resources at its convenience.
Note: Virtual machines are called Hypervisors because they are supervisors of supervisors.
Do they depend on the number of processor cores?
Certainly yes: operating systems in general are multitasking, support multi-core processors, and manage the use of these resources.
Note: Hyper-Threading is a technology where the operating system "sees" more processors than actually exist. What is the explanation? These processors have a technology that allows tasks to be scheduled very efficiently; strictly speaking, the extra processors the operating system "sees" are actually task schedulers. Applications that make intense use of multithreading, with many scheduled tasks, gain some performance from this (servers that serve many clients are one example).
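You can see the processor count the operating system exposes from any language; a one-liner in Python, for illustration (the standard library reports only the logical count, so on a Hyper-Threading machine it is typically double the physical core count):

```python
import os

# os.cpu_count() reports *logical* processors: with Hyper-Threading,
# a 4-core CPU typically shows up here as 8.
logical = os.cpu_count()
print(f"logical processors visible to the OS: {logical}")
```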
If I write a program in Visual Basic and open 33 instances of it, would they be running in parallel?
Yes. The operating system does this job for you when more than one instance is opened; that is how it is able to run more than one application at the same time.
... would be 33 times faster?
No. At best, runtime shrinks in proportion to the number of real processors, so 33 processes on a machine with fewer than 33 cores cannot run 33 times faster.
This assumes your program uses the processor intensively.
In fact, the 33 processes would be disputing the time of the real processors.
When using many threads there is no real gain, as the system will spend more time scheduling tasks and performing resource-access locks (mutexes).
Even Hyper-Threading does not help in this case.
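The "dispute over resources" is easy to demonstrate. A small Python sketch with 33 threads incrementing one shared counter through a mutex: the result is correct, but every increment queues on the lock, so the threads mostly take turns instead of running 33 times faster (the iteration count is arbitrary):

```python
import threading

counter = 0
lock = threading.Lock()  # the "mutex" that serializes access

def work(iterations: int) -> None:
    global counter
    for _ in range(iterations):
        # Every increment must acquire the lock, so the 33 threads
        # end up waiting on each other instead of gaining speed.
        with lock:
            counter += 1

threads = [threading.Thread(target=work, args=(1000,)) for _ in range(33)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 33000: correct, but serialized by the lock
```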
... would it be better than running the program once using async in C#?
Just using async means nothing by itself; you would need to implement the 33 tasks in the same program. Even if you do, it will not make a big difference. The advantage of doing it that way is sharing data within the same process, without resorting to sockets, IPC, or shared memory, which add complexity to development and debugging.
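A sketch of "the 33 tasks in the same program, sharing data in the same process", again in Python rather than C# (the `asyncio.sleep(0)` stands in for real I/O such as a network request):

```python
import asyncio

shared: list = []  # state shared inside one process: no sockets or IPC needed

async def task(i: int) -> None:
    await asyncio.sleep(0)   # stand-in for real I/O (network, disk, ...)
    shared.append(i)

async def main() -> None:
    # Run the 33 "instances" as tasks inside a single program.
    await asyncio.gather(*(task(i) for i in range(33)))

asyncio.run(main())
print(len(shared))  # 33
```

All 33 tasks read and write `shared` directly because they live in one process; 33 separate OS processes would need an explicit communication channel to do the same.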
There is also the concept of a computational cluster, which consists of several machines working in parallel, each with its own operating system. This is the fastest type of parallelism, but it is the most expensive, and it is the hardest to keep the processes synchronized.
Graphics cards are an example of an architecture that makes good use of parallelism: an average graphics processor (by current standards) has a clock on the order of 1 GHz, lower than a CPU's, but it has many cores.
Despite being a different architecture, it is possible to use the GPU as a processor and take advantage of its computational power.
https://pt.wikipedia.org/wiki/OpenCL
https://pt.wikipedia.org/wiki/CUDA
It is worth remembering that it takes some analysis to know whether your processing is parallelizable, that is, whether there are dependencies between intermediate results.
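A tiny Python illustration of that dependency analysis: squaring elements is parallelizable because each result is independent, while a running total is not parallelizable as written, because each step needs the previous one:

```python
# Parallelizable: each element is independent of the others,
# so the work could be split across cores in any order.
squares = [n * n for n in range(10)]

# NOT parallelizable as written: each step depends on the previous result.
acc = 0
history = []
for n in range(10):
    acc = acc + n        # depends on the previous value of acc
    history.append(acc)
print(history[-1])  # 45
```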
OK, thanks for the various links; I still need to study and digest your entire answer. But remember that a download takes longer because of network limitations than because of the CPU, so in that case I think parallelism taken to the extreme is worthwhile, even on a PC with 4 cores.
– Dorathoto
In that case the parallelism will make little or no difference. The asynchronicity is what matters.
– Maniero