Parallel computing
Julia supports the following four categories of parallel programming:
-
Asynchronous "tasks", or coroutines.
Tasks in Julia allow you to suspend and resume computations for I/O, event handling, producer-consumer processes, and similar patterns. Tasks can synchronize through operations such as `wait` and `fetch`, and exchange data through channels (`Channel`). Although this in itself is not strictly parallel computing, Julia allows you to schedule `Task` objects on multiple threads.
-
Multithreading.
Multithreading in Julia makes it possible to schedule tasks for simultaneous execution on multiple threads or CPU cores with shared memory. This is usually the easiest way to get parallelism on your own computer or on a single large multi-core server. Multithreading in Julia is composable: when one multithreaded function calls another multithreaded function, Julia schedules all the threads globally on the available resources without oversubscribing them.
-
Distributed computing.
Distributed computing runs multiple Julia processes with separate memory spaces. Processes can run on a single computer or on several. The Distributed standard library enables remote execution of a Julia function. With this basic building block, you can construct many different distributed-computing abstractions. The package https://github.com/JuliaParallel/DistributedArrays.jl [DistributedArrays.jl] is an example of such an abstraction. In turn, packages such as https://github.com/JuliaParallel/MPI.jl [MPI.jl] and https://github.com/JuliaParallel/Elemental.jl [Elemental.jl] provide access to the existing ecosystem of MPI libraries.
-
GPU computing.
The Julia GPU compiler makes it possible to run Julia code natively on the GPU. There is a rich ecosystem of Julia packages targeting GPUs. The https://juliagpu.org [JuliaGPU.org] website provides a list of capabilities, supported GPUs, related packages, and documentation.
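To make the first category concrete, here is a minimal producer-consumer sketch using a `Channel`, together with `fetch` on a task. The function and variable names (`producer`, `squares`, `total`) are illustrative, not part of any API.

```julia
# A producer task that suspends at each put! until a consumer takes the value.
function producer(ch::Channel)
    for i in 1:5
        put!(ch, i^2)
    end
end

# Binding the producer to a channel spawns it as a task; the channel is
# closed automatically when the task finishes.
ch = Channel{Int}(producer)
squares = collect(ch)        # consume all values: [1, 4, 9, 16, 25]

# wait and fetch synchronize with a task directly.
t = @async begin
    sleep(0.1)
    sum(squares)
end
total = fetch(t)             # blocks until the task finishes, returns 55
```

Note that nothing here runs in parallel yet: the producer and consumer cooperatively hand control back and forth within one thread.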
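The multithreading model can be sketched with `Threads.@spawn`. This is a hypothetical chunked sum, not a library function; it produces the same result regardless of how many threads Julia was started with (`--threads`), falling back to a single thread by default.

```julia
using Base.Threads

# Split the input into one chunk per thread and sum the chunks in parallel.
# Each @spawn creates a task that the scheduler places on an available thread.
function threaded_sum(xs)
    nchunks = max(nthreads(), 1)
    chunks = Iterators.partition(xs, cld(length(xs), nchunks))
    tasks = [@spawn sum(chunk) for chunk in chunks]
    return sum(fetch.(tasks))
end

threaded_sum(1:1_000)   # == 500500, independent of the thread count
```

Because the tasks share memory, no data needs to be copied between them; the cost of this convenience is that writes to shared state must be synchronized.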
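For the distributed category, a minimal sketch of remote execution with the Distributed standard library might look as follows; launching two local workers and the function name `square` are illustrative choices, not requirements.

```julia
using Distributed

addprocs(2)                        # launch two worker processes on this machine

@everywhere square(x) = x^2        # define the function on every process

# Low-level remote call: run square(3) on the first worker and fetch the result.
fut = remotecall(square, first(workers()), 3)
r = fetch(fut)                     # 9

# Higher-level: pmap distributes the work across all workers.
results = pmap(square, 1:5)        # [1, 4, 9, 16, 25]
```

Unlike threads, each worker has its own memory space, so arguments and results are serialized and sent between processes; this is what allows the same code to scale from one machine to a cluster.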
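Finally, a hedged sketch of GPU computing using the CUDA.jl package (one of the JuliaGPU packages). It assumes an NVIDIA GPU with a working CUDA installation and will not run without one; the kernel name `axpy!` is illustrative.

```julia
using CUDA   # assumes the CUDA.jl package and an NVIDIA GPU

# Array-level programming: operations on CuArray values execute on the GPU.
a = CUDA.fill(1.0f0, 1024)
b = 2f0 .* a .+ 1f0          # this broadcast is compiled into a GPU kernel

# Kernel-level programming: an ordinary Julia function compiled for the GPU.
function axpy!(y, x, alpha)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(y)
        @inbounds y[i] += alpha * x[i]
    end
    return nothing
end

@cuda threads=256 blocks=4 axpy!(b, a, 3f0)
```

Other backends (e.g. AMDGPU.jl, Metal.jl, oneAPI.jl) follow the same two-level pattern of array operations plus custom kernels.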