
From ALICE Documentation

Revision as of 14:43, 9 April 2020

Parallel or sequential programs?

Parallel programs

Parallel computing is a form of computation in which many calculations are carried out simultaneously. It is based on the principle that large problems can often be divided into smaller ones, which are then solved concurrently (“in parallel”). Parallel computers can be roughly classified according to the level at which the hardware supports parallelism: multicore computers have multiple processing elements within a single machine, while clusters use multiple computers to work on the same task. Parallel computing has become the dominant computer architecture, mainly in the form of multicore processors.
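The "divide a large problem into smaller ones" idea can be sketched in plain Python (all names here are hypothetical; CPython threads illustrate the structure, but because of the global interpreter lock they do not speed up CPU-bound work the way OpenMP threads or MPI processes do):

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(lo, hi):
    """Solve one small subproblem: sum the squares of the integers in [lo, hi)."""
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    """Divide [0, n) into one chunk per worker and sum the chunks concurrently."""
    bounds = [(w * n // workers, (w + 1) * n // workers) for w in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Each subproblem runs concurrently; the partial results are combined.
        return sum(pool.map(lambda b: partial_sum(*b), bounds))
```

The combining step (the final `sum`) is the sequential part that remains even after the subproblems are parallelised.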

The two parallel programming paradigms most used in HPC are:

  • OpenMP for shared-memory systems (multithreading): on multiple cores of a single node
  • MPI for distributed-memory systems (multiprocessing): on multiple nodes
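The two paradigms differ mainly in how subtasks exchange data. A rough Python analogy (illustration only; OpenMP and MPI are C/C++/Fortran interfaces, and every name below is made up): in the shared-memory model all workers read and write the same data directly, while in the message-passing model each worker keeps its own data and results travel as explicit messages.

```python
import queue
import threading

# Shared memory (OpenMP-style): all threads read and write the same list.
shared = [0] * 4

def fill(i):
    shared[i] = i * i  # the write is directly visible to every other thread

workers = [threading.Thread(target=fill, args=(i,)) for i in range(4)]
for t in workers:
    t.start()
for t in workers:
    t.join()

# Message passing (MPI-style): workers communicate only via explicit messages.
inbox = queue.Queue()

def compute_and_send(i):
    inbox.put((i, i * i))  # no shared state: the result travels as a message

workers = [threading.Thread(target=compute_and_send, args=(i,)) for i in range(4)]
for t in workers:
    t.start()
for t in workers:
    t.join()

received = dict(inbox.get() for _ in range(4))
```

Shared memory is convenient but only works within one node; message passing is more verbose but scales across many nodes, which is why MPI dominates on clusters.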

Parallel programs are more difficult to write than sequential ones because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronisation between the different subtasks are typically some of the greatest obstacles to achieving good parallel program performance.
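A minimal Python sketch of the most common such bug, a race condition (hypothetical example): several threads increment one shared counter, and the lock is what makes each read-modify-write update atomic. Without the lock, two threads can read the same old value and one increment is silently lost.

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    """Increment the shared counter n times, guarding each update with a lock."""
    global counter
    for _ in range(n):
        with lock:        # without this lock, the read-modify-write below can
            counter += 1  # interleave between threads and lose updates

threads = [threading.Thread(target=add_many, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The synchronisation that makes the program correct is also what limits its speed: every thread must wait its turn at the lock, which is exactly the kind of obstacle the paragraph above describes.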

Sequential programs

Sequential software does not do calculations in parallel; it uses only a single core of a single worker node. Throwing more cores at it does not make it faster, because it can only ever use one.

It is perfectly possible to also run purely sequential programs on the HPC.

Running your sequential programs on the most modern and fastest computers in the HPC can save you a lot of time. It may also be possible to run multiple instances of your program (e.g., with different input parameters) on the HPC in order to solve one overall problem (e.g., to perform a parameter sweep). This is another form of running your sequential programs in parallel.
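On a real cluster such a sweep is usually launched through the scheduler, one job per parameter value, but the idea can be sketched in plain Python. Here `simulate` is a hypothetical stand-in for your sequential program, and each call is an independent run with one parameter value:

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(alpha):
    """Stand-in for one run of a sequential program with one parameter value."""
    return sum(alpha * i for i in range(100))

alphas = [0.5, 1.0, 1.5, 2.0]  # the parameter values to sweep over
with ThreadPoolExecutor(max_workers=4) as pool:
    # Each run is purely sequential; only the sweep itself is parallel.
    results = dict(zip(alphas, pool.map(simulate, alphas)))
```

Because the runs are completely independent, this kind of workload ("embarrassingly parallel") needs no communication or synchronisation at all and scales very well.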