Parallel or sequential programs?

From ALICE Documentation

Parallel programs

Parallel computing is a form of computation in which many calculations are carried out simultaneously. It is based on the principle that large problems can often be divided into smaller ones, which are then solved concurrently ("in parallel"). Parallel computers can be roughly classified according to the level at which the hardware supports parallelism: multi-core computers have multiple processing elements within a single machine, while clusters use multiple computers to work on the same task. Parallel computing has become the dominant computer architecture, mainly in the form of multi-core processors.

The two parallel programming paradigms most used in HPC are:

  • OpenMP for shared-memory systems (multithreading): runs on multiple cores of a single node
  • MPI for distributed-memory systems (multiprocessing): runs on multiple nodes

Parallel programs are more difficult to write than sequential ones because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically among the greatest obstacles to achieving good parallel performance.

Sequential programs

Sequential software does not perform calculations in parallel: it uses only a single core of a single worker node. Throwing more cores at it does not make it faster, because it can only ever use one core.

It is perfectly possible to also run purely sequential programs on ALICE.

Running your sequential programs on the modern, fast hardware in ALICE can already save you a lot of time. It may also be possible to run multiple instances of your program (e.g., with different input parameters) on ALICE in order to solve one overall problem (e.g., to perform a parameter sweep). This is another way of running sequential programs in parallel.