Example of a simple MPI script: Hello World MPI

This is an example of a simple MPI program that runs on multiple processors. It demonstrates the use of Slurm's interactive mode and ALICE's MPI setup.

__helloWorldMPI.c__

  #include "mpi.h"
  #include <stdio.h>
  #include <string.h>

  int main (int argc, char *argv[])
  {
      int rank, size, namelen;
      char name[MPI_MAX_PROCESSOR_NAME];

      MPI_Init (&argc, &argv);

      MPI_Comm_size (MPI_COMM_WORLD, &size);   /* total number of MPI tasks */
      MPI_Comm_rank (MPI_COMM_WORLD, &rank);   /* rank of this task */
      MPI_Get_processor_name (name, &namelen); /* host this task is running on */

      printf ("Hello World from rank %d running on %s!\n", rank, name);

      if (rank == 0)
          printf ("MPI World size = %d processes\n", size);

      MPI_Finalize ();

      return 0;
  }

You will need to source the MPI software setup for your shell, then compile and test the code. Here is an example that copies the source file to your home directory, compiles it with mpicc, and tests it from a bash shell.

  [me@nodelogin01~]$ cp /home/rcf-proj/workshop/introSLURM/helloMPI/helloWorldMPI.c ~
  [me@nodelogin01~]$ source /usr/alice/openmp/setup.sh
  [me@nodelogin01~]$ mpicc -o helloWorldMPI helloWorldMPI.c
  [me@nodelogin01~]$ ls -l helloWorldMPI
  -rwxr-xr-x 1 user nobody 8800 Feb 21 14:32 helloWorldMPI
  [me@nodelogin01~]$ salloc --ntasks=30  
  ----------------------------------------
  Begin SLURM Prolog Wed 21 Feb 2018 02:34:35 PM PST
  Job ID:        767
  Username:      user
  Accountname:   lc_alice1
  Name:          bash
  Partition:     quick
  Nodes:         node[001,007]
  TasksPerNode:  15(x2)
  CPUSPerTask:   Default[1]
  TMPDIR:        /tmp/767.quick
  Cluster:       alice
  HSDA Account:  false
  End SLURM Prolog
  ----------------------------------------
  [me@node015~]$ source /usr/alice/openmp/setup.sh
  [me@node015~]$ srun --ntasks=30 --mpi=pmi2 ./helloWorldMPI
  Hello World from rank 10 running on node001!
  Hello World from rank 19 running on node002!
  Hello World from rank 11 running on node003!
  Hello World from rank 3 running on node004!
  Hello World from rank 17 running on node005!
  Hello World from rank 4 running on node006!
  Hello World from rank 7 running on node007!
  Hello World from rank 2 running on node008!
  Hello World from rank 12 running on node009!
  Hello World from rank 21 running on node010!
  Hello World from rank 26 running on node011!
  Hello World from rank 9 running on node012!
  Hello World from rank 13 running on node013!
  Hello World from rank 22 running on node014!
  Hello World from rank 6 running on node015!
  Hello World from rank 5 running on node016!
  Hello World from rank 20 running on node017!
  Hello World from rank 15 running on node018!
  Hello World from rank 18 running on node019!
  Hello World from rank 14 running on node020!
  Hello World from rank 23 running on node851!
  Hello World from rank 28 running on node852!
  Hello World from rank 8 running on node853!
  Hello World from rank 27 running on node854!
  Hello World from rank 16 running on node855!
  Hello World from rank 25 running on node856!
  Hello World from rank 1 running on node857!
  Hello World from rank 29 running on node858!
  Hello World from rank 24 running on node859!
  Hello World from rank 0 running on node860!
  MPI World size = 30 processes
  [me@node015~]$ logout
  salloc: Relinquishing job allocation 767
  [me@nodelogin01~]$        

The srun command runs the helloWorldMPI program as 30 tasks. The Slurm prolog summarizes the job, and most of the fields are self-explanatory: one CPU was used per task, and the job ran across two nodes. Note that for multi-node jobs, the TasksPerNode field shows how the tasks are distributed over the allocated nodes; in this example, 15 tasks ran on each of the two nodes (15(x2)). The ranks print in no particular order, since the tasks run concurrently.
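
The same run can also be submitted non-interactively with sbatch. The following batch script is a minimal sketch based on the session above; the partition name (quick) and the setup-script path are taken from that example and may differ on your system.

  #!/bin/bash
  #SBATCH --job-name=helloWorldMPI
  #SBATCH --ntasks=30
  #SBATCH --cpus-per-task=1
  #SBATCH --partition=quick            # partition taken from the example above; adjust as needed
  #SBATCH --time=00:10:00
  #SBATCH --output=helloWorldMPI.%j.out

  # Set up the MPI environment, as in the interactive session above
  source /usr/alice/openmp/setup.sh

  # Launch 30 MPI tasks; --mpi=pmi2 matches the interactive srun invocation
  srun --ntasks=30 --mpi=pmi2 ./helloWorldMPI

Save the script under a name of your choosing (for example helloWorldMPI.slurm) and submit it with sbatch; the output of all ranks is then collected in the file given by --output.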