SLURM Job Scripts

After determining what your workflow will be and the compute resources needed, you can create a job script and submit it. To submit a script for a batch run, use the sbatch command:

sbatch <job_script>
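
For example, assuming your script is saved in a file called my_job.slurm (an illustrative name), submission and the scheduler's confirmation would look like this:

sbatch my_job.slurm
Submitted batch job 123456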

Here is a sample job script. We'll break it down, line by line, so you can see how a script is put together.

#!/bin/bash
#SBATCH --ntasks=8
#SBATCH --time=01:00:00
          
cd /home/rcf-proj/tt1/test/ 
source /usr/alice/python/3.6.0/setup.sh 
python my.py 

In general, a job script can be split into three parts:

Line 1: Interpreter

#!/bin/bash
  • Specifies the shell that will interpret the commands in your script. Here, the bash shell is used.
  • To avoid confusion, this should match your login shell; the snippet below shows how to check which shell that is.
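
You can check your login shell with a standard shell command (nothing ALICE-specific is assumed here):

echo $SHELL
/bin/bash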

Lines 2-3: Slurm options

#SBATCH --ntasks=8
#SBATCH --time=01:00:00
  • Request cluster resources.
  • Lines that begin with #SBATCH will be ignored by the interpreter and read by the job scheduler.
  • #SBATCH --ntasks=<number>: specifies the number of tasks (processes) that will run in this job. In this example, 8 tasks will run.
  • #SBATCH --time=<hh:mm:ss>: sets the maximum runtime for the job. In this example, the maximum runtime is 1 hour. A few other commonly used options are sketched after this list.
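
Many other resource requests follow the same #SBATCH pattern. A sketch of a header with a few commonly used options (the values shown are illustrative, not recommendations):

#SBATCH --job-name=my_job         # name shown in the queue
#SBATCH --output=my_job.%j.out    # file for the job's output; %j expands to the job ID
#SBATCH --partition=<partition>   # partition (queue) to submit the job to
#SBATCH --mem=4G                  # total memory for the job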

NOTE: Since 8 processor cores in total are being requested for a maximum of 1 hour, the job will consume up to 8 core-hours. Core-hours are the unit of measurement that the job scheduler uses to keep track of compute time usage.
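
The accounting is simply cores multiplied by wall-clock hours, so, for example:

8 tasks x 1 hour  = 8 core-hours
4 tasks x 2 hours = 8 core-hours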

We recommend that you use #SBATCH --export=NONE to establish a clean environment; otherwise, Slurm will propagate your current environment variables to the job. This could impact the behaviour of the job, particularly for MPI jobs.
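
Added to the header of the sample script, the option would appear alongside the other Slurm options:

#!/bin/bash
#SBATCH --ntasks=8
#SBATCH --time=01:00:00
#SBATCH --export=NONE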

Lines 4-6: Job commands

cd /home/rcf-proj/tt1/test/ 
source /usr/alice/python/3.6.0/setup.sh 
python my.py
  • These lines provide the sequence of commands needed to run your job.
  • These commands will be executed on the allocated resources.
  • cd /home/rcf-proj/tt1/test/: Changes the working directory to /home/rcf-proj/tt1/test/.
  • source /usr/alice/python/3.6.0/setup.sh: Prepares the environment to run Python 3.6.0.
  • python my.py: Runs the program on the allocated resources. In this example it runs Python with my.py, located in the current directory /home/rcf-proj/tt1/test/, as the argument. A complete script assembled from all three parts is sketched below.
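
Putting the three parts together, with the recommended --export=NONE option added, a complete version of the sample script looks like this:

#!/bin/bash
#SBATCH --ntasks=8
#SBATCH --time=01:00:00
#SBATCH --export=NONE

cd /home/rcf-proj/tt1/test/
source /usr/alice/python/3.6.0/setup.sh
python my.py

Save these lines to a file and submit it with sbatch, as described at the top of this page.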