
SLURM-Job Scripts

From ALICE Documentation


Revision as of 12:09, 8 April 2020

Job Scripts

After determining what your workflow will be and the compute resources needed, you can create a job script and submit it. To submit a script for a batch run you can use the command sbatch as in:

sbatch <job_script>
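For example, if the sample script below were saved as `my_job.slurm` (a hypothetical filename), submission and a quick status check could look like this; `sbatch` and `squeue` are standard Slurm commands, but the job ID shown is illustrative and the commands require access to a Slurm cluster:

```shell
sbatch my_job.slurm   # submit the job script to the scheduler
# Submitted batch job 123456
squeue -u $USER       # list your pending and running jobs
```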

Here is a sample job script. We'll break this sample script down, line by line, so you can see how a script is put together.

#!/bin/bash
#SBATCH --ntasks=8
#SBATCH --time=01:00:00

cd /home/rcf-proj/tt1/test/
source /usr/alice/python/3.6.0/setup.sh
python my.py

In general, a job script can be split into three parts:

Interpreter

#!/bin/bash
  • Specifies the shell that will be interpreting the commands in your script. Here, the bash shell is used.
  • To avoid confusion, this should match your login shell.
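To see which login shell your account uses (so the interpreter line can match it), you can query the passwd database; `getent` and `cut` are standard Linux tools, not ALICE-specific, and the output depends on your account:

```shell
# Look up the login shell recorded for the current user in the passwd database;
# the interpreter line (#!) of the job script should normally match this.
getent passwd "$(id -un)" | cut -d: -f7
```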

Options

#SBATCH --ntasks=8
#SBATCH --time=01:00:00
  • Request cluster resources.
  • Lines that begin with #SBATCH are ignored by the interpreter and read by the job scheduler.
  • #SBATCH --ntasks=<number>: specifies the number of tasks (processes) that will run in this job. In this example, 8 tasks will run.
  • #SBATCH --time=<hh:mm:ss>: sets the maximum runtime for the job. In this example, the maximum runtime is 1 hour.

NOTE: Since 8 processor cores in total are being requested, the job will consume 8 core-hours. This is the unit of measurement that the job scheduler uses to keep track of compute time usage.
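As a quick sanity check of that accounting, core-hours are simply the number of tasks multiplied by the wall-clock hours requested; the variable names below are illustrative, with values taken from the sample script:

```shell
# Core-hours charged = number of tasks x requested wall-clock hours.
# Values taken from the sample script above (8 tasks, 1 hour).
ntasks=8
hours=1
echo "$((ntasks * hours)) core-hours"   # prints: 8 core-hours
```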

We recommend that you use #SBATCH --export=NONE to establish a clean environment; otherwise, Slurm will propagate your current environment variables to the job. This could affect the behaviour of the job, particularly for MPI jobs.
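Applying that recommendation to the sample script gives a header like the following (paths and values copied from the example above; whether a clean environment suits your job depends on your setup):

```shell
#!/bin/bash
#SBATCH --ntasks=8
#SBATCH --time=01:00:00
#SBATCH --export=NONE    # start the job with a clean environment

cd /home/rcf-proj/tt1/test/
source /usr/alice/python/3.6.0/setup.sh
python my.py
```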

Job commands

cd /home/rcf-proj/tt1/test/
source /usr/alice/python/3.6.0/setup.sh
python my.py
  • These lines provide the sequence of commands needed to run your job.
  • These commands will be executed on the allocated resources.
  • cd /home/rcf-proj/tt1/test/: Changes the working directory to /home/rcf-proj/tt1/test/.
  • source /usr/alice/python/3.6.0/setup.sh: Prepares the environment to run Python 3.6.0.
  • python my.py: Runs the program on the allocated resources. In this example, it runs Python with the script my.py in the current directory, /home/rcf-proj/tt1/test/, as the argument.