Job scripts
The content of this section is deprecated and is awaiting revision.
After determining your workflow and the compute resources it needs, you can create a job script and submit it. To submit a script for a batch run, use the sbatch command:
sbatch <job_script>
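When the job is accepted, sbatch replies with the ID assigned to it (a line like Submitted batch job 123456), and you can then monitor the job with standard Slurm tools such as squeue. A minimal illustration; the script name and job ID here are made up:
sbatch my_job.sh
squeue -j 123456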
Here is a sample job script. We'll break it down line by line so you can see how a script is put together.
#!/bin/bash
#SBATCH --ntasks=8
#SBATCH --time=01:00:00
cd /home/rcf-proj/tt1/test/
source /usr/alice/python/3.6.0/setup.sh
python my.py
In general, a job script can be split into three parts:
Line 1: Interpreter
#!/bin/bash
- Specifies the shell that will interpret the commands in your script. Here, the bash shell is used.
- To avoid confusion, this should match your login shell (you can check it as shown below).
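If you are not sure which login shell you have, you can print it on the login node; $SHELL is a standard environment variable that holds your login shell:
echo $SHELL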
Lines 2-3: Slurm options
#SBATCH --ntasks=8
#SBATCH --time=01:00:00
- Request cluster resources.
- Lines that begin with #SBATCH are ignored by the interpreter but read by the job scheduler.
- #SBATCH --ntasks=<number>: specifies the number of tasks (processes) that will run in this job. In this example, 8 tasks will run.
- #SBATCH --time=<hh:mm:ss>: sets the maximum runtime for the job. In this example, the maximum runtime is 1 hour.
NOTE: Since 8 processor cores are requested for a maximum runtime of 1 hour, the job will consume up to 8 core-hours. Core-hours are the unit of measurement the job scheduler uses to keep track of compute time usage.
We recommend adding #SBATCH --export=NONE to establish a clean environment; otherwise, Slurm will propagate your current environment variables to the job. This could affect the behaviour of the job, particularly for MPI jobs.
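With this recommendation applied, the option block of the sample script would look like this (same resources as above, plus a clean environment; --export=NONE is a standard Slurm option):
#SBATCH --ntasks=8
#SBATCH --time=01:00:00
#SBATCH --export=NONE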
Lines 4-6: Job commands
cd /home/rcf-proj/tt1/test/
source /usr/alice/python/3.6.0/setup.sh
python my.py
- These lines provide the sequence of commands needed to run your job.
- These commands will be executed on the allocated resources.
- cd /home/rcf-proj/tt1/test/: Changes the working directory to /home/rcf-proj/tt1/test/.
- source /usr/alice/python/3.6.0/setup.sh: Prepares the environment to run Python 3.6.0.
- python my.py: Runs the program on the allocated resources. In this example, it runs Python with my.py in the working directory /home/rcf-proj/tt1/test/ as the argument.
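Putting the pieces together, a complete script based on this example might look as follows. The --job-name and --output options are standard Slurm options; the specific job name, output file name, and the %j placeholder (which Slurm expands to the job ID) are illustrative additions, not part of the original example:
#!/bin/bash
#SBATCH --job-name=test_job
#SBATCH --output=test_job.%j.out
#SBATCH --ntasks=8
#SBATCH --time=01:00:00
#SBATCH --export=NONE

# Move to the directory containing the program and its input
cd /home/rcf-proj/tt1/test/

# Prepare the environment to run Python 3.6.0
source /usr/alice/python/3.6.0/setup.sh

# Run the program on the allocated resources
python my.py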