From ALICE Documentation
Revision as of 11:51, 17 April 2020
Generic resource requirements
The sbatch command takes several options to specify the resource requirements, of which we list the most commonly used ones below.
$ sbatch --time=2:30:00
For the simplest cases, only the maximum estimated execution time (called "walltime") is really important. Here, the job requests 2 hours and 30 minutes. As soon as the job exceeds the requested walltime, it will be "killed" (terminated) by the job scheduler. There is no harm if you slightly overestimate the maximum execution time. If you omit this option, the queue manager will not complain but will use a default value (one hour on most clusters). If you want to run some final steps (for example, to copy files back) before the walltime limit kills your main process, you have to terminate the main command yourself before the walltime runs out and then copy the files back. See the section Running a command with a maximum time limit for how to do this.
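The pattern can be sketched with the GNU coreutils timeout command (a sketch only; the 25-minute limit and the result file name results.txt are illustrative assumptions, not part of the documentation):

```shell
#!/bin/bash -l
#SBATCH --time=0:30:00

# Run the main command for at most 25 minutes, leaving a safety
# margin before the 30-minute walltime; timeout exits with
# status 124 when it had to kill the command.
timeout 25m ./fibo.sh
if [ $? -eq 124 ]; then
    echo "fibo.sh was stopped before the walltime expired"
fi

# There is still time left within the walltime to copy results
# back (results.txt is a hypothetical output file).
cp results.txt "$SLURM_SUBMIT_DIR"
```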
$ sbatch --mem=4gb
The job requests 4 GB of RAM memory. As soon as the job tries to use more memory, it will be “killed” (terminated) by the job scheduler. There is no harm if you slightly overestimate the requested memory.
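To pick a realistic value rather than a large overestimate, you can check how much memory a finished job actually used with Slurm's accounting tool (a sketch; 1234 is a placeholder job ID, and this assumes job accounting is enabled on the cluster):

```shell
# MaxRSS shows the peak resident memory of each job step and
# ReqMem the amount requested; compare them to tune --mem.
$ sacct -j 1234 --format=JobID,JobName,ReqMem,MaxRSS
```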
$ sbatch --nodes=5 --ntasks-per-node=2
The job requests 5 compute nodes with two tasks on each node (the equivalent of "processors per node", where "processors" here actually means "CPU cores").
$ sbatch --nodes=1 --constraint=westmere
The job requests just one node, but it should have an Intel Westmere processor. A list with site-specific properties can be found in the next section or in the User Portal ("VSC hardware" section) of the VSC website. These options can either be specified on the command line, e.g.
$ sbatch --ntasks=1 --mem=2gb fibo.sh
or in the job script itself using #SBATCH directives, so "fibo.sh" could be modified to:
#!/bin/bash -l
#SBATCH --ntasks=1
#SBATCH --mem=2gb
cd $SLURM_SUBMIT_DIR
./fibo.sh
Note that the resources requested on the command line will override those specified in the Slurm file.
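As an illustration (a sketch; the 4 GB value is arbitrary), submitting the script with an explicit memory request on the command line makes that value take effect:

```shell
# The 4 GB requested here takes precedence over the
# #SBATCH --mem=2gb directive inside the job script.
$ sbatch --mem=4gb fibo.sh
```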