SBATCH Options
Overview
The following table lists common options for SBATCH and SRUN. They can be used within job scripts or given directly on the command line.
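For example, a one-hour walltime request can appear either as an #SBATCH directive inside the job script or as a flag on the command line. This is only a minimal sketch; the script name my_job.sh and the hostname command stand in for real work:

    #!/bin/bash
    #SBATCH --time=0-01:00:00    # requested walltime of one hour
    #SBATCH --job-name=myJob     # optional job name

    hostname                     # replace with the real work

The same option passed on the command line takes precedence over the directive in the script:

    sbatch --time=0-02:00:00 my_job.sh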
Common SBATCH and SRUN Options
Flag Syntax | Description | Notes
--- | --- | ---
--time=0-01:00:00 | Requested walltime | Default=0-00:01:00
--nodes=2 | Number of nodes for the job | Default=1
--job-name=myJob | Name of your job | Optional
--ntasks=8 | Number of parallel tasks to run | For MPI jobs and job steps; Default=1
--ntasks-per-node=2 | Number of tasks to start on each node | Default=1
--mem=1gb | Memory requested per node | No default
--cpus-per-task=2 | Number of CPU cores requested per task | For multi-threaded jobs; Default=1
--mem-per-cpu=1gb | Memory requested per CPU core | No default
--account=someLab | Requests access to a private account partition | Optional
--array=1-5 | Submits a job array | Example: 5 array tasks numbered 1-5
--output=test.out | Specifies the stdout file | Default is slurm-<jobid>.out
--partition=general-short | Requests a specific SLURM partition | Optional
--constraint=gpu | Requests nodes with a particular feature | Optional
--mail-user=<address> | Email address for job status notifications | Optional
--mail-type=ALL | When to send job-related email | Must be used with --mail-user
--exclusive | Requests exclusive node use; the job will not share nodes with other jobs | Optional
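Putting several of the options above together, a job script might look like the following. This is only a sketch: the partition general-short and account someLab are the illustrative values from the table, user@example.com is a placeholder, and ./my_mpi_program is a hypothetical executable.

    #!/bin/bash
    #SBATCH --job-name=myJob
    #SBATCH --account=someLab
    #SBATCH --partition=general-short
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=2
    #SBATCH --cpus-per-task=2
    #SBATCH --mem=1gb
    #SBATCH --time=0-01:00:00
    #SBATCH --output=test.out
    #SBATCH --mail-user=user@example.com
    #SBATCH --mail-type=ALL

    # srun launches the 4 parallel tasks (2 nodes x 2 tasks per node)
    srun ./my_mpi_program

Submit the script with sbatch <scriptname>; squeue shows its place in the queue.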
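For --array, SLURM sets the environment variable SLURM_ARRAY_TASK_ID inside each array task, which the script can use to select its own input. A sketch, assuming input files named input_1.txt through input_5.txt and a hypothetical program process_data:

    #!/bin/bash
    #SBATCH --job-name=myArrayJob
    #SBATCH --array=1-5
    #SBATCH --time=0-00:10:00
    #SBATCH --output=array_%A_%a.out   # %A = job ID, %a = array task index

    # each of the 5 array tasks processes its own input file
    ./process_data input_${SLURM_ARRAY_TASK_ID}.txt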
CategoryHPC