Job Scripts
Serial Single Threaded
This example illustrates a job script designed to run a simple single-threaded process on a single compute node:
{{{
#!/bin/bash
#SBATCH --job-name=mySerialjob
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --time=0-00:20:00
#SBATCH --mem=3102

cd ${SLURM_SUBMIT_DIR}
module load someApp
someApp
}}}
Explanation
A single-process run requires only 1 node, 1 CPU core, and a single task, as reflected in the example script. We change to the directory from which we submitted the job (${SLURM_SUBMIT_DIR}) so our output is produced there. Then we load the module "someApp" and execute the application.
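To run this example, save the script to a file and submit it with "sbatch". A minimal sketch; the file name serial_job.sh is only an example:
{{{
$ sbatch serial_job.sh    # submit the job; SLURM prints the assigned job ID
$ squeue -u $USER         # check the status of your queued and running jobs
}}}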
Multi-Threaded Single Node
In this example we run an application capable of utilizing multiple threads on a single node (BLAST):
{{{
#!/bin/bash
#SBATCH --job-name=myBLASTjob
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=8
#SBATCH --time=0-01:00:00
#SBATCH --mem=3102

cd ${SLURM_SUBMIT_DIR}
module load BLAST
blastn -num_threads 8 <...>
}}}
Explanation
In this case we still have a single task (our blastn run), but we require 8 CPU cores to accommodate the 8 threads we've specified on the command line. The ellipsis between the angle brackets represents the balance of our command-line arguments.
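To keep the thread count in sync with the requested cores, you can reference the environment variable SLURM exports for the allocation instead of hardcoding the value. A sketch, assuming the same BLAST invocation:
{{{
# ${SLURM_CPUS_PER_TASK} is set by SLURM to the value of --cpus-per-task,
# so the thread count always matches the allocation
blastn -num_threads ${SLURM_CPUS_PER_TASK} <...>
}}}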
Multiple Serial Jobs
Here we demonstrate that it is possible to run multiple copies of one or more applications at once, leveraging SLURM's "srun" command to distribute the tasks across multiple nodes:
{{{
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=2
#SBATCH --cpus-per-task=1
#SBATCH --time=0-01:00:00
#SBATCH --mem=3102

module load someApp
srun -n 2 python myScript.py &
srun -n 2 someApp &
wait
}}}
Explanation
We specify 2 nodes and 2 tasks per node (4 tasks total). Each "srun" command directs that 2 copies of its application should be run; srun works with SLURM to launch and schedule each task across our assigned nodes. The ampersand (&) causes each srun to run "in the background" so that all tasks may be launched in parallel rather than blocking while waiting for other tasks to complete. The "wait" command is a shell built-in, not a SLURM directive; it keeps the script from exiting until all background tasks have completed.
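Each backgrounded srun becomes a separate job step, which can be inspected after the job finishes with "sacct". A sketch; the job ID 12345 is hypothetical:
{{{
$ sacct -j 12345 --format=JobID,JobName,State,Elapsed   # one row per job step
}}}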
MPI Jobs
This is an example of a job script that runs a single MPI application across multiple nodes with distributed memory. It is recommended to use "srun" instead of "mpirun":
{{{
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=14
#SBATCH --ntasks=28
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=1G
#SBATCH --time=0-10:00:00

srun ./my_mpi_app
}}}
Explanation
Two nodes are assigned with 14 tasks per node (28 tasks total). One GB of RAM is allocated per CPU, and srun launches our MPI-based application with one rank per task.
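Before submitting, the application must be compiled against the cluster's MPI stack. A minimal sketch, assuming an OpenMPI module, a C source file my_mpi_app.c, and a script file mpi_job.sh (all hypothetical names that vary by site):
{{{
$ module load OpenMPI               # load the cluster's MPI toolchain
$ mpicc -o my_mpi_app my_mpi_app.c  # compile the executable used in the script above
$ sbatch mpi_job.sh                 # submit the job script shown above
}}}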
CategoryHPC