General/Batch
Example batch scripts
Basic batch job
#!/bin/bash
#SBATCH -J jobname      # Job name, you can change it to whatever you want
#SBATCH -n 1            # Number of cores
#SBATCH -o %N.%j.out    # Standard output will be written here
#SBATCH -e %N.%j.err    # Standard error will be written here
#SBATCH -p compute      # Slurm partition, where you want the job to be queued

module purge
module add modulename

command
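Assuming the script above is saved as jobscript.sh (an example filename), it is submitted to the queue with sbatch, and its progress can then be checked with squeue:

sbatch jobscript.sh    # Submit the script; Slurm prints the assigned job ID
squeue -u $USER        # List your queued and running jobs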
Exclusive batch job
#!/bin/bash
#SBATCH -J jobname      # Job name, you can change it to whatever you want
#SBATCH -o %N.%j.out    # Standard output will be written here
#SBATCH -e %N.%j.err    # Standard error will be written here
#SBATCH -p compute      # Slurm partition, where you want the job to be queued
#SBATCH --exclusive     # Request exclusive access to a node (all 28 cores, 128GB of RAM)

module purge
module add modulename

command
Intel MPI parallel batch job
#!/bin/bash
#SBATCH -J jobname      # Job name, you can change it to whatever you want
#SBATCH -n 1            # Number of cores (MPI tasks)
#SBATCH -N 1            # Number of nodes
#SBATCH -o %N.%j.out    # Standard output will be written here
#SBATCH -e %N.%j.err    # Standard error will be written here
#SBATCH -p compute      # Slurm partition, where you want the job to be queued

module purge
module add modulename

mpirun -np $SLURM_NTASKS command    # Launch the MPI program across the allocated cores
MVAPICH parallel batch job
#!/bin/bash
#SBATCH -J jobname      # Job name, you can change it to whatever you want
#SBATCH -n 1            # Number of cores (MPI tasks)
#SBATCH -N 1            # Number of nodes
#SBATCH -o %N.%j.out    # Standard output will be written here
#SBATCH -e %N.%j.err    # Standard error will be written here
#SBATCH -p compute      # Slurm partition, where you want the job to be queued

module purge
module add modulename

mpirun -np $SLURM_NTASKS command    # Launch the MPI program across the allocated cores
OpenMPI parallel batch job
#!/bin/bash
#SBATCH -J jobname      # Job name, you can change it to whatever you want
#SBATCH -n 1            # Number of cores (MPI tasks)
#SBATCH -N 1            # Number of nodes
#SBATCH -o %N.%j.out    # Standard output will be written here
#SBATCH -e %N.%j.err    # Standard error will be written here
#SBATCH -p compute      # Slurm partition, where you want the job to be queued

module purge
module add modulename

mpirun -np $SLURM_NTASKS command    # Launch the MPI program across the allocated cores
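In the three MPI examples above, mpirun -np $SLURM_NTASKS command is shown as a generic launch line: Slurm sets $SLURM_NTASKS to the number of tasks requested with -n, so the launch line does not need editing when the core count changes. Depending on how the MPI library was built against Slurm, srun command may be used in place of mpirun.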
High memory batch job
#!/bin/bash
#SBATCH -J jobname      # Job name, you can change it to whatever you want
#SBATCH -n 1            # Number of cores
#SBATCH -o %N.%j.out    # Standard output will be written here
#SBATCH -e %N.%j.err    # Standard error will be written here
#SBATCH -p highmem      # Slurm partition for the high memory nodes
#SBATCH --mem=<memory>  # Amount of memory required, e.g. 200G

module purge
module add modulename

command
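To judge how much memory a job needs, sacct can report the peak memory use (MaxRSS) of a completed job; <jobid> below is a placeholder for the job ID reported at submission:

sacct -j <jobid> --format=JobID,MaxRSS,Elapsed    # Peak memory and run time per job step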
GPU batch job
#!/bin/bash
#SBATCH -J jobname      # Job name, you can change it to whatever you want
#SBATCH -n 1            # Number of cores
#SBATCH -o %N.%j.out    # Standard output will be written here
#SBATCH -e %N.%j.err    # Standard error will be written here
#SBATCH -p gpu          # Slurm partition for the GPU nodes
#SBATCH --gres=gpu:1    # Request one GPU for the job

module purge
module add modulename

command
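Assuming the GPU partition is named gpu as above, the state of its nodes can be checked before submitting:

sinfo -p gpu    # Show the availability of nodes in the gpu partition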
Array batch job
An array batch job allows multiple jobs to be executed with identical parameters from a single job submission. By using the directive #SBATCH --array 1-10, the same job will be run 10 times. The index specification identifies which array index values should be used. Multiple values may be specified using a comma-separated list and/or a range of values with a "-" separator. For example, "--array=0-15" or "--array=0,6,16-32".
A step function can also be specified with a suffix containing a colon and a number. For example, "--array=0-15:4" is equivalent to "--array=0,4,8,12". A maximum number of simultaneously running tasks from the job array may be specified using a "%" separator. For example, "--array=0-15%4" will limit the number of simultaneously running tasks from this job array to 4. These forms are illustrated below.
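Each line below shows one of these forms as it would appear in a batch script (a job uses only one --array directive; these are alternatives, not a combination):

#SBATCH --array=0-15          # Indexes 0 through 15
#SBATCH --array=0,6,16-32     # A comma-separated list combined with a range
#SBATCH --array=0-15:4        # Step of 4, equivalent to 0,4,8,12
#SBATCH --array=0-15%4        # At most 4 tasks running simultaneously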
The variable $SLURM_ARRAY_TASK_ID can be used within the batch script; for each task it is set to that task's array index, so it can be used, for example, to build per-task input or data filenames.
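As a hypothetical illustration (input_1.dat, input_2.dat, and so on are example filenames), each task could process a different input file:

command input_${SLURM_ARRAY_TASK_ID}.dat    # e.g. task 3 runs: command input_3.dat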
When the batch script below is submitted, 10 tasks will run, resulting in the command being run with its first argument set to the array index of each task, for instance: command 1, command 2, through to command 10. The output of each task is logged to its own out and err file, with %N.%A.%a in the script expanding to <node job ran on>.<job ID>.<array index>, giving filenames of the form <node job ran on>.<job ID>.<array index>.out and <node job ran on>.<job ID>.<array index>.err.
#!/bin/bash
#SBATCH -J jobname      # Job name, you can change it to whatever you want
#SBATCH -n 1            # Number of cores
#SBATCH -o %N.%A.%a.out # Standard output will be written here
#SBATCH -e %N.%A.%a.err # Standard error will be written here
#SBATCH -p compute      # Slurm partition, where you want the job to be queued
#SBATCH --array 1-10

module purge
module add modulename

command $SLURM_ARRAY_TASK_ID
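Assuming the script is saved as arrayjob.sh (an example filename), a single submission queues all ten tasks; with a hypothetical job ID of 12345 running on node c001, task 1 would write c001.12345.1.out and c001.12345.1.err, and so on up to task 10:

sbatch arrayjob.sh    # Queues array tasks 12345_1 through 12345_10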