General/Batch
Revision as of 12:08, 19 February 2018
Introduction
Viper uses the Slurm job scheduler to provide access to compute resources. The scheduler allocates jobs to the appropriate compute nodes according to the resources requested in a submission script.
A submission script is a file that provides information to Slurm about the task you are running so that it can be allocated to the appropriate resource, then sets up the environment so the task can run. A minimal submission script has three main components:
- A set of directives, starting with #SBATCH, which tell the scheduler about the job, such as the resources required, the job name, and the log and error files. In a normal BASH script anything starting with '#' would be a comment; however, Slurm recognises lines beginning #SBATCH as directives to pass to the scheduler.
- Information about how the job environment should be set up, for example what application modules should be loaded.
- The actual command(s) that need to be run.
Linux and Slurm do not care what a submission script is called; however, for ease of support, we recommend giving your submission script a relevant name with a .job suffix, for example MATLABtest.job
Example batch scripts
The following are a set of basic job submission scripts along with relevant additional information on how these can be adjusted to suit the task.
To submit a job, use sbatch jobscript.job, for example:

[username@login01 ~]$ sbatch MATLABtest.job
Submitted batch job 289522
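If you want to use the job ID in a follow-up script (for example, to name result directories or to set up job dependencies), it can be extracted from sbatch's output. The sketch below simulates the output shown above rather than calling sbatch itself; the variable names are illustrative.

```shell
# On Viper this would be: output=$(sbatch MATLABtest.job)
output="Submitted batch job 289522"

# The job ID is the fourth word of the message
jobid=$(echo "$output" | awk '{print $4}')
echo "$jobid"    # prints 289522
```

sbatch also accepts a --parsable flag, which prints just the job ID and avoids the need to parse the message at all.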
Basic batch job
#!/bin/bash
#SBATCH -J jobname       # Job name, you can change it to whatever you want
#SBATCH -n 1             # Number of cores
#SBATCH -o %N.%j.out     # Standard output will be written here
#SBATCH -e %N.%j.err     # Standard error will be written here
#SBATCH -p compute       # Slurm partition, where you want the job to be queued

module purge
module add modulename

command
Exclusive batch job
#!/bin/bash
#SBATCH -J jobname       # Job name, you can change it to whatever you want
#SBATCH -o %N.%j.out     # Standard output will be written here
#SBATCH -e %N.%j.err     # Standard error will be written here
#SBATCH -p compute       # Slurm partition, where you want the job to be queued
#SBATCH --exclusive      # Request exclusive access to a node (all 28 cores, 128GB of RAM)

module purge
module add modulename

command
Examples of where exclusive access is useful include benchmarking (where other jobs on the node would distort timings), multi-threaded applications that can use every core on a node, and jobs that need all of a node's memory.
Parallel batch jobs
Intel MPI parallel batch job
#!/bin/bash
#SBATCH -J jobname       # Job name, you can change it to whatever you want
#SBATCH -n 28            # Number of cores
#SBATCH -N 4             # Number of nodes
#SBATCH -o %N.%j.out     # Standard output will be written here
#SBATCH -e %N.%j.err     # Standard error will be written here
#SBATCH -p compute       # Slurm partition, where you want the job to be queued

module purge
module add intel/2017

mpirun command
MVAPICH parallel batch job
#!/bin/bash
#SBATCH -J jobname       # Job name, you can change it to whatever you want
#SBATCH -n 28            # Number of cores
#SBATCH -N 4             # Number of nodes
#SBATCH -o %N.%j.out     # Standard output will be written here
#SBATCH -e %N.%j.err     # Standard error will be written here
#SBATCH -p compute       # Slurm partition, where you want the job to be queued

module purge
module add mvapich2/2.2/gcc-6.3.0

mpirun command
There are various options for the mvapich2 module, see mvapich2
OpenMPI parallel batch job
#!/bin/bash
#SBATCH -J jobname       # Job name, you can change it to whatever you want
#SBATCH -n 28            # Number of cores
#SBATCH -N 4             # Number of nodes
#SBATCH -o %N.%j.out     # Standard output will be written here
#SBATCH -e %N.%j.err     # Standard error will be written here
#SBATCH -p compute       # Slurm partition, where you want the job to be queued

module purge
module add openmpi/1.10.5/gcc-6.3.0

mpirun command
There are various options for the OpenMPI module, see openmpi
High memory batch job
If your task requires more memory than the standard provision (approximately 4GB per core) then you need to include a directive in your submission script to request the appropriate resource. The standard compute nodes have 128GB of RAM available, and there are dedicated high memory nodes which have a total of 1TB of RAM. If your job requires more than 128GB of RAM, submit it to the highmem partition.
The following job submission script runs on the highmem partition and uses the #SBATCH --mem flag to request 500GB of RAM, which results in three things:
- The job will only be allocated to a node with this much memory available
- No other jobs will be allocated to this node unless their memory requirements fit in with the remaining available memory
- If the job exceeds this requested value, the task will terminate
#!/bin/bash
#SBATCH -J jobname       # Job name, you can change it to whatever you want
#SBATCH -n 1             # Number of cores
#SBATCH -o %N.%j.out     # Standard output will be written here
#SBATCH -e %N.%j.err     # Standard error will be written here
#SBATCH -p highmem       # Slurm partition, where you want the job to be queued
#SBATCH --mem=500G       # Request 500GB of RAM

module purge
module add modulename

command
If a job exceeds the requested amount of memory, it will terminate with an error message similar to the following (from a job which ran with a memory limit of 2GB):
slurmstepd: Step 307110.0 exceeded memory limit (23933492 > 2097152), being killed
srun: Job step aborted: Waiting up to 32 seconds for job step to finish.
srun: got SIGCONT
slurmstepd: Exceeded job memory limit
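The figures in the message are in kilobytes, so dividing by 1024 twice converts them to gigabytes and shows how far over the limit the job went. This sketch simply reproduces that arithmetic for the numbers in the message above:

```shell
used_kb=23933492     # memory used, taken from the error message (KB)
limit_kb=2097152     # requested limit, from the error message: 2GB in KB

# Convert both to GB (integer division rounds down)
echo $(( limit_kb / 1024 / 1024 ))   # prints 2
echo $(( used_kb / 1024 / 1024 ))    # prints 22
```

So the job actually tried to use roughly 22-23GB against a 2GB limit, and the --mem request should be raised accordingly before resubmitting.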
GPU batch job
#!/bin/bash
#SBATCH -J jobname       # Job name, you can change it to whatever you want
#SBATCH -n 1             # Number of cores
#SBATCH -o %N.%j.out     # Standard output will be written here
#SBATCH -e %N.%j.err     # Standard error will be written here
#SBATCH --gres=gpu       # Request a GPU resource
#SBATCH -p gpu           # Slurm partition, where you want the job to be queued

module purge
module add modulename

command
- If you require more than one GPU within a single node, use the directive #SBATCH --gres=gpu:N (where N is the number required). In some cases the GPU family name should also be included, for example #SBATCH --gres=gpu:tesla:2 for 2 Tesla GPUs.
Array batch job
An array batch job allows multiple jobs to be executed with identical parameters based on a single job submission. By using the directive #SBATCH --array 1-10 the same job will be run 10 times. The indexes specification identifies what array index values should be used. Multiple values may be specified using a comma separated list and/or a range of values with a "-" separator. For example, "--array=0-15" or "--array=0,6,16-32".
A step function can also be specified with a suffix containing a colon and number. For example, "--array=0-15:4" is equivalent to "--array=0,4,8,12". A maximum number of simultaneously running tasks from the job array may be specified using a "%" separator. For example "--array=0-15%4" will limit the number of simultaneously running tasks from this job array to 4.
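The index expansion produced by a step function can be checked with seq, which takes the same start, step, and end values. For example, the indices Slurm generates for "--array=0-15:4":

```shell
# seq START STEP END lists the same indices Slurm would use for --array=0-15:4
seq 0 4 15
# prints:
# 0
# 4
# 8
# 12
```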
The variable $SLURM_ARRAY_TASK_ID can be used within the batch script, being replaced by the index of the job, for example as part of the input or data filename etc.
When the batch script below is submitted, 10 jobs will run, resulting in the command being run with its first argument set to the array index of that task, for instance: command 1, command 2, through to command 10. The output of each task will be logged to a different out and err file, with the format <node job ran on>.<job ID>.<array index>.out and <node job ran on>.<job ID>.<array index>.err
#!/bin/bash
#SBATCH -J jobname       # Job name, you can change it to whatever you want
#SBATCH -n 1             # Number of cores
#SBATCH -o %N.%A.%a.out  # Standard output will be written here
#SBATCH -e %N.%A.%a.err  # Standard error will be written here
#SBATCH -p compute       # Slurm partition, where you want the job to be queued
#SBATCH --array 1-10     # Run the job once for each index from 1 to 10

module purge
module add modulename

command $SLURM_ARRAY_TASK_ID
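A common use of $SLURM_ARRAY_TASK_ID is to pick a different input file for each array task. The filename pattern below is illustrative; inside a real array job the variable is set by Slurm, so here it is set by hand purely to show the substitution:

```shell
# Slurm sets this automatically inside an array job;
# it is set manually here only to demonstrate the expansion
SLURM_ARRAY_TASK_ID=3

input="data_${SLURM_ARRAY_TASK_ID}.txt"
echo "$input"    # prints data_3.txt
```

With --array 1-10, the tasks would then process data_1.txt through data_10.txt independently.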
Advanced Options
- Job time (to be added)
- Dependencies (to be added)
- Reservations (to be added)