- Description: MIKE
- Version: 2017
- Modules: Private module available on request to appropriate license holders
- Licence: Limited license (The Energy and Environment Institute)
```bash
#!/bin/bash
#SBATCH -J MIKEjob
#SBATCH -N 16
#SBATCH --ntasks-per-node=28
#SBATCH -o %j.log
#SBATCH -e %j.err
#SBATCH -p compute
#SBATCH --time=0-06:00:00

module add use.own
module add MIKE/2017

export OMP_NUM_THREADS=1

NP=$(( $SLURM_JOB_NUM_NODES * $SLURM_NTASKS_PER_NODE ))
echo $NP "processes"

mpirun -n $NP FemEngineHD model.m21fm
```
Note that the sample job above includes a 6-hour time limit.
```
[user@login01 ~]$ sbatch MIKE.job
Submitted batch job 1209671
```
It is useful to carry out scaling tests when running MIKE so that an appropriate amount of resource is requested for a job. Requesting fewer nodes means a job will take longer to run, but it also makes the job more likely to start sooner. The sample job script above requests 16 nodes (448 cores).
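The 448-core figure comes from the same arithmetic the job script uses to set `NP`: the number of nodes multiplied by the tasks per node. A minimal sketch of that calculation (with the values hard-coded rather than taken from the `SLURM_*` environment variables):

```bash
# Total MPI processes = nodes * tasks per node,
# mirroring the NP line in the job script above.
NODES=16
TASKS_PER_NODE=28
NP=$(( NODES * TASKS_PER_NODE ))
echo "$NP processes"
```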
The table below shows runtimes for a test job over different numbers of nodes. In this example, the 3 minutes gained by using 32 nodes instead of 16 provides little benefit, and other users would be better served by having access to the 16 additional nodes. With contention for resources on Viper high, requesting 8 nodes would be a good compromise.
It is useful to carry out scaling tests with any significant change in job type.
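One way to run such a scaling test is to submit the same job script several times, overriding the node count on the `sbatch` command line. The loop below is a hypothetical helper that prints the commands you would run (the job name suffix and the 1-to-32 node range are illustrative choices, not part of the documented workflow):

```bash
# Print the sbatch commands for a scaling test over increasing node counts.
# -N overrides the #SBATCH -N directive in MIKE.job for each submission.
for nodes in 1 2 4 8 16 32; do
    echo "sbatch -N ${nodes} -J MIKE_scale_${nodes} MIKE.job"
done
```

Comparing the runtimes reported in each job's `%j.log` file then shows where adding nodes stops paying off.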