Applications/Savu
Revision as of 15:58, 16 August 2017

Application Details

  • Description: Savu is an open source tomography reconstruction and processing pipeline
  • Version: 2.0
  • Modules: savu/2.0/gcc-5.2.0/openmpi-2.0.2

Usage Examples

Interactive

This mode is usually used on a smaller slice of your data, to work out what processing you want to run and to tune it until it shows the expected behaviour.

Note: Savu supports interactive mode (as below) as well as an execution mode, running as an interpreted script.


[username@login01 ~]$ interactive
salloc: Granted job allocation 402968
Job ID 402968 connecting to c059, please wait...
Last login: Wed Jul 12 16:23:48 2017 from 10.254.5.246
[username@c059 ~]$ module load savu/2.0/gcc-5.2.0/openmpi-2.0.2
[username@c059 ~]$ savu_config

or


[username@c059 ~]$ savu /Path/To_input/Data/file.nxs /List/Of_processes/To_be_performed/process.nxs /path/to/output/directory

Example: this will run basic_tomo_process.nxs on the file 24737.nxs and place the output under results in your current directory:


[username@c059 ~]$ savu /trinity/clustervision/CentOS/7/apps/savu/2.0/gcc-5.2.0/openmpi-2.0.2/Savu_2.0/miniconda/pkgs/savu-2.0-h4571989_0/lib/python2.7/site-packages/savu-2.0-py2.7.egg/test_data/data/24737.nxs /trinity/clustervision/CentOS/7/apps/savu/2.0/gcc-5.2.0/openmpi-2.0.2/Savu_2.0/miniconda/pkgs/savu-2.0-h4571989_0/lib/python2.7/site-packages/savu-2.0-py2.7.egg/test_data/test_process_lists/basic_tomo_process.nxs ./results

Batch Submission

Once you are happy with the results, you can start submitting bigger jobs to the cluster. Below is an example of running savu_mpi to process data on multiple nodes.

#!/bin/bash
#SBATCH -J My_Savu_job     # Job name, you can change it to whatever you want
#SBATCH -N 1               # Number of nodes 
#SBATCH -n 28              # Number of cores
#SBATCH -o %N.%j.out       # Standard output will be written here
#SBATCH -e %N.%j.err       # Standard error will be written here
#SBATCH -p compute         # Slurm partition, where you want the job to be queued 
#SBATCH --exclusive        # run on one node without any other users

 
module purge
module load savu/2.0/gcc-5.2.0/openmpi-2.0.2

#This line will calculate the total number of CPUs and place it in CPUs variable
CPUs=$(($SLURM_NNODES * $SLURM_CPUS_ON_NODE))
echo "Will run on $CPUs CPUs"

savu_mpijob.sh /trinity/clustervision/CentOS/7/apps/savu/2.0/gcc-5.2.0/openmpi-2.0.2/Savu_2.0/miniconda/pkgs/savu-2.0-h4571989_0/lib/python2.7/site-packages/savu-2.0-py2.7.egg ./apps/savu/test/24737.nxs  ./apps/savu/test/basic_tomo_process.nxs ./apps/savu/test $CPUs 0

A breakdown of each item in the savu run command above:

  • savu_mpijob.sh --> this is the main run command
  • /trinity/clustervision/CentOS/7/apps/savu/2.0/gcc-5.2.0/openmpi-2.0.2/Savu_2.0/miniconda/pkgs/savu-2.0-h4571989_0/lib/python2.7/site-packages/savu-2.0-py2.7.egg --> this is the path of the Savu installation; leave it as it is
  • ./apps/savu/test/24737.nxs --> this is the input file; change it to your input file path
  • ./apps/savu/test/basic_tomo_process.nxs --> this is the list of processes to be performed; change it to your process list file path
  • ./apps/savu/test --> this is the output directory
  • $CPUs 0 --> the number of CPUs and GPUs; 28 and zero should run properly on the compute partition
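The CPU count passed as the penultimate argument comes from the arithmetic expansion in the job script. A minimal sketch of that calculation, using hypothetical values in place of the SLURM_NNODES and SLURM_CPUS_ON_NODE variables that Slurm exports inside a real job:

```shell
#!/bin/sh
# Hypothetical values standing in for what Slurm sets inside a job:
# SLURM_NNODES is the node count, SLURM_CPUS_ON_NODE the cores per node.
SLURM_NNODES=2
SLURM_CPUS_ON_NODE=28

# Same arithmetic expansion as in the job script above.
CPUs=$(($SLURM_NNODES * $SLURM_CPUS_ON_NODE))
echo "$CPUs"    # prints 56 for these values
```

With -N 1 and --exclusive on a 28-core compute node, this evaluates to 28, matching the core count requested with -n.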
Submit the job with sbatch:

[username@login01 ~]$ sbatch savu.job
Submitted batch job 289522
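If you want to act on the job later (for example with squeue or scancel), the job ID can be captured from sbatch's output. A sketch, simulating the "Submitted batch job" line with a fixed string since the real sbatch command only works on the cluster:

```shell
#!/bin/sh
# On the cluster you would write: sbatch_output=$(sbatch savu.job)
# Here we simulate the output line for illustration.
sbatch_output="Submitted batch job 289522"

# The job ID is the fourth whitespace-separated field.
jobid=$(echo "$sbatch_output" | awk '{print $4}')
echo "$jobid"   # prints 289522

# Typical follow-ups on the cluster (not run here):
#   squeue -j "$jobid"    # check queue status
#   scancel "$jobid"      # cancel the job
```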