Applications/Savu

Application Details

  • Description: Python package to assist with the processing and reconstruction of parallel-beam tomography data.
  • Version: 2.0
  • Modules: savu/2.0/gcc-5.2.0/openmpi-2.0.2

Usage Examples

Interactive

This mode is usually used on a smaller slice of your data, to define the processing you want to run and to tune it until it shows the expected behaviour.

Note: Savu supports both interactive mode (as below) and execution mode, running as an interpreted script.


[username@login01 ~]$ interactive
salloc: Granted job allocation 402968
Job ID 402968 connecting to c059, please wait...
Last login: Wed Jul 12 16:23:48 2017 from 10.254.5.246
[username@c059 ~]$ module load savu/2.0/gcc-5.2.0/openmpi-2.0.2
[username@c059~]$ savu_config

or


[username@c059~]$  savu /Path/To_input/Data/file.nxs /List/Of_processes/To_be_performed/process.nxs /path/to/output/directory

Example:

  • You will find some sample data files in this path: /trinity/clustervision/CentOS/7/apps/savu/2.0/gcc-5.2.0/openmpi-2.0.2/Savu_2.0/miniconda/pkgs/savu-2.0-h4571989_0/lib/python2.7/site-packages/savu-2.0-py2.7.egg/test_data/data
  • You will find a list of process files in this path: /trinity/clustervision/CentOS/7/apps/savu/2.0/gcc-5.2.0/openmpi-2.0.2/Savu_2.0/miniconda/pkgs/savu-2.0-h4571989_0/lib/python2.7/site-packages/savu-2.0-py2.7.egg/test_data/test_process_lists

Note: you must copy the required files to your working directory
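
For example, to copy the sample data file and process list into a working directory under your home (a minimal sketch using the paths listed above; the SAVU_EGG variable is only a shorthand introduced here, and Test_savu is the example directory used below):

[username@c059 ~]$ SAVU_EGG=/trinity/clustervision/CentOS/7/apps/savu/2.0/gcc-5.2.0/openmpi-2.0.2/Savu_2.0/miniconda/pkgs/savu-2.0-h4571989_0/lib/python2.7/site-packages/savu-2.0-py2.7.egg
[username@c059 ~]$ mkdir -p ~/Test_savu/data ~/Test_savu/test_process_lists ~/Test_savu/results
[username@c059 ~]$ cp $SAVU_EGG/test_data/data/24737.nxs ~/Test_savu/data/
[username@c059 ~]$ cp $SAVU_EGG/test_data/test_process_lists/basic_tomo_process.nxs ~/Test_savu/test_process_lists/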

This will run basic_tomo_process.nxs on the file 24737.nxs and place the output in the specified results directory:


[username@c059~]$  savu /home/MyUserID/Test_savu/data/24737.nxs /home/MyUserID/Test_savu/test_process_lists/basic_tomo_process.nxs /home/MyUserID/Test_savu/results

Batch Submission

Once you are happy with the results, you can start submitting bigger jobs to the cluster. Below is an example of running savu_mpi to process data on multiple nodes.

Note: the current installation runs only on compute nodes; the GPU version will be installed later.

#!/bin/bash
#SBATCH -J My_Savu_job     # Job name, you can change it to whatever you want
#SBATCH -N 2               # Number of nodes 
#SBATCH -n 28              # Number of cores
#SBATCH -o %N.%j.out       # Standard output will be written here
#SBATCH -e %N.%j.err       # Standard error will be written here
#SBATCH -p compute         # Slurm partition, where you want the job to be queued 
#SBATCH --exclusive        # Request exclusive use of the allocated nodes (no other users' jobs on them)

 
module purge
module load savu/2.0/gcc-5.2.0/openmpi-2.0.2

# This line calculates the total number of CPUs and places it in the CPUs variable
CPUs=$(($SLURM_NNODES * $SLURM_CPUS_ON_NODE))
echo " will run on $CPUs CPUs"

savu_mpijob.sh  ./apps/savu/test/24737.nxs  ./apps/savu/test/basic_tomo_process.nxs ./apps/savu/test $CPUs 0

Breakdown of each item in the savu run command above (see the adapted example after this list):

  • savu_mpijob.sh --> this is the main run command
  • ./apps/savu/test/24737.nxs --> the input data file; change it to your input file path
  • ./apps/savu/test/basic_tomo_process.nxs --> the list of processes to be performed; change it to your process list file path
  • ./apps/savu/test --> the output directory
  • $CPUs 0 --> the number of CPUs and GPUs; 28 and 0 should run properly on the compute partition.
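
As a sketch only, the same command adapted to the Test_savu working directory from the interactive example above (replace the paths and file names with your own):

savu_mpijob.sh /home/MyUserID/Test_savu/data/24737.nxs /home/MyUserID/Test_savu/test_process_lists/basic_tomo_process.nxs /home/MyUserID/Test_savu/results $CPUs 0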

Once you are happy with the above script, save it (for example as savu.job) and submit it to Slurm using the following command:

[username@login01 ~]$ sbatch savu.job
Submitted batch job 289522
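
You can then check the state of the submitted job with the standard Slurm command squeue, for example:

[username@login01 ~]$ squeue -u $USER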

Further Information

Savu user guide
