Applications/Savu

Application Details

  • Description: Python package to assist with the processing and reconstruction of parallel-beam tomography data.
  • Version: 2.3
  • Modules: savu/2.3/gcc-5.2.0/openmpi-2.0.2 and savu/2.3.1/gcc-5.2.0/openmpi-2.0.2
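
Loading either module makes the savu, savu_config and savu_mpijob.sh commands used below available. For example, to pick the 2.3.1 build instead of 2.3:

[username@login01 ~]$ module add savu/2.3.1/gcc-5.2.0/openmpi-2.0.2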

Usage Examples

Interactive

This mode is usually used on a smaller slice of your data, to work out what kind of processing you want to run and to tune it until it shows the expected behaviour.

Note: Savu supports both an interactive mode (shown below) and an execution mode, where it runs as an interpreted script.


[username@login01 ~]$ interactive
salloc: Granted job allocation 402968
Job ID 402968 connecting to c059, please wait...
Last login: Wed Jul 12 16:23:48 2017 from 10.254.5.246
[username@c059 ~]$ module add savu/2.3/gcc-5.2.0/openmpi-2.0.2
[username@c059 ~]$ savu_config

or


[username@c059 ~]$ savu /Path/To_input/Data/file.nxs /List/Of_processes/To_be_performed/process.nxs /path/to/output/directory
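
The savu_config step above is where you build the process list file (the second argument to the savu command). A minimal sketch of such a session follows; the exact prompt, commands and plugin names can differ between Savu versions, so treat the plugin names here as purely illustrative:

[username@c059 ~]$ savu_config
>>> list
>>> add NxtomoLoader
>>> add AstraReconCpu
>>> disp
>>> save my_process_list.nxs
>>> exit

Here list shows the available plugins, add appends a plugin to the process list, disp displays the current list, and save writes it to a .nxs file that can then be passed to savu or savu_mpijob.sh.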


Batch Submission

Once you are happy with the results, you can start submitting bigger jobs to the cluster. Below is an example of running savu_mpi to process data on multiple nodes.

Note: the current installation supports GPU processing via the CUDA (version 8) libraries; a sketch of a GPU job is given after the command breakdown below.

#!/bin/bash
#SBATCH -J My_Savu_job     # Job name, you can change it to whatever you want
#SBATCH -N 2               # Number of nodes 
#SBATCH -n 28              # Number of cores
#SBATCH -o %N.%j.out       # Standard output will be written here
#SBATCH -e %N.%j.err       # Standard error will be written here
#SBATCH -p compute         # Slurm partition, where you want the job to be queued 
#SBATCH --exclusive        # run on one node without any other users

 
module purge
module add savu/2.3/gcc-5.2.0/openmpi-2.0.2

# This line calculates the total number of CPUs and places it in the CPUs variable
CPUs=$(($SLURM_NNODES * $SLURM_CPUS_ON_NODE))
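# e.g. with -N 2 and 28 cores per node this evaluates to 2 * 28 = 56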
echo " will run on $CPUs CPUs"

savu_mpijob.sh ./apps/savu/test/24737.nxs ./apps/savu/test/basic_tomo_process.nxs ./apps/savu/test $CPUs 0

Savu MPI run command:

  • savu_mpijob.sh --> this is the main run command
  • ./apps/savu/test/24737.nxs --> this is the input file, change it to your input file path
  • ./apps/savu/test/basic_tomo_process.nxs --> this is the list of processes to be performed, change it to your process list file path
  • ./apps/savu/test --> this is the output directory
  • $CPUs 0 --> the number of CPUs and GPUs to use. The CPU count calculated above (2 nodes x 28 cores = 56) and zero GPUs run properly on the compute partition.
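
As noted above, the installation supports GPUs through the CUDA (version 8) libraries. The sketch below shows what a GPU variant of the job script might look like; the gpu partition name and the --gres request are assumptions about the cluster's Slurm configuration (check with your site), and the Savu-specific change is simply passing a nonzero GPU count as the final argument:

#!/bin/bash
#SBATCH -J My_Savu_gpu_job # Job name
#SBATCH -N 1               # Number of nodes
#SBATCH -n 28              # Number of cores
#SBATCH -o %N.%j.out       # Standard output will be written here
#SBATCH -e %N.%j.err       # Standard error will be written here
#SBATCH -p gpu             # Assumed GPU partition name - check which partitions your site provides
#SBATCH --gres=gpu:1       # Request one GPU (assumes GPUs are exposed through Slurm gres)
#SBATCH --exclusive        # run on one node without any other users

module purge
module add savu/2.3/gcc-5.2.0/openmpi-2.0.2

CPUs=$(($SLURM_NNODES * $SLURM_CPUS_ON_NODE))

# Final argument is the GPU count: 1 here instead of 0
savu_mpijob.sh ./apps/savu/test/24737.nxs ./apps/savu/test/basic_tomo_process.nxs ./apps/savu/test $CPUs 1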

Once you are happy with your script, submit it to Slurm using the following command.

[username@login01 ~]$ sbatch savu.job
Submitted batch job 589522
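
After submission you can check on the job with the usual Slurm tools; standard output and error will appear in the %N.%j.out and %N.%j.err files named in the script (the node name depends on where the job runs). For example:

[username@login01 ~]$ squeue -u username
[username@login01 ~]$ tail -f *.589522.out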

Further Information

Savu guide: http://savu.readthedocs.io/




