Applications/Savu

Application Details

  • Description: Python package to assist with the processing and reconstruction of parallel-beam tomography data.
  • Version: 2.3
  • Modules: savu/2.3/gcc-5.2.0/openmpi-2.0.2 and savu/2.3.1/gcc-5.2.0/openmpi-2.0.2
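
Both versions are provided through the standard environment modules system. The snippet below is just a quick sanity check that lists the installed Savu modules and loads the newer one; the module names are the ones given above, and in practice you would load the module inside an interactive session or batch script as shown in the examples that follow.

[username@login01 ~]$ module avail savu
[username@login01 ~]$ module add savu/2.3.1/gcc-5.2.0/openmpi-2.0.2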

Usage Examples

Interactive

This mode is usually used on a smaller slice of your data, to work out what kind of processing you want to run and to tune it until it shows the expected behaviour.

Note: Savu supports both the interactive mode shown below and non-interactive execution as a script (see the Batch Submission section).


[username@login01 ~]$ interactive
salloc: Granted job allocation 402968
Job ID 402968 connecting to c059, please wait...
Last login: Wed Jul 12 16:23:48 2017 from 10.254.5.246
[username@c059 ~]$ module add savu/2.3/gcc-5.2.0/openmpi-2.0.2
[username@c059 ~]$ savu_config
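
savu_config is an interactive tool for building the process list that tells Savu which plugins to run on your data. The session below is only a sketch: add, disp, save and exit are the usual savu_config commands (add appends a plugin to the list, disp shows the current list, save writes it to a file, exit quits), while NxtomoLoader is just an example plugin name and my_process_list.nxs an arbitrary file name; type help inside the tool or see the Savu guide for the full command set.

>>> add NxtomoLoader
>>> disp
>>> save my_process_list.nxs
>>> exit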

Alternatively, run savu directly on a data file and a saved process list:


[username@c059 ~]$ savu /path/to/input_data/file.nxs /path/to/process_list/process.nxs /path/to/output/directory
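
As a concrete illustration, the lines below would apply the bundled basic_tomo_process.nxs process list to the test file 24737.nxs and write the output to a results directory under your current directory. The ./apps/savu/test paths are only an assumption that the Savu test data has been copied there (the same paths are used in the batch example below), so adjust them to wherever the files actually live.

[username@c059 ~]$ mkdir -p ./results
[username@c059 ~]$ savu ./apps/savu/test/24737.nxs ./apps/savu/test/basic_tomo_process.nxs ./results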


Batch Submission

Once you are happy with the results, you can start submitting larger jobs to the cluster. Below is an example of running Savu under MPI (savu_mpijob.sh) to process data across multiple nodes.

Note: the current installation supports GPU processing via CUDA (version 8) libraries; a GPU job sketch follows the command breakdown below.

#!/bin/bash
#SBATCH -J My_Savu_job     # Job name, you can change it to whatever you want
#SBATCH -N 2               # Number of nodes 
#SBATCH -n 28              # Number of cores
#SBATCH -o %N.%j.out       # Standard output will be written here
#SBATCH -e %N.%j.err       # Standard error will be written here
#SBATCH -p compute         # Slurm partition, where you want the job to be queued 
#SBATCH --exclusive        # do not share the allocated nodes with other users

 
module purge
module add savu/2.3/gcc-5.2.0/openmpi-2.0.2

# Calculate the total number of CPUs across the allocated nodes and store it in the CPUs variable
CPUs=$(($SLURM_NNODES * $SLURM_CPUS_ON_NODE))
echo "Will run on $CPUs CPUs"

savu_mpijob.sh ./apps/savu/test/24737.nxs ./apps/savu/test/basic_tomo_process.nxs ./apps/savu/test $CPUs 0

Savu MPI run command, argument by argument:

  • savu_mpijob.sh --> the main run script
  • ./apps/savu/test/24737.nxs --> the input data file; change this to your own input file path
  • ./apps/savu/test/basic_tomo_process.nxs --> the list of processes to be performed; change this to your own process list file path
  • ./apps/savu/test --> the output directory
  • $CPUs 0 --> the number of CPU processes and the number of GPU processes; 28 and zero should run properly on the compute partition
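
To make use of the GPU support mentioned in the note above, set the last argument of savu_mpijob.sh to the number of GPU processes you want. The script below is a minimal sketch only: the gpu partition name and the --gres=gpu:1 request are assumptions about the local Slurm configuration rather than anything documented on this page, so confirm the partition name, GPU count and CUDA setup with your site before using it.

#!/bin/bash
#SBATCH -J My_Savu_gpu_job # Job name
#SBATCH -N 1               # Number of nodes
#SBATCH -n 28              # Number of cores
#SBATCH -o %N.%j.out       # Standard output will be written here
#SBATCH -e %N.%j.err       # Standard error will be written here
#SBATCH -p gpu             # Assumed name of a GPU partition - confirm locally
#SBATCH --gres=gpu:1       # Request one GPU (assumed GRES syntax - confirm locally)
#SBATCH --exclusive        # do not share the node with other users

module purge
module add savu/2.3/gcc-5.2.0/openmpi-2.0.2

CPUs=$(($SLURM_NNODES * $SLURM_CPUS_ON_NODE))

# Last two arguments: number of CPU processes and number of GPU processes
savu_mpijob.sh ./apps/savu/test/24737.nxs ./apps/savu/test/basic_tomo_process.nxs ./apps/savu/test $CPUs 1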

Once you are happy with your script, submit it to Slurm using the following command.

[username@login01 ~]$ sbatch savu.job
Submitted batch job 589522
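
While the job runs you can follow it with Slurm's standard tools; none of the commands below are Savu-specific, and 589522 is simply the job ID returned by sbatch above.

[username@login01 ~]$ squeue -u $USER            # list your queued and running jobs
[username@login01 ~]$ scontrol show job 589522   # show full details for a single job
[username@login01 ~]$ cat *.589522.out           # standard output, named by the -o %N.%j.out pattern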

Further Information

Savu guide: http://savu.readthedocs.io/





Modules | Main Page | Further Topics