Applications/Savu
Application Details
- Description: Python package to assist with the processing and reconstruction of parallel-beam tomography data.
- Version: 2.3
- Modules: savu/2.3/gcc-5.2.0/openmpi-2.0.2 and savu/2.3.1/gcc-5.2.0/openmpi-2.0.2
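If you are unsure which builds are installed, the standard environment-modules commands can list and load them; a minimal sketch (the avail output will vary by cluster):

[username@login01 ~]$ module avail savu                               # list the installed Savu modules
[username@login01 ~]$ module add savu/2.3.1/gcc-5.2.0/openmpi-2.0.2   # load the 2.3.1 build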
Usage Examples
Interactive
This mode is usually used on a smaller slice of your data, to work out what kind of processing you want to run and to tune it until it shows the expected behaviour.

Note: Savu supports both an interactive mode (as below) and execution as an interpreted script.
[username@login01 ~]$ interactive
salloc: Granted job allocation 402968
Job ID 402968 connecting to c059, please wait...
Last login: Wed Jul 12 16:23:48 2017 from 10.254.5.246
[username@c059 ~]$ module add savu/2.3/gcc-5.2.0/openmpi-2.0.2
[username@c059 ~]$ savu_config
or
[username@c059 ~]$ savu /Path/To_input/Data/file.nxs /List/Of_processes/To_be_performed/process.nxs /path/to/output/directory
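As a minimal sketch, an interactive run on the test data used in the batch example below could look like this (the output directory is illustrative, and the process list would normally be one you have created or amended with savu_config):

[username@c059 ~]$ mkdir -p ~/savu_test_output
[username@c059 ~]$ savu /apps/savu/test/24737.nxs /apps/savu/test/basic_tomo_process.nxs ~/savu_test_output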
Batch Submission
Once you are happy with the results, you can start submitting bigger jobs to the cluster. Below is an example of running savu_mpi to process data across multiple nodes.

Note: the current installation supports GPU processing via CUDA (version 8) libraries.
#!/bin/bash
#SBATCH -J My_Savu_job       # Job name, you can change it to whatever you want
#SBATCH -N 2                 # Number of nodes
#SBATCH -n 28                # Number of cores
#SBATCH -o %N.%j.out         # Standard output will be written here
#SBATCH -e %N.%j.err         # Standard error will be written here
#SBATCH -p compute           # Slurm partition, where you want the job to be queued
#SBATCH --exclusive          # Do not share the allocated nodes with other users

module purge
module add savu/2.3/gcc-5.2.0/openmpi-2.0.2

# This line will calculate the total number of CPUs and place it in the CPUs variable
CPUs=$(($SLURM_NNODES * $SLURM_CPUS_ON_NODE))
echo "Will run on $CPUs CPUs"

savu_mpijob.sh ./apps/savu/test/24737.nxs ./apps/savu/test/basic_tomo_process.nxs ./apps/savu/test $CPUs 0
Savu MPI run command:
- savu_mpijob.sh --> this is the main run command
- /apps/savu/test/24737.nxs --> this is the input file; change it to your input file path
- /apps/savu/test/basic_tomo_process.nxs --> this is the list of processes to be performed; change it to your process list file path
- /apps/savu/test --> this is the output directory
- $CPUs 0 --> the number of CPUs and the number of GPUs; 28 and zero should run properly on the compute partition (a GPU variant is sketched after this list)
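Because the installation is built against CUDA 8, the final GPU-count argument can be raised to make use of GPU nodes. Below is an illustrative variant of the script above; the gpu partition name and the four GPUs per node are assumptions, so check your cluster's documentation before using them:

#!/bin/bash
#SBATCH -J My_Savu_gpu_job   # Job name
#SBATCH -N 1                 # Number of nodes
#SBATCH -n 28                # Number of cores
#SBATCH -o %N.%j.out         # Standard output will be written here
#SBATCH -e %N.%j.err         # Standard error will be written here
#SBATCH -p gpu               # Assumed name of the GPU partition
#SBATCH --gres=gpu:4         # Assumed four GPUs per node
#SBATCH --exclusive          # Do not share the node with other users

module purge
module add savu/2.3/gcc-5.2.0/openmpi-2.0.2

# Total number of CPUs allocated to the job
CPUs=$(($SLURM_NNODES * $SLURM_CPUS_ON_NODE))

# The last argument is now 4 to match the --gres request above
savu_mpijob.sh ./apps/savu/test/24737.nxs ./apps/savu/test/basic_tomo_process.nxs ./apps/savu/test $CPUs 4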
Once you are happy with the above script, submit it to Slurm using the following command.
[username@login01 ~]$ sbatch savu.job
Submitted batch job 589522
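After submission, the job can be monitored with the standard Slurm commands; replace the job ID with the one returned by sbatch (the node name in the output file name is illustrative, following the %N.%j.out pattern set in the script):

[username@login01 ~]$ squeue -u $USER        # list your queued and running jobs
[username@login01 ~]$ sacct -j 589522        # accounting summary for the job
[username@login01 ~]$ cat c059.589522.out    # standard output written by the job

Further Information

Savu guide: http://savu.readthedocs.io/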