Applications/iqtree


Application Details

  • A tool for Phylogenetic Analysis and Post-Analysis of Large Phylogenies.
  • Version: MPI AVX 8.2.11
  • Licence: Open source

Usage

Interactive

While logged into the Viper login node, you can find the example file used on this page at /home/ViperAdmin/software/source/iqtree/iqtree-mpi-1.4.4-Linux/example.phy. Copy it to the location where you would like to run the steps below.
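For example (iqtree_test is just a placeholder directory name; the copy below also renames the file to myalign.phy so that it matches the commands used later on this page):

[username@login01 ~]$ mkdir -p ~/iqtree_test
[username@login01 ~]$ cp /home/ViperAdmin/software/source/iqtree/iqtree-mpi-1.4.4-Linux/example.phy ~/iqtree_test/myalign.phy
[username@login01 ~]$ cd ~/iqtree_test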

[username@login01 ~]$ interactive
salloc: Granted job allocation 296769
Job ID 296769 connecting to c170, please wait...
Last login: Wed Jan 25 09:10:51 2017 from 10.254.5.246
[username@c170 ~]$ module add openmpi/gcc/1.10.2
[username@c170 ~]$ /trinity/clustervision/CentOS/7/apps/raxml/8.2.11/openmpi-1.10.2/gcc-4.9.3/bin/raxmlHPC-MPI-AVX -N 10 -m GTRCAT -s myalign.phy -n myalign.phy
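Run like this, the MPI binary uses a single process. If you would like to spread the 10 searches over several of the cores in your interactive allocation, a sketch of the usual approach is to launch it through mpirun (the rank count of 4 and the run name myalign_mpi are only examples; a different -n name is used so the run does not clash with output files from the earlier command):

[username@c170 ~]$ mpirun -np 4 /trinity/clustervision/CentOS/7/apps/raxml/8.2.11/openmpi-1.10.2/gcc-4.9.3/bin/raxmlHPC-MPI-AVX -N 10 -m GTRCAT -s myalign.phy -n myalign_mpi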

Batch Session

Below is how to submit the same example using our scheduler, Slurm. Please replace /home/USERNAME/PATH/OF/DIR with the path of the directory where you copied the example.phy file, or with the directory containing the file you want to use.

#!/bin/bash
#SBATCH -J IQtree_example   # Job name, you can change it to whatever you want
#SBATCH -o %N.%j.out        # Standard output will be written here
#SBATCH -e %N.%j.err        # Standard error will be written here
#SBATCH -n 14               # number of cores
#SBATCH -N 1                # Number of nodes
#SBATCH -p highmem          # Slurm partition, where you want the job to be queued
#SBATCH --exclusive         # Request exclusive access to a node (all 28 cores, 128GB of RAM)

module load openmpi/gcc/1.10.2
cd /home/USERNAME/PATH/OF/DIR
/trinity/clustervision/CentOS/7/apps/raxml/8.2.11/openmpi-1.10.2/gcc-4.9.3/bin/raxmlHPC-MPI-AVX -N 10 -m GTRCAT -s myalign.phy -n myalign.phy
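As written, the script starts the MPI binary directly, so it runs as a single process even though 14 cores are requested. If you would like the 10 searches to run across several MPI ranks, one option (a sketch only, not part of the original script) is to replace the last line of the script with:

mpirun /trinity/clustervision/CentOS/7/apps/raxml/8.2.11/openmpi-1.10.2/gcc-4.9.3/bin/raxmlHPC-MPI-AVX -N 10 -m GTRCAT -s myalign.phy -n myalign.phy

Open MPI builds with Slurm support normally take the number of ranks from the job allocation; if yours does not, add -np 14 to match the #SBATCH -n line. Either way, the submission step below is the same.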

Then the next step is to submit the above script to Slurm. For example, if you saved the above text file as raxml.sh:


[user@login01 ~]$ sbatch raxml.sh
Submitted batch job 409671
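You can then check the state of the job and, once it starts, look at its output. The output and error files are named after the node and job ID, as set by the #SBATCH -o and -e lines (c170 below is just an example node name, use the one shown by squeue):

[user@login01 ~]$ squeue -u user
[user@login01 ~]$ cat c170.409671.out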

Further Information

