Applications/iqtree


Application Details

  • Description: IQ-TREE performs maximum likelihood analysis of large phylogenetic datasets. It explores tree space efficiently and often achieves higher likelihoods than RAxML and PhyML. Other key features of IQ-TREE are (i) a very fast model selection procedure, including partition scheme finding, (ii) partitioned analysis for phylogenomic data, (iii) ultrafast bootstrap approximation, (iv) implementation of several branch tests and (v) tree topology tests. W-IQ-TREE, an intuitive and user-friendly web interface and server for IQ-TREE, is also available.
  • Version: MPI 1.4.4
  • Licence: Open source

Usage

Interactive

While logged into a Viper login node, you can find the example file used below at /home/ViperAdmin/software/source/iqtree/iqtree-mpi-1.4.4-Linux/example.phy. Copy it to the directory from which you would like to run the steps below.
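
For example, assuming you want to run from a new directory called iqtree_run in your home area (the directory name is only an illustration):

[username@login01 ~]$ mkdir -p ~/iqtree_run
[username@login01 ~]$ cp /home/ViperAdmin/software/source/iqtree/iqtree-mpi-1.4.4-Linux/example.phy ~/iqtree_run/
[username@login01 ~]$ cd ~/iqtree_run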

[username@login01 ~]$ interactive
salloc: Granted job allocation 296769
Job ID 296769 connecting to c170, please wait...
Last login: Wed Jan 25 09:10:51 2017 from 10.254.5.246
[username@c170 ~]$ module add openmpi/gcc/1.10.2
[username@c170 ~]$ mpirun  /trinity/clustervision/CentOS/7/apps/iqtree/1.4.4/openmpi-1.10.2/gcc-4.9.3/bin/iqtree-mpi -s example.phy
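
IQ-TREE typically writes its results alongside the input alignment, so after this run you should see files such as example.phy.iqtree (the analysis report), example.phy.treefile (the maximum likelihood tree in Newick format) and example.phy.log (the run log). For example:

[username@c170 ~]$ less example.phy.iqtree       # main analysis report
[username@c170 ~]$ cat example.phy.treefile      # ML tree in Newick format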
 

Batch Session

Below is how to submit the same example using our scheduler, Slurm. Please replace /home/USERNAME/PATH/OF/DIR with the path of the directory where you copied the example.phy file, or with the directory containing the file you want to use.

#!/bin/bash
#SBATCH -J IQtree_example   # Job name, you can change it to whatever you want
#SBATCH -o %N.%j.out        # Standard output will be written here
#SBATCH -e %N.%j.err        # Standard error will be written here
#SBATCH -n 14               # number of cores
#SBATCH -N 1                # Number of nodes
#SBATCH -p highmem          # Slurm partition, where you want the job to be queued
#SBATCH --exclusive         # Request exclusive access to a node (all 28 cores, 128GB of RAM)

module add openmpi/gcc/1.10.2
cd /home/USERNAME/PATH/OF/DIR
mpirun  /trinity/clustervision/CentOS/7/apps/iqtree/1.4.4/openmpi-1.10.2/gcc-4.9.3/bin/iqtree-mpi -s example.phy
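
If you want the number of MPI ranks to match the number of tasks requested from Slurm with -n, one option (a minimal variation on the mpirun line above) is to pass the SLURM_NTASKS environment variable explicitly:

mpirun -np $SLURM_NTASKS /trinity/clustervision/CentOS/7/apps/iqtree/1.4.4/openmpi-1.10.2/gcc-4.9.3/bin/iqtree-mpi -s example.phy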


The next step is to submit the above script to Slurm. For example, if you saved the script as iqtree.sh:


[user@login01 ~]$ sbatch iqtree.sh
Submitted batch job 409671
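
You can check on the job with squeue while it is queued or running. Once it finishes, the standard output and error will be in the %N.%j.out and %N.%j.err files defined in the script; the actual names depend on the node and job ID, e.g. c170.409671.out if the job above ran on node c170.

[user@login01 ~]$ squeue -u $USER        # list your queued and running jobs
[user@login01 ~]$ cat c170.409671.out    # standard output (node name and job ID will vary)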

Further Information

