Applications/Beast
Latest revision as of 10:03, 22 November 2022

Application Details

  • Description: BEAST is a cross-platform program for Bayesian analysis of molecular sequences using MCMC.
  • Version: 1.8.3
  • Modules: beast/1.8.3
  • Licence: Free, open-source

Usage Examples

BEAST is a cross-platform program for Bayesian analysis of molecular sequences using MCMC. It is entirely oriented towards rooted, time-measured phylogenies inferred using strict or relaxed molecular clock models. It can be used as a method of reconstructing phylogenies, but it is also a framework for testing evolutionary hypotheses without conditioning on a single tree topology. BEAST uses MCMC to average over tree space, so that each tree is weighted proportionally to its posterior probability. A simple-to-use user-interface program for setting up standard analyses is included, along with a suite of programs for analysing the results.

BEAST can make use of highly parallel processors such as those in 3D graphics boards (Graphics Processing Units, or GPUs) found in many PCs, via the BEAGLE library (loaded here as the libbeagle module). In general, using BEAGLE (even without a GPU) will improve the performance of BEAST. However, installing BEAGLE for BEAST to use is not presently a simple operation (though this should improve), and it will not necessarily benefit all data sets. In particular, for the use of a GPU (currently only NVIDIA GPUs are supported) to be efficient, long partitions are required (perhaps >500 unique site patterns).
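Before committing to a long run, it can be worth checking which BEAGLE resources (CPU or GPU) BEAST can actually see. In BEAST 1.x the -beagle_info flag prints the available BEAGLE resources; a hedged sketch, assuming the beast and libbeagle modules from the examples below are already loaded:

```shell
# List the BEAGLE resources (CPU/GPU) visible to BEAST.
# Assumes: module add beast/1.8.3 libbeagle/2.1.2 java/jdk1.8.0_102
beast -beagle_info
```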

Interactive

While logged into a reserved high memory node (c230):

[username@c230 ~]$ module add beast/1.8.3
[username@c230 ~]$ module add libbeagle/2.1.2 
[username@c230 ~]$ module add java/jdk1.8.0_102
[username@c230 ~]$ beast Random_Cratopus_BEAST.xml


Non Interactive

Using a SLURM script to run an array-based BEAST processing job across the compute nodes:


#!/bin/bash
#SBATCH -J Random_BEAST
#SBATCH -N 1
#SBATCH -o %A-%a.log
#SBATCH -e %A-%a.err
#SBATCH -p compute

module add beast/1.8.3
module add libbeagle/2.1.2 
module add java/jdk1.8.0_102


# Each array task works on its own copy of the input XML, renamed with
# the task ID so output files from different tasks do not collide.
cp Random_Cratopus_BEAST.xml Random_Cratopus_BEAST_${SLURM_ARRAY_TASK_ID}.xml
sed -i -e "s/Random_Cratopus_BEAST/Random_Cratopus_BEAST_${SLURM_ARRAY_TASK_ID}/g" Random_Cratopus_BEAST_${SLURM_ARRAY_TASK_ID}.xml

beast "Random_Cratopus_BEAST_${SLURM_ARRAY_TASK_ID}".xml
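The copy-and-rename step in the script can be dry-run on a login node without SLURM or BEAST by setting SLURM_ARRAY_TASK_ID by hand; a minimal sketch (the XML content below is a stand-in, not a real BEAST input file):

```shell
# Fake an array task ID and a template XML, then apply the same
# cp + sed renaming that the batch script performs per task.
SLURM_ARRAY_TASK_ID=7
printf '<beast id="Random_Cratopus_BEAST"/>\n' > Random_Cratopus_BEAST.xml
cp Random_Cratopus_BEAST.xml "Random_Cratopus_BEAST_${SLURM_ARRAY_TASK_ID}.xml"
sed -i -e "s/Random_Cratopus_BEAST/Random_Cratopus_BEAST_${SLURM_ARRAY_TASK_ID}/g" \
    "Random_Cratopus_BEAST_${SLURM_ARRAY_TASK_ID}.xml"
cat Random_Cratopus_BEAST_7.xml   # <beast id="Random_Cratopus_BEAST_7"/>
```

Each task thus reads and writes only files carrying its own index, which is what keeps concurrent array tasks from clobbering one another.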


  • Then submit this to SLURM, supplying the array range on the command line (e.g. sbatch --array=1-10):


[username@login01 ~]$ sbatch --array=1-10 BeastDEMO.job
Submitted batch job 345663
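The -o %A-%a.log and -e %A-%a.err directives in the script name the per-task output files: SLURM expands %A to the parent array job ID and %a to the task index. A small illustration of the resulting names, reusing the example job ID above with a hypothetical task index:

```shell
# Reproduce the filename SLURM builds from %A (job ID) and %a (task index).
JOB_ID=345663     # example job ID from the sbatch output above
TASK_ID=3         # hypothetical array task index
LOG_FILE="${JOB_ID}-${TASK_ID}.log"
echo "$LOG_FILE"   # 345663-3.log
```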

Further Information
