Applications/Nwchem


Application Details

  • Description : NWChem provides computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters.
  • Versions : x86/6.6 and cuda/6.6
  • Module names : nwchem-x86/6.6 and nwchem-cuda/6.6
  • License: Open Source


Toolboxes

No toolboxes are listed here.

Modules Available

  • module add nwchem-x86/6.6
  • module add nwchem-cuda/6.6
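
Before using these in a job script you can check from a login node what a module provides; a quick sanity check (module avail and module show are standard environment-modules commands, and their output depends on the local installation):

module avail nwchem
module show nwchem-x86/6.6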

Usage Examples

Batch Submission

The first example runs the CPU (x86) build of NWChem on the compute queue:



#!/bin/bash
#SBATCH -J nwch-cpu
#SBATCH -N 100
#SBATCH --ntasks-per-node=28
#SBATCH -o %N.%j.%a.out
#SBATCH -e %N.%j.%a.err
#SBATCH -p compute
#SBATCH --exclusive

echo $SLURM_JOB_NODELIST

module purge
module add intel/mkl/64/11.3.2
module add intel/mpi/64/5.1.3.181
module add intel/compiler/64/2016

# use shared memory within a node and the TMI fabric between nodes,
# and fail rather than fall back to a slower fabric
export I_MPI_FABRICS=shm:tmi
export I_MPI_FALLBACK=no

module add nwchem-x86/6.6
export NWCHEM_ROOT=$HOME/NWCHEM

module list
mpirun --version

# total MPI processes = nodes x tasks per node (should match $SLURM_NTASKS)
NP=$(( $SLURM_JOB_NUM_NODES * $SLURM_NTASKS_PER_NODE ))
echo $NP "processes"
echo $SLURM_NTASKS

# full path to the nwchem binary installed for the x86/6.6 module
NWCHEM=/trinity/clustervision/CentOS/7/apps/nwchem-6.6/x86/bin/LINUX64/nwchem

mpirun -genvall  -np $NP $NWCHEM py-c1-vdz.inp > py-c1-vdz-2800_CV.out


[username@login01 ~]$ sbatch nwchem-x86.job
Submitted batch job 289522
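
By default SLURM runs the job from the submission directory, so the input deck py-c1-vdz.inp must be there before calling sbatch. Once the job is queued its progress can be followed with standard SLURM commands; the job ID and output file name below are simply the ones from the example above:

squeue -j 289522
tail -n 40 py-c1-vdz-2800_CV.out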

The second example uses the CUDA build of NWChem and runs on the gpu queue:



#!/bin/bash
#SBATCH -J nwchem-gpu
#SBATCH -N 1
#SBATCH --ntasks-per-node=24
#SBATCH -o %N.%j.%a.out
#SBATCH -e %N.%j.%a.err
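# run in the gpu partition and request one GPU for the CUDA-enabled build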
#SBATCH -p gpu
#SBATCH --gres=gpu:1
#SBATCH --exclusive

module purge

module add nwchem-cuda/6.6
module add cuda/7.5.18
module add intel/mkl/64/11.3.2
module add intel/mpi/64/5.1.3.181

module list
nvidia-smi -a

mpirun --version

# calculating the number of processes
NP=$(( $SLURM_JOB_NUM_NODES * $SLURM_NTASKS_PER_NODE ))
echo $NP "processes"

NWCHEM=/trinity/clustervision/CentOS/7/apps/nwchem-6.6/cuda/bin/LINUX64/nwchem

mpirun -np $NP $NWCHEM py-c1-vdz.inp > py-c1-vdz-24c_0GPU.out

[username@login01 ~]$ sbatch nwchem-cuda.job
Submitted batch job 289522
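
To confirm that the run is actually using the GPU, a utilisation logger can be started in the background inside the CUDA job script, just before the mpirun line; this is an optional sketch (the sampling interval and log file name are only illustrative):

# sample GPU utilisation and memory every 30 seconds while NWChem runs
nvidia-smi --query-gpu=timestamp,utilization.gpu,memory.used --format=csv -l 30 > gpu_usage.log &
NVSMI_PID=$!

# ... mpirun line from the script above ...

# stop the logger once the calculation has finished
kill $NVSMI_PID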

Further Information