Applications/Nwchem
Application Details
- Description: NWChem provides computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources, from high-performance parallel supercomputers to conventional workstation clusters.
- Versions: x86/6.6 and cuda/6.6
- Module names: nwchem-x86/6.6 and nwchem-cuda/6.6
- License: Open Source
- Further information: http://www.nwchem-sw.org/
Toolboxes
No toolboxes are listed here.
Modules Available
- module load nwchem-x86/6.6
- module load nwchem-cuda/6.6
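Before writing a batch script, a module can be loaded interactively on a login node to check that it resolves, for example:

[username@login01 ~]$ module purge
[username@login01 ~]$ module load nwchem-x86/6.6
[username@login01 ~]$ module list

module purge first removes any previously loaded modules, mirroring the batch scripts below.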
Usage Examples
Batch Submission
The first example runs the x86 build on the compute queue:
#!/bin/bash
#SBATCH -J nwch-cpu
#SBATCH -N 100
#SBATCH --ntasks-per-node=28
#SBATCH -o %N.%j.%a.out
#SBATCH -e %N.%j.%a.err
#SBATCH -p compute
#SBATCH --exclusive

echo $SLURM_JOB_NODELIST

module purge
module load intel/mkl/64/11.3.2
module load intel/mpi/64/5.1.3.181
module load intel/compiler/64/2016
export I_MPI_FABRICS=shm:tmi
export I_MPI_FALLBACK=no
module load nwchem-x86/6.6
export NWCHEM_ROOT=$HOME/NWCHEM
module list
mpirun --version

# calculating the number of processes
NP=$(( $SLURM_JOB_NUM_NODES * $SLURM_NTASKS_PER_NODE ))
echo $NP "processes"
echo $SLURM_NTASKS

NWCHEM=/trinity/clustervision/CentOS/7/apps/nwchem-6.6/x86/bin/LINUX64/nwchem
mpirun -genvall -np $NP $NWCHEM py-c1-vdz.inp > py-c1-vdz-2800_CV.out
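The last line of the script runs NWChem on the input deck py-c1-vdz.inp. For illustration only (this is not the contents of that file), a minimal NWChem input deck can be created as follows; the molecule, basis set and file name here are hypothetical:

# Hypothetical example of an NWChem input deck (not py-c1-vdz.inp):
cat > h2o-scf.inp <<'EOF'
start h2o_scf
title "Water SCF/cc-pVDZ"
geometry units angstrom
  O   0.000   0.000   0.000
  H   0.757   0.586   0.000
  H  -0.757   0.586   0.000
end
basis
  * library cc-pvdz
end
task scf energy
EOF

The job script is then submitted from the login node: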
[username@login01 ~]$ sbatch nwchem-x86.job
Submitted batch job 289522
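The state of a submitted job can be checked with the standard SLURM tools, using the job ID reported above:

[username@login01 ~]$ squeue -u $USER
[username@login01 ~]$ scontrol show job 289522

Standard output and error are written to the %N.%j.%a.out and %N.%j.%a.err files named in the script.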
The second example uses the CUDA build of NWChem on the gpu queue:
#!/bin/bash
#SBATCH -J nwchem-gpu
#SBATCH -N 1
#SBATCH --ntasks-per-node 24
#SBATCH -o %N.%j.%a.out
#SBATCH -e %N.%j.%a.err
#SBATCH -p gpu
#SBATCH --gres=gpu:1
#SBATCH --exclusive

module purge
module load nwchem-cuda/6.6
module load cuda/7.5.18
module load intel/mkl/64/11.3.2
module load intel/mpi/64/5.1.3.181
module list

nvidia-smi -a
mpirun --version

# calculating the number of processes
NP=$(( $SLURM_JOB_NUM_NODES * $SLURM_NTASKS_PER_NODE ))
echo $NP "processes"

NWCHEM=/trinity/clustervision/CentOS/7/apps/nwchem-6.6/cuda/bin/LINUX64/nwchem
mpirun -np $NP $NWCHEM py-c1-vdz.inp > py-c1-vdz-24c_0GPU.out
[username@login01 ~]$ sbatch nwchem-cuda.job
Submitted batch job 289522
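Note that the CUDA build accelerates only specific parts of NWChem (notably the TCE CCSD(T) triples correction), so the input deck must request GPU execution. A sketch of the relevant input fragment, assuming the tce cuda directive documented for NWChem 6.6; the file name is hypothetical and the exact syntax should be checked against the NWChem documentation:

# Hypothetical input fragment: request one CUDA device for the
# TCE CCSD(T) triples (cuda directive assumed from NWChem 6.6 docs).
cat >> my-job.inp <<'EOF'
tce
  ccsd(t)
  cuda 1
end
task tce energy
EOF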