Applications/Nwchem
Application Details
- Description: NWChem provides computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters.
- Versions: x86/6.6 and cuda/6.6
- Module names: nwchem-x86/6.6 and nwchem-cuda/6.6
- License: Open Source
Modules Available
- module add nwchem-x86/6.6
- module add nwchem-cuda/6.6
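
Before writing a batch script it can be useful to check the modules interactively on a login node. The sketch below only uses the standard module commands listed above and does not run a calculation; it is a sanity check, not part of the job scripts.

<pre>
# Quick interactive check on a login node (sketch only; do not run calculations here)
module avail nwchem              # list the NWChem modules installed on the cluster
module purge                     # start from a clean environment
module add nwchem-x86/6.6        # or: module add nwchem-cuda/6.6 for the GPU build
module list                      # confirm the module (and anything it pulls in) is loaded
</pre>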
Usage Examples
Batch Submission
The first example runs the CPU (x86) build of NWChem on the compute queue:
<pre>
#!/bin/bash
#SBATCH -J nwch-cpu
#SBATCH -N 100
#SBATCH --ntasks-per-node=28
#SBATCH -o %N.%j.%a.out
#SBATCH -e %N.%j.%a.err
#SBATCH -p compute
#SBATCH --exclusive
#SBATCH --mail-user=<your email address here>

echo $SLURM_JOB_NODELIST

module purge
module add intel/mkl/64/11.3.2
module add intel/mpi/64/5.1.3.181
module add intel/compiler/64/2016

export I_MPI_FABRICS=shm:tmi
export I_MPI_FALLBACK=no

module add nwchem-x86/6.6
export NWCHEM_ROOT=$HOME/NWCHEM

module list
mpirun --version

# calculating the number of processes
NP=$(( $SLURM_JOB_NUM_NODES * $SLURM_NTASKS_PER_NODE ))
echo $NP "processes"
echo $SLURM_NTASKS

NWCHEM=/trinity/clustervision/CentOS/7/apps/nwchem-6.6/x86/bin/LINUX64/nwchem

mpirun -genvall -np $NP $NWCHEM py-c1-vdz.inp > py-c1-vdz-2800_CV.out
</pre>
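The script above runs an input file called py-c1-vdz.inp, which is not reproduced on this page. For orientation only, the sketch below writes a minimal, hypothetical NWChem input deck (a water SCF/cc-pVDZ single point) that the same job script could be pointed at; the file name in the mpirun line would need to be changed to match.

<pre>
# Hypothetical example input deck (the actual py-c1-vdz.inp used above is not shown on this page)
cat > h2o-scf.inp << 'EOF'
start h2o_scf
title "Water single-point SCF energy, cc-pVDZ"
geometry units angstroms
  O   0.000   0.000   0.000
  H   0.757   0.586   0.000
  H  -0.757   0.586   0.000
end
basis
  * library cc-pVDZ
end
task scf energy
EOF
</pre>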
<pre>
[username@login01 ~]$ sbatch nwchem-x86.job
Submitted batch job 289522
</pre>
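After submission, the job can be followed with the usual Slurm commands. A short sketch, using the job ID reported above:

<pre>
# Monitoring the submitted job (job ID taken from the sbatch output above)
squeue -j 289522                                       # state of this job in the queue
squeue -u $USER                                        # all of your pending and running jobs
scontrol show job 289522                               # full details, including the allocated node list
sacct -j 289522 --format=JobID,State,Elapsed,NNodes    # accounting summary once the job has run
</pre>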
The second example runs the CUDA build of NWChem on the gpu queue:
<pre>
#!/bin/bash
#SBATCH -J nwchem-gpu
#SBATCH -N 1
#SBATCH --ntasks-per-node=24
#SBATCH -o %N.%j.%a.out
#SBATCH -e %N.%j.%a.err
#SBATCH -p gpu
#SBATCH --gres=gpu:1
#SBATCH --exclusive
#SBATCH --mail-user=<your email address here>

module purge
module add nwchem-cuda/6.6
module add cuda/7.5.18
module add intel/mkl/64/11.3.2
module add intel/mpi/64/5.1.3.181

module list
nvidia-smi -a
mpirun --version

# calculating the number of processes
NP=$(( $SLURM_JOB_NUM_NODES * $SLURM_NTASKS_PER_NODE ))
echo $NP "processes"

NWCHEM=/trinity/clustervision/CentOS/7/apps/nwchem-6.6/cuda/bin/LINUX64/nwchem

mpirun -np $NP $NWCHEM py-c1-vdz.inp > py-c1-vdz-24c_0GPU.out
</pre>
<pre>
[username@login01 ~]$ sbatch nwchem-cuda.job
Submitted batch job 1289522
</pre>
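Once either job finishes, the Slurm log files (%N.%j.%a.out and %N.%j.%a.err) and the redirected NWChem output appear in the submission directory. A minimal sketch for inspecting the CUDA run's output file named in the script above:

<pre>
# Inspecting the results after the job has completed (file names follow the scripts above)
ls -l *.out *.err                                 # Slurm stdout/stderr plus the NWChem output file
tail -n 40 py-c1-vdz-24c_0GPU.out                 # end of the NWChem output
grep -i "total" py-c1-vdz-24c_0GPU.out | tail -5  # NWChem typically reports final energies on "Total ..." lines
</pre>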