Applications/CP2K
Application Details
- Description : CP2K is a quantum chemistry and solid-state physics software package that can perform atomistic simulations of solid-state, liquid, molecular, periodic, material, crystal, and biological systems.
- Versions : 3.0 ; 6.1.0 ; 9.1.0
- Module names : cp2k/3.0 ; cp2k/6.1.0/gcc-7.3.0/intelmpi-2018 ; cp2k/6.1.0/gcc-7.3.0/openmpi-3.0.0 ; cp2k/9.1.0/gcc-8.5.0/openmpi-4.1.1
- License: Freely available under the GPL license
Modules Available
- module add cp2k/3.0
- module add cp2k/6.1.0/gcc-7.3.0/intelmpi-2018
- module add cp2k/6.1.0/gcc-7.3.0/openmpi-3.0.0
- module add cp2k/9.1.0/gcc-8.5.0/openmpi-4.1.1
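A quick check on a login node, assuming a standard environment-modules setup and the cp2k/9.1.0 module above, is to load the module and confirm the parallel binary is on the PATH (a minimal sketch):

module purge
module add cp2k/9.1.0/gcc-8.5.0/openmpi-4.1.1
module list          # show the currently loaded modules
which cp2k.psmp      # path to the MPI/OpenMP (psmp) CP2K executable
cp2k.psmp --version  # print the CP2K version banner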
Usage Examples
Batch Submission
OpenMPI Submission Script
The following is a sample submission script for an OpenMPI CP2K task running across 3 nodes with the input file cp2ktask.inp (change this to reflect your particular task):
#!/bin/bash
#SBATCH -J cp2k_openmpi
#SBATCH -N 3
#SBATCH --ntasks-per-node 27
#SBATCH -o cp2k-%j.out
#SBATCH -e cp2k-%j.err
#SBATCH -p compute
#SBATCH --exclusive

module purge
module add cp2k/9.1.0/gcc-8.5.0/openmpi-4.1.1

NP=$(( $SLURM_JOB_NUM_NODES * $SLURM_NTASKS_PER_NODE ))
echo $NP "processes"

INPUTFILE=cp2ktask.inp
OUTPUTFILE=cp2ktask.out

export OMP_NUM_THREADS=1

mpirun -np $NP cp2k.psmp $INPUTFILE > $SLURM_SUBMIT_DIR/$OUTPUTFILE
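Assuming the script above is saved as cp2k-openmpi.job (a placeholder name), it can be submitted and followed like so:

[username@login01 ~]$ sbatch cp2k-openmpi.job
Submitted batch job <jobid>
[username@login01 ~]$ squeue -u $USER       # check the job's state in the queue
[username@login01 ~]$ tail -f cp2ktask.out  # follow the CP2K output once the job is running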
Compute Queue
#!/bin/bash
#SBATCH -J cp2k-cpu
#SBATCH -N 120
#SBATCH --ntasks-per-node 14
#SBATCH -o %N.%j.%a.out
#SBATCH -e %N.%j.%a.err
#SBATCH -p compute
#SBATCH --exclusive
#SBATCH --mail-user=<your email address here>

echo $SLURM_JOB_NODELIST

module purge
module add intel/mkl/64/11.3.2
module add intel/mpi/64/5.1.3.181
module add intel/compiler/64/2016
module add cp2k/3.0

export I_MPI_FABRICS=shm:tmi
export I_MPI_FALLBACK=no

module list
mpirun --version

# calculating the number of processes
NP=$(( $SLURM_JOB_NUM_NODES * $SLURM_NTASKS_PER_NODE ))
echo $NP "processes"

CP2K=/home/user/cp2k/cp2k/exe/Linux-x86-64-intel-host/cp2k.psmp

export OMP_NUM_THREADS=2

mpirun -genvall -np $NP env PSM_TRACEMASK=0x101 $CP2K H2O-64-RI-MP2-TZ.inp > H2O-64-RI-MP2-TZ-omp2.out
[username@login01 ~]$ sbatch cp2k-test.job
Submitted batch job 189522
GPU Queue
#!/bin/bash
#SBATCH -J cp2k-gpu
#SBATCH -N 1
#SBATCH --ntasks-per-node 24
#SBATCH -o %N.%j.%a.out
#SBATCH -e %N.%j.%a.err
#SBATCH -p gpu
#SBATCH --gres=gpu:1
#SBATCH --exclusive

module purge
module add cp2k/3.0
module add cuda/7.5.18
module add intel/mkl/64/11.3.2
module add intel/mpi/64/5.1.3.181

module list
nvidia-smi -a
mpirun --version

# calculating the number of processes
NP=$(( $SLURM_JOB_NUM_NODES * $SLURM_NTASKS_PER_NODE ))
echo $NP "processes"

CP2K=/trinity/clustervision/CentOS/7/apps/cp2k/build-v1intel-cp2k-20151010-120408/cp2k/exe/Linux-x86-64-cuda/cp2k.sopt

$CP2K H2O-64.inp > H2O-64.out
[username@login01 ~]$ sbatch cp2k-test-gpu.job
Submitted batch job 189523
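With the %N.%j.%a pattern used in the scripts above, each job writes its log to a file named after the node, job ID, and array index. Assuming job 189523 from the example above, progress could be followed along these lines (the node name in the filename is hypothetical):

[username@login01 ~]$ squeue -j 189523            # is the job still queued or running?
[username@login01 ~]$ ls -l *.189523.*            # locate the .out/.err files for this job
[username@login01 ~]$ tail -f gpu01.189523.*.out  # follow the CP2K log as it is written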