Applications/Mvapich2
Application Details
- Description: The MVAPICH2 software, based on the MPI 3.1 standard, delivers performance, scalability and fault tolerance for high-end computing systems and servers using Omni-Path (Viper) networking technologies.
- Version: 2.2
- Modules: mvapich2/2.2/gcc-5.2.0, mvapich2/2.2/gcc-6.3.0 and mvapich2/2.2/intel-2017
- Licence: Open source
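The installed builds listed above can be confirmed from the module system before loading one, for example:
[username@login01 ~]$ module avail mvapich2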
Usage Examples
Message Passing Interface (MPI) is a standardized and portable message-passing system designed by a group of researchers from academia and industry to function on a wide variety of parallel computing architectures. The standard defines the syntax and semantics of a core of library routines useful to a wide range of users writing portable message-passing programs in C, C++, and Fortran. The MVAPICH2 software family is ABI compatible with the version of MPICH it is based on.
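Programs written against the MPI standard are built with the compiler wrappers that MVAPICH2 provides (mpicc for C, mpicxx for C++, mpif90 for Fortran). A minimal sketch, assuming a source file hello.c that calls MPI_Init and MPI_Finalize (the file name and its contents are illustrative, not part of this page); the resulting binary can then be launched as in the examples below:
[username@login01 ~]$ module load gcc/5.2.0
[username@login01 ~]$ module load mvapich2/2.2/gcc-5.2.0
[username@login01 ~]$ mpicc -o hello hello.c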
Interactive Mode
This example runs on a reserved node (e.g. c001):
[username@c001 ~]$ module load gcc/5.2.0
[username@c001 ~]$ module load mvapich2/2.2/gcc-5.2.0
[username@c001 ~]$ srun -n16 --mpi=pmi2 mvapichDEMO
This can also be run with a host file; here the file is called machinefile (an example launch command is shown after the file format below):
[username@c001 ~]$ module load mvapich2/2.2/gcc-5.2.0
The 'machinefile' is of the form:
c001
c002:2
c003:4
c004:1
- The ':2', ':4' and ':1' suffixes specify the number of processes to run on each node; a host with no suffix gets one process.
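A possible invocation using this machinefile is sketched below with the Hydra mpiexec that ships with MVAPICH2 (mpirun_rsh is an alternative launcher, and the cluster's preferred method may differ; the binary name mvapichDEMO is reused from the interactive example, and -n 8 matches the eight slots declared in the file):
[username@c001 ~]$ mpiexec -f machinefile -n 8 ./mvapichDEMO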
Non-interactive job
This example runs as a batch job under the SLURM scheduler:
#!/bin/bash
#SBATCH -J MPI-testXX
# 4 nodes x 4 tasks per node gives the 16 MPI ranks requested by srun below
#SBATCH -N 4
#SBATCH --ntasks-per-node 4
#SBATCH -D /home/user1/CODE_SAMPLES/MPICH
#SBATCH -o %N.%j.%a.out
#SBATCH -e %N.%j.%a.err
#SBATCH -p compute
#SBATCH --exclusive

echo $SLURM_JOB_NODELIST

module purge
module load gcc/5.2.0
module load mvapich2/2.2/gcc-5.2.0

# Intel MPI variables; MVAPICH2 reads MV2_* variables instead, so these have no effect here
export I_MPI_DEBUG=5
export I_MPI_FABRICS=shm:tmi
export I_MPI_FALLBACK=no

srun -n16 --mpi=pmi2 /home/user1/CODE_SAMPLES/OPENMPI/scatteravg 100
The job script is then submitted to SLURM:
[username@login01 ~]$ sbatch mpidemo.job
Submitted batch job 889552
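Once submitted, the job can be monitored with standard SLURM tools (the job ID below is taken from the example output above, and 'username' is a placeholder):
[username@login01 ~]$ squeue -u username
[username@login01 ~]$ sacct -j 889552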
Further Information
- http://mvapich.cse.ohio-state.edu/
- https://en.wikipedia.org/wiki/MVAPICH
- https://en.wikipedia.org/wiki/Message_Passing_Interface
- https://slurm.schedmd.com/mpi_guide.html