
Application Details

  • Description: The MVAPICH2 software, based on the MPI 3.1 standard, delivers performance, scalability and fault tolerance for high-end computing systems and servers using the Omni-Path networking technology (as used on Viper).
  • Version: 2.2
  • Modules: mvapich2/2.2/gcc-5.2.0, mvapich2/2.2/gcc-6.3.0 and mvapich2/2.2/intel-2017
  • Licence: Open source

Usage Examples

Message Passing Interface (MPI) is a standardized and portable message-passing system designed by a group of researchers from academia and industry to function on a wide variety of parallel computing architectures. The standard defines the syntax and semantics of a core of library routines useful to a wide range of users writing portable message-passing programs in C, C++, and Fortran. The MVAPICH2 software family is ABI compatible with the version of MPICH it is based on.
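As a minimal illustration of the MPI model described above (a generic example, not a program shipped with MVAPICH2), the following C program has every process report its rank:

```c
/* hello_mpi.c - minimal MPI example (generic illustration,
 * not an MVAPICH2-specific program). Each process prints its
 * rank and the total number of processes in the job. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id   */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```

After loading the compiler and MVAPICH2 modules, such a program is compiled with the MPI wrapper compiler, e.g. mpicc hello_mpi.c -o hello_mpi, and launched with srun as in the examples below.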

Interactive Mode

This example runs on a reserved compute node (e.g. c001):

[username@c001 ~]$ module add gcc/5.2.0
[username@c001 ~]$ module add mvapich2/2.2/gcc-5.2.0
[username@c001 ~]$ srun -n16 --mpi=pmi2 mvapichDEMO

The program can also be run with a host file; in this example the file is called machinefile:

[username@c001 ~]$ module add mvapich2/2.2/gcc-5.2.0

The 'machinefile' lists one node per line, in the form node-name:count:

  • The ':2', ':4', ':1' suffixes set the number of processes to run on the corresponding node.
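As a concrete sketch (the node names c001–c003 here are hypothetical), a machinefile placing seven processes across three nodes, launched with MVAPICH2's mpirun_rsh launcher:

```
c001:2
c002:4
c003:1
```

[username@c001 ~]$ mpirun_rsh -np 7 -hostfile machinefile ./mvapichDEMO

The -np count should equal the sum of the per-node process counts in the machinefile.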

Non-interactive Job

This example uses a batch script submitted to the SLURM scheduler:

#!/bin/bash
#SBATCH --ntasks-per-node 16
#SBATCH -o %N.%j.%a.out
#SBATCH -e %N.%j.%a.err
#SBATCH -p compute
#SBATCH --exclusive


module purge
module add gcc/5.2.0
module add mvapich2/2.2/gcc-5.2.0

# Note: I_MPI_* variables configure Intel MPI, not MVAPICH2, and have no
# effect here. MVAPICH2 is tuned with MV2_-prefixed variables, e.g.:
export MV2_SHOW_ENV_INFO=1

srun -n16 --mpi=pmi2  /home/user1/CODE_SAMPLES/OPENMPI/scatteravg 100
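The scatteravg binary above is a local sample whose source is not shown here. As a hedged sketch of what such a program might look like (assuming it scatters N values across the ranks and averages them, with N taken from the command line), using the standard MPI_Scatter and MPI_Reduce calls:

```c
/* Hypothetical sketch of a scatter-and-average program like the
 * scatteravg sample above (the real source is not shown on this page).
 * Rank 0 fills an array, each rank receives an equal share, computes a
 * local average, and rank 0 reduces those into a global average. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Total element count from the command line (e.g. "scatteravg 100"). */
    int n = (argc > 1) ? atoi(argv[1]) : 100;
    int chunk = n / size;            /* assumes n divides evenly */

    double *data = NULL;
    if (rank == 0) {                 /* root fills the full array */
        data = malloc(n * sizeof(double));
        for (int i = 0; i < n; i++)
            data[i] = (double)rand() / RAND_MAX;
    }

    /* Distribute an equal chunk of the array to every rank. */
    double *local = malloc(chunk * sizeof(double));
    MPI_Scatter(data, chunk, MPI_DOUBLE, local, chunk, MPI_DOUBLE,
                0, MPI_COMM_WORLD);

    double local_avg = 0.0;
    for (int i = 0; i < chunk; i++)
        local_avg += local[i];
    local_avg /= chunk;

    /* Sum the per-rank averages on rank 0, then divide by rank count. */
    double global_avg;
    MPI_Reduce(&local_avg, &global_avg, 1, MPI_DOUBLE, MPI_SUM,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Average of %d values: %f\n", n, global_avg / size);

    free(local);
    free(data);
    MPI_Finalize();
    return 0;
}
```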

Submit the script to SLURM:

[username@login01 ~]$ sbatch mpidemo.job
Submitted batch job 889552

Further Information
