Applications/OpenMPI

Application Details

  • Description: The Open MPI Project is an open source Message Passing Interface implementation that is developed and maintained by a consortium of academic, research, and industry partners. Open MPI is therefore able to combine the expertise, technologies, and resources from all across the High Performance Computing community in order to build the best MPI library available.
  • Version: 1.8.8 (gcc), 1.10.5 (gcc and intel), 2.0.2 (gcc and intel), 3.0.0 (gcc and intel)
  • Modules: openmpi/1.10.5/gcc-5.2.0, openmpi/1.10.5/gcc-6.3.0, openmpi/1.10.5/intel-2017, openmpi/2.0.2/gcc-5.2.0, openmpi/2.0.2/gcc-6.3.0, openmpi/gcc/1.10.2, openmpi/gcc/1.10.5, openmpi/intel/1.10.2, openmpi/intel/1.8.8, openmpi/intel/2.0.1, openmpi/3.0.0/gcc-5.2.0, openmpi/3.0.0/gcc-6.3.0 and openmpi/3.0.0/gcc-8.2.0
  • Licence: Open source (3-clause BSD licence); the Open MPI project is an associated project of the Software in the Public Interest non-profit organization.

Usage Examples

Message Passing Interface (MPI) is a standardized and portable message-passing system designed by a group of researchers from academia and industry to function on a wide variety of parallel computing architectures. The standard defines the syntax and semantics of a core of library routines useful to a wide range of users writing portable message-passing programs in C, C++, and Fortran. There are several well-tested and efficient implementations of MPI, many of which are open-source or in the public domain. These fostered the development of a parallel software industry, and encouraged development of portable and scalable large-scale parallel applications.
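
These routines are called directly from user code. Purely as an illustrative sketch (this file is not one of the cluster's sample codes), a minimal MPI program in C initialises the library, queries each process's rank and the size of the communicator, prints a message from every rank and shuts MPI down again; it can be compiled with the mpicc wrapper provided by any of the openmpi modules listed above and launched with mpirun:

/* hello_mpi.c - minimal illustrative MPI program (not a cluster sample code).
 * Compile:  mpicc hello_mpi.c -o hello_mpi
 * Run:      mpirun -np 4 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime                */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* rank of this process (0..size-1)     */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes in the job */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut down the MPI runtime            */
    return 0;
}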

Interactive Mode

This example runs in an interactive session, obtained with the interactive command.


[username@login01 ~]$ interactive
salloc: Granted job allocation 629854
Job ID 629854 connecting to c001, please wait...

[username@c001 ~]$ module add openmpi/gcc/8.2.0
[username@c001 ~]$ mpirun -np 20 mpiTEST
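
Here mpiTEST stands for a user-supplied MPI executable, for example the hello-world sketch above; assuming a source file hello_mpi.c, it could be built in the same session with the mpicc wrapper from the loaded module (mpicc hello_mpi.c -o mpiTEST) before being launched with mpirun.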

Non-interactive Job

This example runs as a batch job through the SLURM scheduler, using the following job script (saved here as mpidemo.job):


#!/bin/bash
# Request 10 nodes with 28 MPI tasks per node (280 ranks) on the compute
# partition with exclusive node access; -D sets the working directory and
# -o/-e set the stdout/stderr file name patterns
#SBATCH -J MPI-testXX
#SBATCH -N 10
#SBATCH --ntasks-per-node 28
#SBATCH -D /home/user1/CODE_SAMPLES/OPENMPI
#SBATCH -o %N.%j.%a.out
#SBATCH -e %N.%j.%a.err
#SBATCH -p compute
#SBATCH --exclusive

# Print the list of nodes allocated to this job
echo $SLURM_JOB_NODELIST

# Start from a clean module environment and load the compiler plus an Open MPI build
module purge
module add gcc/8.2.0
module add openmpi/gcc/8.2.0

# Note: the I_MPI_* variables below are Intel MPI settings; they are ignored by
# Open MPI and can be dropped from a pure Open MPI job script
export I_MPI_DEBUG=5
export I_MPI_FABRICS=shm:tmi
export I_MPI_FALLBACK=no

# mpirun inherits the node list and slot count from the SLURM allocation; the MCA
# options select the cm PML with the PSM2 MTL (Omni-Path interconnect)
mpirun -mca pml cm -mca mtl psm2 /home/user1/CODE_SAMPLES/OPENMPI/scatteravg 100

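The scatteravg binary launched above is a user code from the CODE_SAMPLES directory and its source is not shown on this page. Purely as an illustrative sketch of what such a scatter-and-average program might look like (assuming, hypothetically, that the trailing 100 is the number of values handled per rank), rank 0 could distribute a block of data with MPI_Scatter and combine the partial sums with MPI_Reduce:

/* scatter_avg.c - illustrative sketch only; the real scatteravg source is not
 * reproduced here. Rank 0 fills an array, scatters equal chunks to all ranks,
 * each rank sums its chunk, and MPI_Reduce combines the sums on rank 0.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int per_rank = (argc > 1) ? atoi(argv[1]) : 100;   /* values per rank (assumed meaning of the argument) */
    int total = per_rank * size;

    double *data = NULL;
    if (rank == 0) {                                   /* only the root rank holds the full data set */
        data = malloc(total * sizeof(double));
        for (int i = 0; i < total; i++)
            data[i] = (double)rand() / RAND_MAX;
    }

    /* distribute per_rank values to every rank (including the root itself) */
    double *chunk = malloc(per_rank * sizeof(double));
    MPI_Scatter(data, per_rank, MPI_DOUBLE, chunk, per_rank, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    double local_sum = 0.0;
    for (int i = 0; i < per_rank; i++)
        local_sum += chunk[i];

    /* combine the partial sums on rank 0 and report the overall average */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Average of %d values: %f\n", total, global_sum / total);

    free(chunk);
    free(data);
    MPI_Finalize();
    return 0;
}
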
The job script is then submitted to SLURM:


[username@login01 ~]$ sbatch mpidemo.job
Submitted batch job 1889552
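
While the job is queued or running it can be monitored with squeue -u <username>; once it starts, the <node>.<jobid>.<arrayindex>.out and .err files named by the -o and -e directives appear in the working directory set by -D.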


Further Information

  • Open MPI project website and documentation: https://www.open-mpi.org/
