Applications/OpenMPI


Application Details

  • Description: The Open MPI Project is an open source Message Passing Interface implementation that is developed and maintained by a consortium of academic, research, and industry partners. Open MPI is therefore able to combine the expertise, technologies, and resources from all across the High Performance Computing community in order to build the best MPI library available.
  • Version: 1.8.8 (gcc), 1.10.5 (gcc and intel), 2.0.2 (gcc and intel)
  • Modules: openmpi/1.10.5/gcc-5.2.0, openmpi/1.10.5/gcc-6.3.0, openmpi/1.10.5/intel-2017, openmpi/2.0.2/gcc-5.2.0, openmpi/2.0.2/gcc-6.3.0, openmpi/gcc/1.10.2, openmpi/gcc/1.10.5, openmpi/intel/1.10.2, openmpi/intel/1.8.8, and openmpi/intel/2.0.1
  • Licence: Open source (BSD-style licence); the Open MPI project is supported by the Software in the Public Interest non-profit organization.

Usage Examples

Message Passing Interface (MPI) is a standardized and portable message-passing system designed by a group of researchers from academia and industry to function on a wide variety of parallel computing architectures. The standard defines the syntax and semantics of a core set of library routines useful to a wide range of users writing portable message-passing programs in C, C++, and Fortran. There are several well-tested and efficient implementations of MPI, many of which are open source or in the public domain. These implementations fostered the development of a parallel software industry and encouraged the development of portable, scalable, large-scale parallel applications.
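
As an illustration of the kind of program these examples launch, the sketch below is a minimal MPI "hello world" in C. It is an illustrative assumption rather than one of the cluster's own code samples (the file name hello_mpi.c is made up); each rank reports its rank, the size of MPI_COMM_WORLD, and the node it runs on.


/* hello_mpi.c - minimal illustrative MPI program (assumed example) */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, name_len;
    char node_name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                    /* start the MPI runtime     */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);      /* rank of this process      */
    MPI_Comm_size(MPI_COMM_WORLD, &size);      /* total number of ranks     */
    MPI_Get_processor_name(node_name, &name_len);

    printf("Hello from rank %d of %d on %s\n", rank, size, node_name);

    MPI_Finalize();                            /* shut down the MPI runtime */
    return 0;
}

Executables of this kind (such as mpiTEST and scatteravg used below) are what mpirun distributes across the allocated cores.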

Interactive Mode

This example runs on a reserved node (e.g. c001)


[username@c001 ~]$ module load openmpi/gcc/1.10.5
[username@c001 ~]$ mpirun -np 20 mpiTEST
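
Here mpiTEST stands for an MPI executable the user has already built. Starting from a source file such as the hello_mpi.c sketch above, it could first be compiled with the mpicc wrapper compiler supplied by the loaded module:


[username@c001 ~]$ mpicc -o mpiTEST hello_mpi.c

The -np 20 option to mpirun then launches 20 MPI ranks of the executable on the reserved node.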

Non-interactive Job

This example runs as a batch job submitted through the SLURM scheduler


#!/bin/bash
#SBATCH -J MPI-testXX
#SBATCH -N 10
#SBATCH --ntasks-per-node 28
#SBATCH -D /home/user1/CODE_SAMPLES/OPENMPI
#SBATCH -o %N.%j.%a.out
#SBATCH -e %N.%j.%a.err
#SBATCH -p compute
#SBATCH --exclusive

# Record the nodes allocated to this job
echo $SLURM_JOB_NODELIST

module purge
module load gcc/4.9.3
module load openmpi/gcc/1.10.2

# Intel MPI settings; these are ignored by Open MPI and only take effect
# if the job is switched to an Intel MPI module
export I_MPI_DEBUG=5
export I_MPI_FABRICS=shm:tmi
export I_MPI_FALLBACK=no

# Launch the program; with a SLURM-aware Open MPI build, mpirun inherits the
# allocation (10 nodes x 28 tasks per node = 280 ranks) without an explicit -np.
# The -mca options select the cm PML with the PSM2 MTL (Omni-Path interconnect).
mpirun -mca pml cm -mca mtl psm2 /home/user1/CODE_SAMPLES/OPENMPI/scatteravg 100

Then submit the job script to SLURM


[username@login01 ~]$ sbatch mpidemo.job
Submitted batch job 889552
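
The job's progress can then be followed with the standard SLURM tools, for example using the job ID returned above:


[username@login01 ~]$ squeue -j 889552
[username@login01 ~]$ sacct -j 889552

Output and error files named by the #SBATCH -o and -e lines are written under the directory given by #SBATCH -D.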


Further Information

  • Open MPI project web site: https://www.open-mpi.org/