Applications/Mpich

Revision as of 08:22, 18 April 2017

Application Details

  • Description: MPICH is a freely available, portable implementation of MPI, the Standard for message-passing libraries. It implements all versions of the MPI standard including MPI-1, MPI-2, MPI-2.1, MPI-2.2, and MPI-3.
  • Version: 3.2 (compiled with gcc)
  • Modules: mpich/3.2/gcc-5.2.0
  • Licence: Open source (permissive, BSD-like MPICH licence)

Usage Examples

Message Passing Interface (MPI) is a standardized and portable message-passing system designed by a group of researchers from academia and industry to function on a wide variety of parallel computing architectures. The standard defines the syntax and semantics of a core of library routines useful to a wide range of users writing portable message-passing programs in C, C++, and Fortran.
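As a sketch of what such a program looks like, here is a minimal MPI "hello world" in C (illustrative only; compile with mpicc once the module is loaded):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id within the communicator */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut the MPI runtime down */
    return 0;
}
```

Compiled with mpicc (e.g. `mpicc hello.c -o mpiTEST`) and launched with mpiexec, each rank prints its own line.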

Interactive Mode

The MPI Standard suggests mpiexec as a portable way to run MPI programs. MPICH implements mpiexec and provides some extensions to it. This example runs on a reserved node (e.g. c001):

[username@c001 ~]$ module load mpich/3.2/gcc-5.2.0
[username@c001 ~]$ mpiexec -n 28 mpiTEST

This can also be run with a host file; here the file is called machinefile:

[username@c001 ~]$ module load mpich/3.2/gcc-5.2.0
[username@c001 ~]$ mpiexec -f machinefile -n 28 mpiTEST

The 'machinefile' is of the form:

c001
c002:2
c003:4 
c004:1
  • The ':2', ':4' and ':1' suffixes specify the number of processes to run on each node; a host listed without a suffix runs one process.
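The slot counts can be checked with a quick sketch: the awk one-liner below (a convenience script, not part of MPICH) sums the per-host process counts, treating a bare hostname as one slot:

```shell
# Recreate the machinefile from the example above
cat > machinefile <<'EOF'
c001
c002:2
c003:4
c004:1
EOF

# Sum the slot counts: a host without ':N' contributes 1 process
total=$(awk -F: '{ sum += ($2 == "" ? 1 : $2) } END { print sum }' machinefile)
echo "$total"   # prints 8, i.e. 1+2+4+1
```

Here the machinefile provides 8 slots in total across the four nodes.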


Non-interactive job

This example runs under the SLURM scheduler:


#!/bin/bash
#SBATCH -J MPI-testXX
#SBATCH -N 10
#SBATCH --ntasks-per-node 28
#SBATCH -D /home/user1/CODE_SAMPLES/OPENMPI
#SBATCH -o %N.%j.%a.out
#SBATCH -e %N.%j.%a.err
#SBATCH -p compute
#SBATCH --exclusive

echo $SLURM_JOB_NODELIST

module purge
module load mpich/3.2/gcc-5.2.0

mpiexec -n $SLURM_NTASKS /home/user1/CODE_SAMPLES/OPENMPI/scatteravg 100

Submit the script to SLURM:


[username@login01 ~]$ sbatch mpidemo.job
Submitted batch job 889552
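Once submitted, the job can be monitored with the standard SLURM commands (the job ID below is the one from the example output; sacct requires SLURM accounting to be enabled on the cluster):

```shell
# Show the job while it is queued or running
squeue -j 889552

# After it finishes, query the accounting database for its final state
sacct -j 889552 --format=JobID,State,Elapsed
```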


Further Information

  • OpenMPI
  • http://www.mpich.org/
  • https://en.wikipedia.org/wiki/Message_Passing_Interface
  • https://www.open-mpi.org/