Applications/Mpich

Latest revision as of 10:51, 16 November 2022

Application Details

  • Description: MPICH is a freely available, portable implementation of MPI, the Standard for message-passing libraries. It implements all versions of the MPI standard including MPI-1, MPI-2, MPI-2.1, MPI-2.2, and MPI-3.
  • Version: 3.2 (compiled with gcc or intel)
  • Modules: mpich/3.2/gcc-5.2.0 and mpich/intel/3.2.0
  • Licence: Open source (hosted on GitHub)

Usage Examples

Message Passing Interface (MPI) is a standardized and portable message-passing system designed by a group of researchers from academia and industry to function on a wide variety of parallel computing architectures. The standard defines the syntax and semantics of a core of library routines useful to a wide range of users writing portable message-passing programs in C, C++, and Fortran.

Interactive Mode

The MPI Standard describes mpiexec as a suggested way to run MPI programs. MPICH implements the mpiexec standard and also provides some extensions. This example runs on a reserved node (e.g. c001):

[username@c001 ~]$ module add mpich/3.2/gcc-5.2.0
[username@c001 ~]$ mpiexec -n 28 mpiTEST

This can also be run with a host file; here the file is called machinefile:

[username@c001 ~]$ module add mpich/3.2/gcc-5.2.0
[username@c001 ~]$ mpiexec -f machinefile -n 28 mpiTEST

The 'machinefile' is of the form:

c001
c002:2
c003:4 
c004:1
  • The ':2', ':4', ':1' suffixes specify the number of processes to run on each node; a host listed without a suffix runs one process.
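The slot counts above can be sketched in code. This is a minimal, illustrative parser for the host[:slots] format (mpiexec does this parsing itself; the helper name is hypothetical):

```python
# Sketch: parse a machinefile of "host[:slots]" lines, where a missing
# ":slots" suffix defaults to 1 process on that host.
def parse_machinefile(text: str) -> dict[str, int]:
    slots = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        host, _, count = line.partition(":")
        slots[host] = int(count) if count else 1
    return slots


machinefile = """\
c001
c002:2
c003:4
c004:1
"""
slots = parse_machinefile(machinefile)
print(slots)                 # {'c001': 1, 'c002': 2, 'c003': 4, 'c004': 1}
print(sum(slots.values()))   # 8
```

With 8 slots in total, a run such as mpiexec -f machinefile -n 8 would fill every slot; asking for more ranks than slots makes mpiexec cycle through the hosts again.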


Non-interactive job

This example runs under the SLURM scheduler:


#!/bin/bash
#SBATCH -J MPI-testXX
#SBATCH -N 1
#SBATCH --ntasks-per-node 4
#SBATCH -D /home/user1/CODE_SAMPLES/MPICH
#SBATCH -o %N.%j.%a.out
#SBATCH -e %N.%j.%a.err
#SBATCH -p compute
#SBATCH --exclusive

echo $SLURM_JOB_NODELIST

module purge
module add gcc/4.9.3
module add mpich/3.2/gcc-5.2.0

export I_MPI_DEBUG=5
export I_MPI_FABRICS=shm:tmi
export I_MPI_FALLBACK=no

srun -n4 --mpi=pmi2 /home/user1/CODE_SAMPLES/MPICH/scatteravg 100
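Note how the resource requests line up: the task count given to srun (-n4) equals nodes (-N 1) times tasks per node (--ntasks-per-node 4). A trivial sketch of that arithmetic:

```python
# Sketch: srun's -n task count should match the allocation,
# i.e. nodes (-N) x tasks per node (--ntasks-per-node).
nodes = 1            # SBATCH -N 1
tasks_per_node = 4   # SBATCH --ntasks-per-node 4
srun_tasks = 4       # srun -n4

assert srun_tasks == nodes * tasks_per_node
print(f"launching {srun_tasks} MPI ranks across {nodes} node(s)")
```

Requesting more tasks with srun than the allocation provides would cause the job step to fail or block, so keeping these numbers consistent is worth checking before submitting.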

Submitting it to SLURM:


[username@login01 ~]$ sbatch mpidemo.job
Submitted batch job 889552


Further Information

  • http://www.mpich.org/
  • https://en.wikipedia.org/wiki/Message_Passing_Interface
  • https://slurm.schedmd.com/mpi_guide.html