__TOC__
==Application Details==

* Description: The Open MPI Project is an open-source Message Passing Interface implementation that is developed and maintained by a consortium of academic, research, and industry partners. Open MPI is, therefore, able to combine the expertise, technologies, and resources from all across the High-Performance Computing community in order to build the best MPI library available.  
* Version: 1.8.8 (gcc), 1.10.5 (gcc and intel), 2.0.2 (gcc and intel) and 3.0.0 (gcc and intel)
* Modules: openmpi/1.10.5/gcc-5.2.0, openmpi/1.10.5/gcc-6.3.0, openmpi/1.10.5/intel-2017, openmpi/2.0.2/gcc-5.2.0, openmpi/2.0.2/gcc-6.3.0, openmpi/gcc/1.10.2, openmpi/gcc/1.10.5, openmpi/intel/1.10.2, openmpi/intel/1.8.8, openmpi/intel/2.0.1, openmpi/3.0.0/gcc-5.2.0, openmpi/3.0.0/gcc-6.3.0 and openmpi/3.0.0/gcc-8.2.0
* Licence: Open source (BSD-style); the project is affiliated with the Software in the Public Interest non-profit organization.
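
The builds above are loaded through the environment module system. As a quick sketch using the standard module commands (the exact module names to load are those listed above):

<pre style="background-color: black; color: white; border: 2px solid black; font-family: monospace, sans-serif;">
[username@login01 ~]$ module avail openmpi
[username@login01 ~]$ module add openmpi/3.0.0/gcc-8.2.0
</pre>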
  
 
==Usage Examples==

Message Passing Interface ('''MPI''') is a standardized and portable message-passing system designed by a group of researchers from academia and industry to function on a wide variety of parallel computing architectures. The standard defines the syntax and semantics of a core of library routines useful to a wide range of users writing portable message-passing programs in C, C++, and Fortran. There are several well-tested and efficient implementations of MPI, many of which are open source or in the public domain. These implementations fostered the growth of a parallel software industry and encouraged the development of portable, scalable large-scale parallel applications.
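
As a minimal sketch of the programming model (illustrative only; the file name mpi_hello.c is an assumption, not a site-provided example), each MPI rank below reports its rank and the total number of ranks:

<pre style="font-family: monospace, sans-serif;">
/* mpi_hello.c - minimal MPI example: each rank reports itself. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);                /* start the MPI runtime   */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank     */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of ranks   */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                        /* shut down cleanly       */
    return 0;
}
</pre>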
===Interactive Mode===
This example runs in an interactive session on a compute node.
  
 
<pre style="background-color: black; color: white; border: 2px solid black; font-family: monospace, sans-serif;">
[username@login01 ~]$ interactive
salloc: Granted job allocation 629854
Job ID 629854 connecting to c001, please wait...

[username@c001 ~]$ module add openmpi/gcc/8.2.0
[username@c001 ~]$ mpirun -np 20 mpiTEST
</pre>
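
Here mpiTEST stands for a user-supplied MPI binary. Assuming the mpi_hello.c sketch above, it could be built with Open MPI's mpicc compiler wrapper and run across 20 ranks in the same session:

<pre style="background-color: black; color: white; border: 2px solid black; font-family: monospace, sans-serif;">
[username@c001 ~]$ mpicc -O2 -o mpiTEST mpi_hello.c
[username@c001 ~]$ mpirun -np 20 mpiTEST
</pre>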
  
===Batch Job===

This example runs through the [[Quickstart/Slurm|SLURM]] scheduler as a batch job.

<pre style="font-family: monospace, sans-serif;">
#!/bin/bash
#SBATCH -J MPI-testXX                         # job name
#SBATCH -N 10                                 # 10 nodes
#SBATCH --ntasks-per-node 28                  # 28 MPI ranks per node
#SBATCH -D /home/user1/CODE_SAMPLES/OPENMPI   # working directory
#SBATCH -o %N.%j.%a.out                       # stdout: node.jobid.arrayindex
#SBATCH -e %N.%j.%a.err                       # stderr: node.jobid.arrayindex
#SBATCH -p compute                            # partition
#SBATCH --exclusive                           # no node sharing

echo $SLURM_JOB_NODELIST

module purge
module add gcc/8.2.0
module add openmpi/gcc/8.2.0

# The I_MPI_* variables are Intel MPI settings; Open MPI ignores them.
export I_MPI_DEBUG=5
export I_MPI_FABRICS=shm:tmi
export I_MPI_FALLBACK=no

# Run via the cm PML with the psm2 MTL (Omni-Path interconnect).
mpirun -mca pml cm -mca mtl psm2 /home/user1/CODE_SAMPLES/OPENMPI/scatteravg 100
</pre>
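
The scatteravg binary is a user code sample whose source is not shown on this page. Purely as a hypothetical sketch of a program with that shape (the names and the exact behaviour are assumptions), each rank could receive an equal chunk of an array with MPI_Scatter, average it, and combine the per-rank averages with MPI_Reduce:

<pre style="font-family: monospace, sans-serif;">
/* Hypothetical sketch (not the actual scatteravg source): rank 0
   scatters n values to every rank, each rank averages its chunk,
   and the per-rank averages are reduced to a global average.     */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int n = (argc > 1) ? atoi(argv[1]) : 100;   /* values per rank */

    double *data = NULL;
    if (rank == 0) {                            /* root fills the full array */
        data = malloc((size_t)n * size * sizeof(double));
        for (int i = 0; i < n * size; i++)
            data[i] = (double)i;
    }

    /* Each rank receives its own n-element chunk. */
    double *chunk = malloc((size_t)n * sizeof(double));
    MPI_Scatter(data, n, MPI_DOUBLE, chunk, n, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    double local = 0.0;
    for (int i = 0; i < n; i++)
        local += chunk[i];
    local /= n;                                 /* this rank's average */

    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("global average = %f\n", total / size);

    free(chunk);
    free(data);
    MPI_Finalize();
    return 0;
}
</pre>

With the argument 100 from the script, each of the 280 ranks (10 nodes x 28 tasks) would average a 100-element chunk.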
And submitting it to [[Quickstart/Slurm|SLURM]]:

<pre style="background-color: black; color: white; border: 2px solid black; font-family: monospace, sans-serif;">
[username@login01 ~]$ sbatch mpidemo.job
Submitted batch job 1889552
</pre>
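
Job progress can then be checked with the standard SLURM tools; per the -o and -e patterns in the script above, output and error files appear in the job's working directory, named after the node, job ID and array index:

<pre style="background-color: black; color: white; border: 2px solid black; font-family: monospace, sans-serif;">
[username@login01 ~]$ squeue -u username
[username@login01 ~]$ ls /home/user1/CODE_SAMPLES/OPENMPI/*.out
</pre>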
==Next Steps==

* [[Programming/OpenMPI|OpenMPI]]
* [https://en.wikipedia.org/wiki/Message_Passing_Interface Message Passing Interface (Wikipedia)]
* [https://www.open-mpi.org/ Open MPI website]
  
{{Librariespagenav}}
