Applications/OpenMPI

==Application Details==
 
* Description: The Open MPI Project is an open source Message Passing Interface implementation that is developed and maintained by a consortium of academic, research, and industry partners. Open MPI is therefore able to combine the expertise, technologies, and resources from all across the High Performance Computing community in order to build the best MPI library available.
* Version: 1.8.8 (gcc), 1.10.5 (gcc and intel), 2.0.2 (gcc and intel)
* Modules: openmpi/1.10.5/gcc-5.2.0, openmpi/1.10.5/gcc-6.3.0, openmpi/1.10.5/intel-2017, openmpi/2.0.2/gcc-5.2.0, openmpi/2.0.2/gcc-6.3.0, openmpi/gcc/1.10.2, openmpi/gcc/1.10.5, openmpi/intel/1.10.2, openmpi/intel/1.8.8, and openmpi/intel/2.0.1
* Licence: Open source (BSD licence); the Open MPI project operates under the Software in the Public Interest non-profit organization.
  
 
==Usage Examples==
 
Message Passing Interface (MPI) is a standardized and portable message-passing system designed by a group of researchers from academia and industry to function on a wide variety of parallel computing architectures. The standard defines the syntax and semantics of a core of library routines useful to a wide range of users writing portable message-passing programs in C, C++, and Fortran. There are several well-tested and efficient implementations of MPI, many of which are open source or in the public domain. These implementations fostered the growth of a parallel software industry and encouraged the development of portable, scalable, large-scale parallel applications.
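
As a minimal illustration of these routines (a sketch, not one of the cluster's own code samples), the following C program prints one line per MPI process; it can be built with Open MPI's ''mpicc'' compiler wrapper once an openmpi module has been loaded, and launched with ''mpirun'' as shown in the examples below.

<pre style="background-color: black; color: white; border: 2px solid black; font-family: monospace, sans-serif;">
/* hello_mpi.c -- illustrative example, not part of this page's own samples.
 * Build:  mpicc -o hello_mpi hello_mpi.c
 * Run:    mpirun -np 4 ./hello_mpi
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* rank of this process (0..size-1) */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down the MPI runtime */
    return 0;
}
</pre>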

===Interactive Mode===

This example runs on a reserved node (''e.g.'' '''c001'''):

<pre style="background-color: black; color: white; border: 2px solid black; font-family: monospace, sans-serif;">
[username@c001 ~]$ module load openmpi/gcc/1.10.5
[username@c001 ~]$ mpirun -np 20 mpiTEST
</pre>
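
Here ''mpiTEST'' stands for any MPI executable (for example one built from the hello-world sketch above with ''mpicc''); the ''-np 20'' option asks mpirun to start 20 MPI processes on the reserved node.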

===SLURM job===

For non-interactive work, the same module and mpirun commands can be wrapped in a SLURM batch script, for example:

<pre style="background-color: black; color: white; border: 2px solid black; font-family: monospace, sans-serif;">
#!/bin/bash
# Request 10 nodes with 28 MPI tasks per node (280 ranks in total) on the
# "compute" partition, using the nodes exclusively.
#SBATCH -J MPI-testXX
#SBATCH -N 10
#SBATCH --ntasks-per-node 28
#SBATCH -D /home/pysdlb/CODE_SAMPLES/OPENMPI
#SBATCH -o %N.%j.%a.out
#SBATCH -e %N.%j.%a.err
#SBATCH -p compute
#SBATCH --exclusive

# Report which nodes were allocated to the job
echo $SLURM_JOB_NODELIST

# Load the compiler and Open MPI stack the program was built against
module purge
module load gcc/4.9.3
module load openmpi/gcc/1.10.2

# Note: the I_MPI_* variables below are Intel MPI settings; Open MPI ignores
# them, so they have no effect on the mpirun command that follows.
export I_MPI_DEBUG=5
export I_MPI_FABRICS=shm:tmi
export I_MPI_FALLBACK=no

# '-mca pml cm -mca mtl psm2' selects the PSM2 interface (Intel Omni-Path)
mpirun -mca pml cm -mca mtl psm2 /home/pysdlb/CODE_SAMPLES/OPENMPI/scatteravg 100
</pre>
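
Saved as a file (for example ''mpi-test.job''), the script is submitted with ''sbatch mpi-test.job''. SLURM allocates 10 nodes with 28 tasks each, and because no ''-np'' option is given, Open MPI's mpirun starts one process per allocated slot (280 in total). The ''scatteravg'' path is the page author's own example program; substitute the path to your own MPI executable.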
  
 
==Further Information==
 
* [[Programming/openMPI|openMPI]]
* [https://en.wikipedia.org/wiki/Message_Passing_Interface Message Passing Interface (Wikipedia)]
* [https://www.open-mpi.org/ Open MPI project website]
