Programming/OpenMPI


Programming Details

MPI defines not only point-to-point communication (e.g., send and receive) but also other communication patterns, such as collective communication. Collective operations are those in which multiple processes take part in a single communication action. Reliable broadcast, for example, is an operation in which one process holds a message at the start and, by the end, every process in the group has a copy of it.
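
A broadcast is a single collective call that every rank in the communicator makes together. Below is a minimal sketch, not one of this page's code samples; the file name, the value 42, and the comments are illustrative:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        value = 42;   /* only the root holds the message initially */

    /* Every rank makes the same call; afterwards all ranks hold the value */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("rank %d now has value %d\n", rank, value);

    MPI_Finalize();
    return 0;
}

Compiled with Open MPI's wrapper compiler (mpicc bcast_demo.c -o bcast_demo) and run with mpirun -np 4 ./bcast_demo, every rank prints the value 42 after the call.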

Message-passing performance and resource utilization are the king and queen of high-performance computing. Open MPI was specifically designed to operate at the bleeding edge of performance: incredibly low latencies for short messages, extremely high short-message injection rates on supported networks, fast ramp-up to maximum bandwidth for large messages, and so on.

The Open MPI code has three major code modules (a short sketch of the user-facing layer follows the list):

  • OMPI - MPI code
  • ORTE - the Open Run-Time Environment
  • OPAL - the Open Portable Access Layer
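
Only the OMPI layer is visible to applications: programs call the MPI_* API, while ORTE starts and monitors the processes when a job is launched with mpirun, and OPAL supplies the portable operating-system glue underneath. A minimal sketch of a program that exercises just that user-facing layer (the file name is illustrative):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    /* These MPI_* calls are the OMPI layer; the processes themselves
     * were launched by ORTE, and OPAL handles OS portability below. */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}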

Usage Examples

Batch Submission

#!/bin/bash
#SBATCH -J MPI-testXX                        # job name
#SBATCH -N 10                                # 10 nodes
#SBATCH --ntasks-per-node=28                 # 28 MPI tasks per node (280 in total)
#SBATCH -D /home/user/CODE_SAMPLES/OPENMPI   # working directory
#SBATCH -o %N.%j.%a.out                      # standard output file
#SBATCH -e %N.%j.%a.err                      # standard error file
#SBATCH -p compute                           # partition
#SBATCH --exclusive                          # do not share nodes with other jobs

echo $SLURM_JOB_NODELIST                     # record which nodes were allocated

module purge
module load gcc/4.9.3
module load openmpi/gcc/1.10.2

# Select the cm PML and the psm2 MTL so that Open MPI carries messages
# over the Intel Omni-Path PSM2 library
mpirun -mca pml cm -mca mtl psm2 /home/user/CODE_SAMPLES/OPENMPI/scatteravg 100


The script, saved here as MPI-demo.job, is then submitted from a login node:

[username@login01 ~]$ sbatch MPI-demo.job
Submitted batch job 289523
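
The scatteravg source is not shown on this page, so the following is only a sketch, under the assumption that the program scatters equal slices of an array from rank 0, has each rank average its slice, and reduces the per-rank averages back to rank 0 (the command-line argument, 100 above, is taken as the slice size):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, nprocs, i;
    int per_rank = (argc > 1) ? atoi(argv[1]) : 100;   /* slice size, e.g. "scatteravg 100" */
    double *data = NULL, *slice, local_sum = 0.0, local_avg, global_sum = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    if (rank == 0) {                     /* only the root fills the full array */
        data = malloc((size_t)nprocs * per_rank * sizeof(double));
        for (i = 0; i < nprocs * per_rank; i++)
            data[i] = (double)i;
    }

    slice = malloc(per_rank * sizeof(double));

    /* Collective: rank 0 sends one equal slice to every rank */
    MPI_Scatter(data, per_rank, MPI_DOUBLE,
                slice, per_rank, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    for (i = 0; i < per_rank; i++)
        local_sum += slice[i];
    local_avg = local_sum / per_rank;

    /* Since all slices are the same size, the mean of the per-rank means
     * equals the global mean; sum the local averages onto rank 0 */
    MPI_Reduce(&local_avg, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global average = %f\n", global_sum / nprocs);

    free(slice);
    free(data);
    MPI_Finalize();
    return 0;
}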