Programming/OpenMPI

From HPC
Revision as of 12:00, 8 February 2017

Programming Details

MPI defines not only point-to-point communication (e.g., send and receive) but also other communication patterns, such as collective communication. Collective operations are those in which multiple processes take part in a single communication action. In a reliable broadcast, for example, one process holds a message at the beginning of the operation, and at the end every process in the group has received it.

Message-passing performance and resource utilization are the king and queen of high-performance computing. Open MPI was specifically designed in such a way that it could operate at the very bleeding edge of high performance: incredibly low latencies for sending short messages, extremely high short message injection rates on supported networks, fast ramp-ups to maximum bandwidth for large messages, etc.

The Open MPI code base consists of three major code sections:

  • OMPI - MPI code
  • ORTE - the Open Run-Time Environment
  • OPAL - the Open Portable Access Layer


Program Example


#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv)
{
        int rank;
        int buf;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if(rank == 0)
                buf = 777;

        /* MPI_Bcast is collective: every rank in the communicator must call it.
           The root (rank 0) sends; all other ranks receive into buf. */
        MPI_Bcast(&buf, 1, MPI_INT, 0, MPI_COMM_WORLD);

        if(rank != 0)
                printf("rank %d received %d\n", rank, buf);

        MPI_Finalize();
        return 0;
}


Modules Available

The following modules are available for OpenMPI:

  • module add gcc/4.9.3 (GNU compiler)
  • module add intel/compiler/64/2016.2.181 (Intel compiler)
  • module add openmpi/gcc/1.10.2
  • module add openmpi/gcc/1.10.5
  • module add openmpi/intel/1.10.2
  • module add openmpi/intel/1.8.8
  • module add openmpi/intel/2.0.1


Compilation


[username@login01 ~]$ module add gcc/4.9.3
[username@login01 ~]$ module add openmpi/gcc/1.10.2
[username@login01 ~]$ mpicc -o testMPI testMPI.c


Usage Examples

Batch Submission


#!/bin/bash
#SBATCH -J MPI-testXX
#SBATCH -N 10
#SBATCH --ntasks-per-node 28
#SBATCH -o %N.%j.%a.out
#SBATCH -e %N.%j.%a.err
#SBATCH -p compute
#SBATCH --exclusive

echo $SLURM_JOB_NODELIST

module purge
module add gcc/4.9.3
module add openmpi/gcc/1.10.2

# Note: the I_MPI_* variables configure Intel MPI and have no effect
# on Open MPI; they are kept here for reference only.
export I_MPI_DEBUG=5
export I_MPI_FABRICS=shm:tmi
export I_MPI_FALLBACK=no

mpirun -mca pml cm -mca mtl psm2 /home/user/CODE_SAMPLES/OPENMPI/scatteravg 100


[username@login01 ~]$ sbatch MPI-demo.job
Submitted batch job 289523

Further Information