Programming/OpenMPI

Programming Details

MPI defines not only point-to-point communication (e.g., send and receive) but also other communication patterns, such as collective communication. Collective operations involve multiple processes in a single communication action. Reliable broadcast, for example, is an operation in which one process holds a message at the start and every process in the group holds it at the end.
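
As a minimal sketch of point-to-point communication (an illustrative example, not part of the original page), rank 0 sends a single integer to rank 1; MPI_Send and MPI_Recv match on communicator, peer rank, and tag:


#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv)
{
        int rank, value;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0)
        {
                value = 42;
                /* send one int to rank 1 with tag 0 */
                MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        }
        else if (rank == 1)
        {
                /* receive from rank 0, matching tag 0 */
                MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
                printf("rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
}

This sketch assumes the program is launched with at least two ranks.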

Message-passing performance and resource utilization are the king and queen of high-performance computing. Open MPI was specifically designed to operate at the bleeding edge of high performance: very low latency for short messages, high short-message injection rates on supported networks, fast ramp-up to maximum bandwidth for large messages, and so on.

The Open MPI code base has three major code modules:

  • OMPI - MPI code
  • ORTE - the Open Run-Time Environment
  • OPAL - the Open Portable Access Layer
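
These layers are visible through the ompi_info tool that ships with Open MPI, which reports the build configuration and the components compiled into each layer. An illustrative check:

[username@login01 ~]$ module add openmpi/gcc/1.10.2
[username@login01 ~]$ ompi_info | less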


Program Examples

C Examples


#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv)
{
        int rank;
        int buf;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* MPI_Bcast is collective: every rank in the communicator
           must call it. The root (rank 0) supplies the value; all
           other ranks receive it in buf. */
        if (rank == 0)
                buf = 777;

        MPI_Bcast(&buf, 1, MPI_INT, 0, MPI_COMM_WORLD);

        if (rank != 0)
                printf("rank %d received %d\n", rank, buf);

        MPI_Finalize();
        return 0;
}

Fortran Example


program hello
   include 'mpif.h'
   integer rank, size, ierror

   call MPI_INIT(ierror)
   call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierror)
   call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)
   print*, 'node', rank, ': Hello world'
   call MPI_FINALIZE(ierror)
end

Modules Available

The following modules are available for Open MPI:

  • module add gcc/4.9.3 (GNU compiler)
  • module add intel/compiler/64/2016.2.181 (Intel compiler)
  • module add openmpi/gcc/1.10.2
  • module add openmpi/gcc/1.10.5
  • module add openmpi/intel/1.10.2
  • module add openmpi/intel/1.8.8
  • module add openmpi/intel/2.0.1
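
Note that the MPI module should match the loaded compiler: the openmpi/gcc builds pair with the gcc module, and the openmpi/intel builds pair with the Intel compiler module. To confirm what is currently loaded (a standard environment-modules command, shown here as an illustration):

[username@login01 ~]$ module list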


Compilation


[username@login01 ~]$ module add gcc/4.9.3
[username@login01 ~]$ module add openmpi/gcc/1.10.2
[username@login01 ~]$ mpicc -o testMPI testMPI.c
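
The mpicc wrapper supplies the Open MPI include and library flags automatically; compiling with plain gcc will fail to find mpi.h. The matching wrapper for the Fortran example is mpif90 (the source file name below is illustrative), and a short interactive test can be run with mpirun, though production runs should go through the batch system:

[username@login01 ~]$ mpif90 -o hello hello.f90
[username@login01 ~]$ mpirun -np 4 ./testMPI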


Usage Examples

Batch Submission


#!/bin/bash
#SBATCH -J MPI-testXX
#SBATCH -N 10
#SBATCH --ntasks-per-node 28
#SBATCH -o %N.%j.%a.out
#SBATCH -e %N.%j.%a.err
#SBATCH -p compute
#SBATCH --exclusive

echo $SLURM_JOB_NODELIST

module purge
module add gcc/4.9.3
module add openmpi/gcc/1.10.2

# Select the cm PML with the PSM2 (Omni-Path) MTL for this run.
mpirun -mca pml cm -mca mtl psm2 /home/user/CODE_SAMPLES/OPENMPI/scatteravg 100


[username@login01 ~]$ sbatch MPI-demo.job
Submitted batch job 289523
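
After submission the job can be monitored with the usual Slurm commands (shown as an illustration), using the job ID reported by sbatch:

[username@login01 ~]$ squeue -u username
[username@login01 ~]$ scontrol show job 289523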

Further Information