Programming/OpenMP
Revision as of 11:11, 30 January 2017

Programming Details

OpenMP is designed for multi-processor/multi-core, shared-memory machines. The underlying architecture can be uniform memory access (UMA) or non-uniform memory access (NUMA).

It is an Application Program Interface (API) that may be used to explicitly direct multi-threaded, shared-memory parallelism. It comprises three primary components:

  • Compiler Directives
  • Runtime Library Routines
  • Environment Variables

OpenMP compiler directives are used for various purposes:

  • Spawning a parallel region
  • Dividing blocks of code among threads
  • Distributing loop iterations between threads
  • Serializing sections of code
  • Synchronizing work among threads

Usage Examples

#!/bin/bash

#SBATCH -J openmp-single-node            # job name
#SBATCH -N 1                             # single node: OpenMP is shared memory only
#SBATCH --ntasks-per-node 28             # request all 28 cores on the node
#SBATCH -D /home/user/CODE_SAMPLES/OPENMP
#SBATCH -o %N.%j.%a.out                  # stdout: node.jobid.arrayindex
#SBATCH -e %N.%j.%a.err                  # stderr
#SBATCH -p compute
#SBATCH --exclusive                      # do not share the node with other jobs

echo $SLURM_JOB_NODELIST

module purge
module load gcc/4.9.3

# Give OpenMP one thread per core requested above
export OMP_NUM_THREADS=$SLURM_NTASKS_PER_NODE

# Intel MPI debug/fabric settings; only relevant if the binary also uses Intel MPI
export I_MPI_DEBUG=5
export I_MPI_FABRICS=shm:tmi
export I_MPI_FALLBACK=no

/home/user/CODE_SAMPLES/OPENMP/demo


[username@login01 ~]$ sbatch demo.job
Submitted batch job 289552