Training/openMP
Introduction
OpenMP allows threaded programming across a shared-memory system, so on our HPC this means utilizing more than one processing core within a single compute node.
A shared-memory computer consists of a number of processing cores together with some memory. A shared-memory system presents a single address space across the whole memory system:
- every processing core can read and write all memory locations in the system
- one logical memory space
- all cores refer to a memory location using the same address
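As a minimal sketch of what this looks like in practice (the file name and compile command are illustrative; the exact compiler and modules on our HPC may differ), every thread started inside an OpenMP parallel region runs on its own core but sees the same address space:

```c
/* hello_omp.c -- illustrative; compile with e.g.  gcc -fopenmp hello_omp.c */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    /* Every thread in the team executes this block. All threads share
       the process's memory, so any variable declared outside the region
       is visible to all of them at the same address. */
    #pragma omp parallel
    {
        int id = omp_get_thread_num();   /* this thread's number (private) */
        int n  = omp_get_num_threads();  /* team size, same value for all  */
        printf("Hello from thread %d of %d\n", id, n);
    }
    return 0;
}
```

The number of threads is usually controlled with the OMP_NUM_THREADS environment variable, e.g. OMP_NUM_THREADS=4 ./a.out.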
Programming model
Within the shared-memory model we use threads, which can share memory with all the other threads. Threads have the following characteristics:
- Private data can only be accessed by the thread owning it
- Each thread can run simultaneously with other threads, but also asynchronously, so we need to be careful of race conditions.
- Usually we have one thread per processing core, although there may be hardware support for more (e.g. hyper-threading)
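A short sketch of private versus shared data (the variable names are illustrative): variables declared inside a parallel region, or listed in a private clause, are per-thread, while everything else defaults to shared:

```c
#include <stdio.h>
#include <omp.h>

int main(void)
{
    int x = 42;  /* shared: all threads read the same memory location     */
    int t;       /* listed as private below: each thread gets its own copy */

    #pragma omp parallel private(t)
    {
        t = omp_get_thread_num();   /* safe: t is private to this thread */
        printf("thread %d sees shared x = %d\n", t, x);
    }
    return 0;
}
```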
Thread Synchronization
As previously mentioned, threads execute asynchronously, which means each thread proceeds through the program instructions independently of the other threads.
Although this makes for a very flexible system, we must be very careful that actions on shared variables occur in the correct order.
- e.g. if thread 1 reads a variable before thread 2 has written to it, the program will work with invalid data and may crash; likewise, if updates to a shared variable are made by different threads at the same time, one of the updates may get overwritten.
To prevent this happening we must either use data that is independent between threads (i.e. different parts of an array) or perform some sort of synchronization within the code so that the different threads reach the same point at the same time.
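To make the race concrete, here is an illustrative sketch (not part of the original notes): the first loop updates a shared counter without synchronization, so increments from different threads can overwrite each other; the reduction clause fixes this by giving each thread a private partial sum that OpenMP combines safely at the end:

```c
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    long sum = 0;

    /* RACE: all threads update the shared variable 'sum' at once, so
       some updates get overwritten and the result is usually too small. */
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        sum += 1;
    printf("racy sum      = %ld (expected %d)\n", sum, N);

    /* Fixed: 'reduction(+:sum)' gives each thread its own partial sum
       and adds them together once all threads finish the loop. */
    sum = 0;
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < N; i++)
        sum += 1;
    printf("reduction sum = %ld\n", sum);

    return 0;
}
```

Other synchronization constructs serve the same purpose in different situations: #pragma omp critical serializes access to a block of code, and #pragma omp barrier makes every thread wait until all of them reach the same point.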