OpenMP

Introduction to OpenMP
Edward Valeev, Department of Chemistry, Virginia Tech, Blacksburg, VA
2009 HPC-Chem Summer School, Knoxville, TN

Lecture Outline

- Basics of Shared-Memory Programming
- OpenMP Summary
  - what it is good for
  - what it is not good for
- OpenMP Minutiae
  - parallelizing loops
  - parallel sections
- Example: SCF code using OpenMP
  - homework: tweak the MPI SCF code to use OpenMP

Thread vs. Process

Process (heavyweight task):
- has an ID, instruction pointer, stack, heap, file pointers, and other resources
- processes do not share address spaces (i.e., a heap pointer in 2 different processes corresponds to different physical memory addresses)

Thread (lightweight task):
- has an instruction pointer and a stack
- shares the heap and other resources with the other threads in its process
- threads in a process share an address space (i.e., a heap pointer in 2 different threads corresponds to the same physical memory address)

Message passing (MPI) vs. threads:

    static int task0_result;
    if (task_id == 0)
        task0_result = compute_result();
    // MPI tasks are processes -- the result must be communicated
    // explicitly, e.g. broadcast from rank 0 to all other ranks:
    MPI_Bcast(&task0_result, 1, MPI_INT, 0, MPI_COMM_WORLD);

    extern int task0_result;
    if (task_id == 0)
        task0_result = compute_result();
    // all threads see the global task0_result!

Uniprocessors vs. Shared-Memory Multiprocessors

[Figure: a generic uniprocessor (one core with an execution unit and L1 cache, an L2 cache, and RAM) vs. a generic 2-core processor (two cores, each with its own execution unit and L1 cache, sharing an L2 cache and RAM).]

Concurrent reads/writes of shared resources (e.g., global variables) by multiple threads produce undefined results... and cache coherence is an issue!
Race Conditions, Critical Sections

    // sum up the ids of all threads that executed this code
    static int thread_id_sum;
    const int thread_id = this_thread::get_id();
    thread_id_sum += thread_id;    // race condition!

Races occur when multiple threads simultaneously mutate a shared resource; the outcome is not defined!

    // sum up the ids of all threads that executed this code -- safely
    static int thread_id_sum;
    static mutex lock;             // must be shared by all threads, hence static
    const int thread_id = this_thread::get_id();
    lock.lock();
    thread_id_sum += thread_id;    // critical section
    lock.unlock();

A critical section in a code is executed by one thread at a time.

Best practice: avoid sharing resources between threads. Reducing the scope of the data also improves the program design!