lecture38-may4

Lecture 38 Announcements
• Exam 3 on Friday; review on Wednesday.
• Topic of the day: Parallel programming – an introduction. Not in the book!

The von Neumann Computer
• Memory: holds the instructions and data
• Processing Unit: processes the information
• Input: brings external information into the memory
• Output: produces results for the user
• Control Unit: manages computer activity
• Together these form a single-instruction, single-data-stream (SISD) computer
[Block diagram: Memory (RAM) with MAR and MDR; Processing Unit with ALU and register(s); Control Unit with PC and IR; Input and Output connected through device handlers]

Introduction to Parallel Computing and Parallel Programming

What is Parallelism?
• Parallelism and parallel problem solving are common.
• Parallel computing is using more than one computer, or a computer with more than one processor, to solve a problem.
• Parallel programming is a strategy for performing a large computational task faster.
• Task: a logically discrete unit of computational work.
– A large task can either be performed serially, one step following another, or be decomposed into smaller tasks to be performed simultaneously, i.e., in parallel.
– Parallel programming then means assigning the smaller tasks to multiple worker processors (computers) to work on simultaneously, and coordinating those worker processors.

Control Abstraction
• A fork introduces parallelism into the system; a join brings the parallel tasks back together (see the pthreads sketch after these notes).
[Fork/join diagram: with 2 processors, the "open door" and "drink coffee" tasks, each with its own algorithm details, run simultaneously between the fork and the join.]

Terminology of Parallelism
• Parallelizable Problem: a problem that can be divided into parallel parts.
• Parallel Tasks: tasks whose computations are (relatively) independent of each other, so that all such tasks can be performed simultaneously with correct results.
• Basic Types of Parallelism (both are sketched in the code examples below):
– Task parallelism: different tasks performed on different processors.
– Data parallelism: each processor performs the same task on different data.
• Synchronization: the temporal coordination of parallel tasks. It involves waiting until two or more tasks reach a specified point before continuing any of the tasks, in order to coordinate information exchange among tasks.
– Can consume wall-clock time, because some task(s) can sit idle waiting for other tasks to complete; there are other potential complications.
– Can be a major factor in decreasing potential parallel speedup.
• Parallel Overhead -
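
To make the fork/join abstraction concrete, here is a minimal sketch in C using POSIX threads. The library choice and the task names (open_door, drink_coffee) are assumptions for illustration; the slides only show the abstract diagram.

#include <stdio.h>
#include <pthread.h>

/* Hypothetical tasks standing in for the "open door" and
 * "drink coffee" activities from the control-abstraction diagram. */
void *open_door(void *arg) {
    printf("details of the open-door algorithm run here\n");
    return NULL;
}

void *drink_coffee(void *arg) {
    printf("details of the drink-coffee algorithm run here\n");
    return NULL;
}

int main(void) {
    pthread_t t1, t2;

    /* Fork: introduce parallelism; the two tasks may now run
     * simultaneously (truly in parallel on 2 processors). */
    pthread_create(&t1, NULL, open_door, NULL);
    pthread_create(&t2, NULL, drink_coffee, NULL);

    /* Join: wait here until both tasks have completed. */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("both tasks done; execution is serial again\n");
    return 0;
}

Compile with gcc -pthread. Note that the join is itself a synchronization point: main sits idle until both workers finish.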
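
The sketch above is task parallelism in miniature: different tasks on different processors. Data parallelism, by contrast, has every worker perform the same task on different data. Below is a minimal illustration under the same POSIX-threads assumption; the array contents and the slice arithmetic are invented for the example.

#include <stdio.h>
#include <pthread.h>

#define N        8    /* assumed data size, divisible by NWORKERS */
#define NWORKERS 2    /* "I have 2 processors" */

static int  data[N] = {1, 2, 3, 4, 5, 6, 7, 8};
static long partial[NWORKERS];  /* one result slot per worker: no sharing */

/* Every worker performs the SAME task (summing) on DIFFERENT data:
 * worker id handles elements [id*N/NWORKERS, (id+1)*N/NWORKERS). */
void *sum_slice(void *arg) {
    long id = (long)arg;
    int  lo = id * (N / NWORKERS);
    int  hi = lo + (N / NWORKERS);
    for (int i = lo; i < hi; i++)
        partial[id] += data[i];
    return NULL;
}

int main(void) {
    pthread_t tid[NWORKERS];
    long total = 0;

    for (long id = 0; id < NWORKERS; id++)
        pthread_create(&tid[id], NULL, sum_slice, (void *)id);

    for (long id = 0; id < NWORKERS; id++) {
        pthread_join(tid[id], NULL);  /* wait for each worker's slice */
        total += partial[id];
    }
    printf("total = %ld\n", total);   /* 36 */
    return 0;
}

Because the slices and result slots are disjoint, the workers are (relatively) independent parallel tasks and need no locking until the final join.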
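
Finally, a small sketch of synchronization and its cost, again assuming POSIX threads: two tasks share a counter protected by a mutex, so each must wait at the lock (a specified point) before continuing. The waiting yields correct results but serializes the updates; that idle time at the lock is exactly the kind of overhead that decreases potential parallel speedup.

#include <stdio.h>
#include <pthread.h>

#define NWORKERS 2
#define ITERS    100000

static long counter = 0;   /* shared state: the tasks are NOT independent */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *bump(void *arg) {
    for (int i = 0; i < ITERS; i++) {
        /* Specified point: a task arriving while the other holds
         * the lock sits idle until the lock is released. */
        pthread_mutex_lock(&lock);
        counter++;                    /* correct, but serialized */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t tid[NWORKERS];
    for (int i = 0; i < NWORKERS; i++)
        pthread_create(&tid[i], NULL, bump, NULL);
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(tid[i], NULL);
    printf("counter = %ld\n", counter);   /* always 200000 */
    return 0;
}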