16-par-intro - Parallel Cluster Programming (CMU 15-440: Distributed Systems)

Parallel Cluster Programming
CMU 15-440: Distributed Systems

Last Time
- Questions about Paxos / distributed consensus?
- If you haven't picked up your exam yet, it's with my admin (see web page).

Parallelism is ubiquitous
- Even your laptop, if your parents bought you something decent :), has 2 or more cores in it.
- Each of those cores is itself internally parallel, but that's not this course.
- The question of the decade in computing (not exaggerating): how to exploit parallelism.

Q1: What's the workload?
Before we try to solve it, let's look briefly at the map of problems and solutions...
- "Trivially parallel" -- e.g., password cracking. Requires little communication, memory, etc.
- Compute-intensive -- e.g., physical mesh simulations in HPC. Often very memory-intensive; needs low-latency, high-bandwidth communication.
- Data-intensive -- e.g., computing an index of the web, data mining, etc. Often disk-intensive and bandwidth-intensive, but not as latency-sensitive.
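To make the "trivially parallel" category concrete, here is a minimal sketch of brute-force password cracking split across worker processes. The keyspace is partitioned by first letter, and workers share nothing but the target hash, so almost no communication is needed. This example is ours, not from the lecture; the names (`crack_range`, `TARGET`) and the 4-lowercase-letter keyspace are illustrative assumptions.

```python
# Sketch: a trivially parallel workload. Each worker independently
# searches one slice of the keyspace for a password matching TARGET;
# the only communication is handing out slices and collecting the hit.
import hashlib
import itertools
import string
from multiprocessing import Pool

# Hypothetical target: the SHA-256 of the unknown password "wxyz".
TARGET = hashlib.sha256(b"wxyz").hexdigest()

def crack_range(first_letter):
    # Each worker owns the slice of 4-letter candidates that start
    # with one particular letter (26^3 guesses per slice).
    for rest in itertools.product(string.ascii_lowercase, repeat=3):
        guess = first_letter + "".join(rest)
        if hashlib.sha256(guess.encode()).hexdigest() == TARGET:
            return guess
    return None

if __name__ == "__main__":
    with Pool(4) as pool:
        # Fan the 26 independent slices out over 4 worker processes.
        for result in pool.imap_unordered(crack_range, string.ascii_lowercase):
            if result:
                print("found:", result)
                pool.terminate()
                break
```

Because the slices are independent, speedup is close to linear in the number of workers; this is what makes such workloads "trivial" compared to the communication-heavy categories below.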

Q2: The challenges
- First: algorithmic. How do you solve problem X in parallel? Sometimes this is very hard; sometimes it's straightforward. For our purposes, let's assume the answer is known. Otherwise, take Guy Blelloch's course.
- Second: systems. How do you practically solve problem X using reasonable amounts of programmer time, efficiently using the parallel resources, etc.?

Helping Programmers Use Clusters
As systems people, our job is to make stuff work. "Tools to make tools." What tools can we provide to ease the pain of parallel computing?
- Step 0: We can make it easy to execute code on lots of machines... and to share those machines between "jobs" (cf. Condor).
- Step 0a: We can provide a shared distributed filesystem, or copy your programs onto these machines automatically.
- Step 1: We can give you easy-to-use mechanisms for communication between processes (RPC, or, in the high-performance domain, MPI).
- Step 1a: MPI is slightly higher level: the library knows which other computers are involved in the processing, handles process spawning (step 0), and provides message sending, broadcast, many-to-1 communication, a get()/put() abstraction for shared memory regions, synchronization, etc.

Observations
- These tools are very general -- you can use them to solve any parallel problem.
- These tools are very low-level -- they don't deal with a lot of very hard questions in parallel programming!

Some hard questions (psst -- you saw a lot of these in lab 1!)
- Dealing with failures (it's tough -- even your professor puts bugs into complex algorithms when explaining them!)
- Dealing with load balance -- efficiently keeping all of the machines busy, splitting up the work into chunks, etc.
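The load-balance question above can be sketched with the classic work-queue pattern: split the job into many small chunks and let workers pull the next chunk as soon as they finish, so fast workers naturally do more than slow ones. This is our own illustrative sketch (using threads as stand-ins for cluster machines); the names `run_chunks` and `worker` are not from the lecture or any particular library.

```python
# Sketch: dynamic load balancing via a shared work queue.
# Workers pull chunks until the queue is empty, so the work
# self-balances even if some workers are slower than others.
import queue
import threading

def run_chunks(chunks, n_workers=4):
    work = queue.Queue()
    for c in chunks:
        work.put(c)

    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                chunk = work.get_nowait()
            except queue.Empty:
                return  # no work left; this worker is done
            partial = sum(chunk)  # stand-in for the real computation
            with lock:
                results.append(partial)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results)

if __name__ == "__main__":
    # Split summing 0..999 into ten chunks of 100 numbers each.
    chunks = [list(range(i, i + 100)) for i in range(0, 1000, 100)]
    print(run_chunks(chunks))  # sum of 0..999 == 499500
```

Note what this sketch does not handle: a worker that crashes mid-chunk silently loses that chunk's result. Handling that failure case is exactly the kind of "hard question" the low-level tools above leave to the programmer.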

This note was uploaded on 11/02/2011 for the course CS 440 taught by Professor Anderson during the Spring '11 term at Carnegie Mellon.
