14:440:127 Introduction to Computers for Engineers
Notes for Lecture 13
Rutgers University, Spring 2010
Instructor: Blase E. Ur

1 Using Multiple Cores / Distributed Computing

http://www.mathworks.com/access/helpdesk/help/toolbox/distcomp/f36010.html contains a good primer for this section, since it's too new to be covered in your book.

In recent years, the trend has been to create processors (CPUs) with multiple cores. It started with dual (2) core processors. Now, 4 cores are quite common, and the thought is that this trend will continue. Supercomputers now have thousands of cores and processors. However, Matlab runs on just one processor/core by default. To change this, you need to use the Parallel Computing Toolbox that comes with recent (professional, not student) versions of Matlab.

In Matlab, we're used to writing loops like this:

    z = zeros(1,1000000);
    tic;
    for a = 1:1000000
        z(a) = isprime(a);
    end
    toc
    %%% Elapsed time is 73.615442 seconds

However, we can invoke the Parallel Computing Toolbox through a two-step process and end up nearly cutting the time it takes to run that code in half.
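One caveat worth keeping in mind (this sketch is mine, not from the lecture): parfor only works when the loop iterations are independent of one another, so Matlab can hand them out to cores in any order. A loop where each iteration reads a result from the previous iteration cannot be converted:

```matlab
% Independent iterations: z(a) depends only on a, so this loop
% is a valid candidate for parfor.
z = zeros(1,100);
for a = 1:100
    z(a) = isprime(a);   % replacing "for" with "parfor" here is fine
end

% Dependent iterations: each element uses the previous element,
% so this must stay an ordinary for loop (parfor would reject it,
% because iteration a needs the result of iteration a-1).
w = zeros(1,100);
w(1) = 1;
for a = 2:100
    w(a) = w(a-1) + a;
end
```

The isprime loop above fits the first pattern, which is why the parfor version below works.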
I'm going to try the same operation on my laptop's dual-core processor, but first tell Matlab we're using parallel computing tools by using the matlabpool command, and then using parfor instead of for to create a parallel for loop:

    matlabpool open local 2  % this tells Matlab I'm using 2 cores
                             % on my local processor
    z = zeros(1,1000000);
    tic;
    parfor a = 1:1000000
        z(a) = isprime(a);
    end
    toc
    matlabpool close;
    %%% Elapsed time is 37.690607 seconds

We can similarly distribute the responsibilities for matrices among cores by using the codistributed function:

    matlabpool open local 2  % this tells Matlab I'm using 2 cores
                             % on my local processor
    x = 1:1000000;
    xd = codistributed(x);
    yd = cos(xd);  % yd will also be codistributed between cores
    matlabpool close;

Finally, we can run code in parallel on the cores using an spmd...end block, which stands for "single program, multiple data":

    tic;
    h = rand(10000,1000);
    a = toc
    %%% a = 34.61

    clear h;
    matlabpool open local 2;
    spmd
        tic;
        h = rand(10000,1000);
        a = toc;
    end
    matlabpool close;
    %%% a(1) = 0.2541
    %%% a(2) = 0.2885

However, note that starting up the parallel toolbox (matlabpool, spmd, and similar commands) is INTENSELY SLOW right now. Hopefully, they'll fix that in later versions of Matlab. Even as it is now, though, if you have a lot of number crunching to do in the middle of your program and have a multicore processor, consider using parallel computing tools.

2 Recursion

There's a very interesting technique in computer programming called recursion, in which you write a function that calls a variation of itself to get the answer. Eventually, this string of functions calling itself stops at some base case, and the answers work their way back up. Of course, in order to understand recursion, you must understand recursion. (That's a joke.)
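As a minimal sketch of the idea (the function name and structure here are my own, not from the lecture), a recursive factorial function in Matlab looks like this:

```matlab
function out = fact(n)
% FACT  Compute n! recursively.
if n <= 1
    out = 1;              % base case: the chain of calls stops here
else
    out = n * fact(n-1);  % recursive case: call a smaller version of the
                          % same problem, then combine with this level's n
end
end
```

Calling fact(5) evaluates 5 * fact(4), which evaluates 4 * fact(3), and so on down to the base case fact(1) = 1; the answers then multiply their way back up to give 120.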