e. If you had a chip that took up 80% of a 300 mm (12 inch) wafer, its area would be about 565 cm²: 0.8 × π × (15 cm)² ≈ 565.5 cm². This is deceptive: dies this large would likely involve more error-prone processes, effectively reducing the yield to zero.

3. Exercise 1.10, dataset b:
First, note that the data table gives instruction counts per processor, and that the CPI for each instruction class is different. This may seem more difficult, but we can reuse the same equation from problem 1, exercise 1.3:

    Execution time = (instruction count × CPI) / clock rate

Notice that we can rewrite the numerator as a sum over instruction classes:

    instruction count × CPI = Σ over classes i of (IC_i × CPI_i)

This makes it easier to compute the answers for this problem. Simply compute the total number of cycles for each processor as needed.
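As a sketch of that cycle computation in code (the instruction counts, CPIs, and clock rate below are invented placeholders, not the dataset-b values):

```python
# Hypothetical per-class data; substitute the values from the dataset-b table.
counts = {"arithmetic": 2_000, "load/store": 1_200, "branch": 500}
cpi    = {"arithmetic": 1,     "load/store": 4,     "branch": 2}

# Total cycles = sum over classes of (instruction count x CPI).
total_cycles = sum(counts[c] * cpi[c] for c in counts)

clock_rate = 2e9  # 2 GHz, also hypothetical
exec_time = total_cycles / clock_rate

print(total_cycles, exec_time)
```

The same loop works per processor: compute each processor's cycle total, then divide by the clock rate.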
Finally, a spreadsheet makes this work easy. (You'd need to paste your table into your homework
to earn full marks. We need to see work, not just answers.)

1.10.1: Simply sum up the instruction counts for each processor. Then, multiply these sums by the number of processors to get the aggregate instruction count for each multiprocessor system. (Computed values omitted.)

1.10.2: First, find the number of cycles per processor:
(Per-processor cycle calculations omitted.)

Then, compute execution time per processor. Since each processor runs in parallel, this is the final execution time. (Computed values omitted.)

1.10.3: This is just a repeat of problem 1.10.2, except the CPI for arithmetic instructions is doubled.
First, find the number of cycles per processor:
(Per-processor cycle calculations omitted.)

Then, compute execution time per processor. Since each processor runs in parallel, this is the final execution time. (Computed values omitted.)

The net effect is that the multiprocessor systems with small numbers of processors are slowed
down more; this makes sense. The smaller systems execute larger numbers of arithmetic instructions per processor, and those instructions now consume proportionally more cycles.
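That effect can be illustrated with a small sketch. All counts and CPIs below are invented; they are chosen only so that, as in the real table, arithmetic work is divided among processors while the other per-processor counts stay fixed:

```python
# Hypothetical per-processor instruction counts for 1, 2, 4, 8 processors.
# Arithmetic work is split across processors; load/store and branch
# counts per processor stay constant.
datasets = {
    1: {"arith": 2560, "ls": 1280, "branch": 256},
    2: {"arith": 1280, "ls": 1280, "branch": 256},
    4: {"arith": 640,  "ls": 1280, "branch": 256},
    8: {"arith": 320,  "ls": 1280, "branch": 256},
}
cpi = {"arith": 1, "ls": 4, "branch": 2}  # hypothetical CPIs

def cycles(c, arith_cpi):
    """Total cycles for one processor, with a given arithmetic CPI."""
    return c["arith"] * arith_cpi + c["ls"] * cpi["ls"] + c["branch"] * cpi["branch"]

for p, c in datasets.items():
    base = cycles(c, cpi["arith"])
    doubled = cycles(c, 2 * cpi["arith"])
    print(p, "processors: slowdown =", round(doubled / base, 3))
```

Running this shows the slowdown ratio shrinking as the processor count grows, because arithmetic instructions make up a smaller share of each processor's work.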
Answer for follow-up question:
In nearly every program there are sequential sections of code. These are parts of the program that must wait for some previous computation, so they cannot be distributed across parallel processors for speedup. There is almost always some sequential overhead.
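This limit is captured by Amdahl's law. A minimal sketch, assuming a hypothetical workload that is 10% sequential:

```python
def speedup(parallel_fraction, processors):
    """Amdahl's law: the sequential part limits overall speedup."""
    sequential = 1 - parallel_fraction
    return 1 / (sequential + parallel_fraction / processors)

# With 10% of the work sequential (an assumed figure), even an
# enormous number of processors cannot push speedup past 10x.
for p in (2, 8, 64, 1_000_000):
    print(p, "processors:", round(speedup(0.9, p), 2))
```

Each added processor helps less than the last, and the sequential 10% caps the achievable speedup at 10x.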