a. You could also use a one-dimensional mapping (stripe the domain in, say, the horizontal
direction) and assign each stripe to a process. When is it (likely to be) advantageous to use the
Cartesian grid topology versus the striped topology?
b. Explain why there is a danger of deadlock when exchanging boundary points (“ghost cells”)
with sends and receives, and how you managed to avoid the deadlock.
c. For main2.c, provide the two timing plots (time versus p) and comment on what you see.
How does this relate to scaled speed-up? What would you expect to see for the case of a fixed
number of iterations in an ideal world of zero overhead? From the third plot, can you explain
how the efficiency deteriorates in the realistic situation of main2.c? What makes the second
curve worse than the first? Any conclusions on scaled speed-up in the iterative context?
d. In theory it should not be necessary to scale the probability vector at the end. Why? Why
may it still be a good idea to do so?
e. The lab illustrates a basic approach to sparse matrix computations in parallel. It implements
an iterative appro