Data arrive in a steady stream from some input device such as a microphone, a camera, or a network connection (see Chapter 12). As the data arrive, they are processed, sent to an output device, and eventually discarded to make room for newly arriving data.

How well suited is the memory hierarchy for these streaming media workloads? Since the data are processed sequentially as they arrive, we are able to derive some benefit from spatial locality, as with our matrix multiply example from Section 6.6. However, since the data are processed once and then discarded, the amount of temporal locality is limited.

To address this problem, system designers and compiler writers have pursued a strategy known as prefetching. The idea is to hide the latency of cache misses by anticipating which blocks will be accessed in the near future, and then fetching these blocks into the cache beforehand using special machine instructions. If the prefetching is done perfectly, then each block is copied into the cache just before it is referenced.