

Quiz 3

Show your work in arriving at the answers. Calculators, one double-sided handwritten sheet, and a MIPS cheat sheet are allowed. Total time: 1 hour.

Q1. 35 points
A processor has K-bit addressing. It uses a cache with B bytes per block and N-way associativity. The cache size (i.e., the total data it stores, excluding any addressing-related bits) is M bytes.
a) How many bits are used for offset, index and tag in the K-bit address?
b) What is the total number of bytes "wasted" on storing tags in the cache? (i.e., the real cache size is M + tag storage, ignoring the valid/dirty bits)
c) What problem arises if B is 5? Can you suggest some way of working around it?

Answer:
a) There are M/(N*B) sets, so we need log2(M/(N*B)) bits for the index. We need log2(B) bits for the offset. That leaves K - log2(M/(N*B)) - log2(B) bits for the tag.
b) Each of the M/B blocks stores one tag, so the tag storage is (M/B) * (K - log2(M/(N*B)) - log2(B)) / 8 bytes.
c) If the block size is 5 bytes, we still need 3 bits for the offset (since ceil(log2(5)) = 3). In that case, after we locate a cache block by index and tag, offsets 5 through 7 do not correspond to any data in the block. One way to deal with the problem is to keep the 3 offset bits but add hardware that snaps out-of-range offsets back into the block (i.e., raise an error, wrap around, or round offsets of 5 or more), while still reusing the last bit of the index. Byte 5 would then be duplicated: available as byte 5 of set n and as byte 0 of set n+1.

Q2. 10 points
Assuming that the cache miss penalty is essentially the time to read data from main memory, and that cache reads are 10X faster than memory reads, how high does the cache miss rate have to be for the cache to be essentially useless?

Answer: Suppose the miss rate is r; then the hit rate is 1 - r. Measuring time in cache-read units, the cache is useless when the average access time equals a plain memory read:
10 = (1 - r) * 1 + r * (10 + 1)
which gives 10 = 1 + 10r, so r = 90%.

Q3. 5 points
A fully associative cache uses the LRU replacement policy. If you know the exact memory reference sequence of your program, what can you do (hint: insert some dummy instructions into the program) to reduce cache misses? Of course, in reality you would never do this.

Answer: Before a block you will need again is about to be evicted, add a dummy load instruction referencing a word in that block. The access marks the block as most recently used, so LRU keeps it in the cache.
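As a sanity check on the formulas in Q1(a) and Q1(b), here is a short Python sketch. The concrete parameter values (32-bit addresses, 64-byte blocks, 4-way, 64 KiB of data) are illustrative examples, not values from the quiz, and B, N, M are assumed to be powers of two.

```python
import math

def cache_bits(K, B, N, M):
    """Split a K-bit address for an N-way set-associative cache with
    B-byte blocks and M bytes of data storage.
    Returns (offset_bits, index_bits, tag_bits)."""
    offset = int(math.log2(B))       # selects a byte within a block
    sets = M // (N * B)              # number of sets
    index = int(math.log2(sets))     # selects a set
    tag = K - index - offset         # remaining address bits
    return offset, index, tag

def tag_storage_bytes(K, B, N, M):
    """Total bytes spent on tags: one tag per block, M/B blocks."""
    _, _, tag = cache_bits(K, B, N, M)
    return (M // B) * tag // 8

# Illustrative example: K=32, B=64, N=4, M=64 KiB.
print(cache_bits(32, 64, 4, 64 * 1024))         # (6, 8, 18)
print(tag_storage_bytes(32, 64, 4, 64 * 1024))  # 2304
```

With these numbers there are 256 sets, so the tag is 32 - 8 - 6 = 18 bits, and 1024 blocks each carry an 18-bit tag: 18432 bits = 2304 bytes of tag storage on top of the 64 KiB of data.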
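The break-even miss rate in Q2 can also be checked numerically. The sketch below follows the answer's model: time is measured in cache-read units, a hit costs 1 unit, and a miss costs the hit check plus a full memory read (10 + 1 units).

```python
def avg_access_time(miss_rate, mem_over_cache=10):
    """Average access time in cache-read units: a hit costs 1 unit,
    a miss costs the hit check plus a memory read."""
    return (1 - miss_rate) * 1 + miss_rate * (mem_over_cache + 1)

# Break-even: solve (1 - r) + 11r = 10  ->  10r = 9  ->  r = 0.9.
r = (10 - 1) / 10
print(r)  # 0.9
# At r = 0.9 the average access time equals the plain memory
# read time of 10 units, so the cache buys nothing.
print(avg_access_time(r))
```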

This note was uploaded on 04/12/2010 for the course EE M116C, taught by Professor Puneet Gupta during the Fall '08 term at UCLA.
