Direct Mapping
Associative memory is expensive compared to RAM. In the general case there are 2^k words in cache memory and 2^n words in main memory (in our case, k = 9, n = 15). The n-bit memory address is divided into two fields: k bits for the index and n - k bits for the tag. The direct-mapping cache organization uses the n-bit address to access main memory and the k-bit index to access the cache.
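The address split and cache lookup above can be sketched in Python. This is a minimal illustration using the slide's parameters (n = 15, k = 9); the function and variable names are illustrative, not from any particular simulator.

```python
# Direct-mapped cache sketch, assuming the slide's sizes:
# n = 15 address bits, k = 9 index bits, tag = n - k = 6 bits.
N_BITS = 15                    # main memory: 2^15 words
K_BITS = 9                     # cache: 2^9 = 512 words
TAG_BITS = N_BITS - K_BITS     # 6 tag bits

def split_address(addr):
    """Split an n-bit address into (tag, index) fields."""
    index = addr & ((1 << K_BITS) - 1)   # low k bits select the cache word
    tag = addr >> K_BITS                 # remaining n - k bits are the tag
    return tag, index

# One (tag, data) pair per index; None means the slot is empty.
dm_cache = [None] * (1 << K_BITS)

def dm_read(addr, main_memory):
    """Return (word, 'hit'/'miss'), updating the cache on a miss."""
    tag, index = split_address(addr)
    entry = dm_cache[index]
    if entry is not None and entry[0] == tag:
        return entry[1], "hit"
    data = main_memory[addr]             # miss: access main memory
    dm_cache[index] = (tag, data)        # the new pair displaces the old one
    return data, "miss"
```

Note that two addresses sharing an index but differing in tag evict each other here, which is exactly the direct-mapping drawback discussed later in the set-associative slides.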
Direct Mapping: the internal organization of the words in the cache memory.
Associative Mapping
The fastest and most flexible cache organization uses an associative memory. The associative memory stores both the address and the data of the memory word; this permits any location in the cache to store any word from main memory. The 15-bit address value is shown as a five-digit octal number, and its corresponding 12-bit word is shown as a four-digit octal number.
Associative Mapping (figure)
Associative Mapping
A CPU address of 15 bits is placed in the argument register, and the associative memory is searched for a matching address. If the address is found, the corresponding 12-bit data word is read and sent to the CPU; if not, main memory is accessed for the word. If the cache is full, an address-data pair must be displaced to make room for a pair that is needed and not presently in the cache.
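The search-then-displace procedure above can be sketched as follows. The capacity and the FIFO replacement policy are illustrative assumptions; the slides only say an address-data pair "must be displaced" without naming a policy.

```python
from collections import OrderedDict

# Fully associative cache sketch: every entry stores both the 15-bit
# address and its 12-bit data word, so any memory word can occupy any
# cache location.  CAPACITY and FIFO eviction are assumed, not from
# the slides.
CAPACITY = 512

assoc_cache = OrderedDict()   # address -> data, kept in insertion order

def assoc_read(addr, main_memory):
    """Search all entries for addr; on a miss, fetch and maybe displace."""
    if addr in assoc_cache:                  # associative search on the address
        return assoc_cache[addr], "hit"
    data = main_memory[addr]                 # miss: access main memory
    if len(assoc_cache) >= CAPACITY:
        assoc_cache.popitem(last=False)      # cache full: displace oldest pair
    assoc_cache[addr] = data
    return data, "miss"
```

Because the match is on the full address rather than an index field, no two addresses ever conflict until the cache is actually full, which is what makes this organization the most flexible (and the most expensive).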
Set-Associative Mapping
The disadvantage of direct mapping is that two words with the same index in their addresses but with different tag values cannot reside in cache memory at the same time. Set-associative mapping improves on direct mapping in that each word of cache can store two or more words of memory under the same index address.
Set-Associative Mapping (figure)
Set-Associative Mapping
In the slide, each index address refers to two data words and their associated tags. Each tag requires 6 bits and each data word has 12 bits, so the word length is 2 * (6 + 12) = 36 bits.
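A two-way lookup with the slide's sizes can be sketched like this. As before, the FIFO displacement within a set is an assumed policy, not one the slides specify.

```python
# Two-way set-associative cache sketch, assuming the slide's sizes:
# 9 index bits, 6-bit tags, 12-bit data words, so each cache word
# holds two (tag, data) pairs = 2 * (6 + 12) = 36 bits.
K_BITS, WAYS = 9, 2

# Each set is a list of up to WAYS (tag, data) pairs.
sa_sets = [[] for _ in range(1 << K_BITS)]

def sa_read(addr, main_memory):
    """Return (word, 'hit'/'miss') for a two-way set-associative cache."""
    index = addr & ((1 << K_BITS) - 1)
    tag = addr >> K_BITS
    ways = sa_sets[index]
    for t, d in ways:                        # compare against both tags in the set
        if t == tag:
            return d, "hit"
    data = main_memory[addr]                 # miss: access main memory
    if len(ways) >= WAYS:
        ways.pop(0)                          # set full: displace oldest pair
    ways.append((tag, data))
    return data, "miss"
```

Unlike the direct-mapped sketch, two addresses with the same index but different tags can now reside in the cache simultaneously; a conflict only arises when a third tag maps to the same set.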
Number of Caches
Single vs. multilevel caches: originally, the typical system had a single cache. Multilevel designs use two or three caches, some on the same chip as the processor: in a two-level cache, the internal cache is designated level 1 (L1) and the external cache level 2 (L2).
Unified vs. split caches
Unified: a single cache stores references to both data and instructions; it achieves a higher hit rate than split caches. Split: one cache is dedicated to instructions and one to data; this eliminates contention for the cache between the instruction fetch/decode unit and the execution unit.
Example: Pentium 4, multilevel and split caches
Example: ARM, single-level and unified cache