L3-4_cache_2010
Computer Architecture 2010 – Caches 1
Lihu Rappoport and Adi Yoaz
Computer Architecture: Cache Memory
Computer Architecture 2010 – Caches 2
Processor – Memory Gap
[Figure: CPU vs. DRAM performance, 1980–2000, on a log scale (1 to 1000). The processor–memory performance gap grows about 50% per year.]
Computer Architecture 2010 – Caches 3
Memory Trade-Offs
Large (dense) memories are slow.
Fast memories are small, expensive, and consume high power.
Goal: give the processor the illusion of a memory that is large (dense), fast, low-power, and cheap.
Solution: a hierarchy of memories.
CPU -> L1 Cache -> L2 Cache -> L3 Cache -> Memory (DRAM)
Moving away from the CPU along the hierarchy:
  Speed: fastest -> slowest
  Size:  smallest -> biggest
  Cost:  highest -> lowest
  Power: highest -> lowest
Computer Architecture 2010 – Caches 4
Why Hierarchy Works
Temporal Locality (Locality in Time): if an item is referenced, it will tend to be referenced again soon.
  Example: code and variables in loops.
  => Keep recently accessed data closer to the processor.
Spatial Locality (Locality in Space): if an item is referenced, nearby items tend to be referenced soon.
  Example: scanning an array.
  => Move contiguous blocks closer to the processor.
Locality + smaller hardware is faster + Amdahl's law => memory hierarchy.
Computer Architecture 2010 – Caches 5
Memory Hierarchy: Terminology
For each memory level, define the following:
Hit: the data is found in that memory level.
Hit Rate: the fraction of accesses found in that level.
Hit Time: the time to access that level, including the time to determine hit/miss.
Miss: the data must be retrieved from the next level.
Miss Rate = 1 – Hit Rate.
Miss Penalty: the time to replace a block in the upper level + the time to deliver the block to the processor.
Average memory-access time:
  t_effective = (Hit Time × Hit Rate) + (Miss Time × Miss Rate)
              = (Hit Time × Hit Rate) + (Miss Time × (1 – Hit Rate))
Computer Architecture 2010 – Caches 6
Effective Memory Access Time
A cache holds a subset of the memory – hopefully the subset being used now.
Effective memory access time:
  t_effective = (t_cache × Hit Rate) + (t_mem × (1 – Hit Rate))
t_mem includes the time it takes to detect a cache miss.
Example: assume t_cache = 10 nsec and t_mem = 100 nsec.
  Hit Rate (%)   t_effective (nsec)
      0              100
     50               55
     90               19
     99               10.9
     99.9             10.1