3. Lecture 19 Basics of Cache

The document discusses cache memory, its structure, and how it operates within the memory hierarchy, particularly focusing on direct-mapped caches. It explains the importance of tags, valid bits, and cache misses, as well as different write strategies like write-through and write-back. Additionally, it covers measuring cache performance, average access time, and the impact of cache behavior on overall system performance.

Uploaded by

Shibly Sarkar

§5.3 The Basics of Caches


Cache Memory
 Cache memory
 The level of the memory hierarchy closest to the CPU
 Given accesses X1, …, Xn–1, Xn
 How do we know if the data is present?
 Where do we look?
FIGURE 5.7 The cache just before and just after a reference to a word Xn that is not initially in the cache. This reference causes a miss that forces the cache to fetch Xn from memory and insert it into the cache.

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 17


Direct Mapped Cache
Direct-mapped cache: a cache structure in which each memory location is mapped to exactly one location in the cache.
 Direct mapped: only one choice
 Location determined by address
 (Block address) modulo (#Blocks in cache)
 #Blocks is a power of 2
 Use low-order address bits
FIGURE 5.8 A direct-mapped cache with eight entries showing the addresses of memory words between 0 and 31 that map to the same cache locations. Because there are eight words in the cache, an address X maps to the direct-mapped cache word X modulo 8. That is, the low-order log2(8) = 3 bits are used as the cache index.

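The mapping rule above can be sketched in a few lines of Python (an illustrative sketch, not from the lecture; the constant and function names are mine):

```python
# Index selection in a direct-mapped cache: (block address) modulo
# (#blocks in cache), i.e. the low-order log2(#blocks) address bits.
# Sketch for an 8-block, 1-word/block cache.
NUM_BLOCKS = 8  # must be a power of 2

def cache_index(block_address):
    return block_address % NUM_BLOCKS

def cache_index_bits(block_address):
    # Same result via bit masking, since NUM_BLOCKS is a power of 2.
    return block_address & (NUM_BLOCKS - 1)

# Word addresses 1, 9, 17, 25 all land in cache slot 1.
for addr in (1, 9, 17, 25):
    assert cache_index(addr) == cache_index_bits(addr) == 1
```

The equivalence of the modulo and the bit-mask forms is exactly why the slide requires #Blocks to be a power of 2.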


Tags and Valid Bits
 How do we know which particular block is
stored in a cache location?
 Store block address as well as the data
 Actually, only need the high-order bits
 Called the tag
 What if there is no data in a location?
 Valid bit: 1 = present, 0 = not present
 Initially 0



Cache Example
 8 blocks, 1 word/block, direct mapped
 A sequence of eight memory references to an empty eight-block cache: 22, 26, 22, 26, 16, 3, 16, 18
 Initial state

Index V Tag Data


000 N
001 N
010 N
011 N
100 N
101 N
110 N
111 N



Cache Example
Word addr Binary addr Hit/miss Cache block
22 10 110 Miss 110

Index V Tag Data


000 N
001 N
010 N
011 N
100 N
101 N
110 Y 10 Mem[10110]
111 N



Cache Example
Word addr Binary addr Hit/miss Cache block
26 11 010 Miss 010

Index V Tag Data


000 N
001 N
010 Y 11 Mem[11010]
011 N
100 N
101 N
110 Y 10 Mem[10110]
111 N



Cache Example
Word addr Binary addr Hit/miss Cache block
22 10 110 Hit 110
26 11 010 Hit 010

Index V Tag Data


000 N
001 N
010 Y 11 Mem[11010]
011 N
100 N
101 N
110 Y 10 Mem[10110]
111 N



Cache Example
Word addr Binary addr Hit/miss Cache block
16 10 000 Miss 000
3 00 011 Miss 011
16 10 000 Hit 000

Index V Tag Data


000 Y 10 Mem[10000]
001 N
010 Y 11 Mem[11010]
011 Y 00 Mem[00011]
100 N
101 N
110 Y 10 Mem[10110]
111 N



Cache Example
Word addr Binary addr Hit/miss Cache block
18 10 010 Miss 010

Index V Tag Data


000 Y 10 Mem[10000]
001 N
010 Y 10 Mem[10010]
011 Y 00 Mem[00011]
100 N
101 N
110 Y 10 Mem[10110]
111 N

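The whole worked example above can be replayed with a small simulator (a sketch under the slides' parameters: 8 blocks, 1 word/block; the function and variable names are mine):

```python
# Minimal direct-mapped cache simulator replaying the slides'
# reference sequence of word addresses.
NUM_BLOCKS = 8

def simulate(refs):
    valid = [False] * NUM_BLOCKS   # valid bits, initially 0
    tag = [None] * NUM_BLOCKS      # stored high-order address bits
    outcomes = []
    for addr in refs:
        index = addr % NUM_BLOCKS  # low-order 3 bits
        t = addr // NUM_BLOCKS     # high-order bits = tag
        if valid[index] and tag[index] == t:
            outcomes.append("Hit")
        else:                      # miss: fetch block, set valid, update tag
            outcomes.append("Miss")
            valid[index], tag[index] = True, t
    return outcomes

print(simulate([22, 26, 22, 26, 16, 3, 16, 18]))
# → ['Miss', 'Miss', 'Hit', 'Hit', 'Miss', 'Miss', 'Hit', 'Miss']
```

Note the final reference (18) misses even though slot 010 is valid: its stored tag is 11 (from address 26), not 10, so the block is replaced, matching the last table above.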


Address Subdivision



Example: Larger Block Size
 64 blocks, 16 bytes/block
 To what block number does address 1200 map?
 Block address = Byte address / Bytes per block
 Block address = 1200/16 = 75
 Block number = 75 modulo 64 = 11

Address fields (32-bit address):
Tag: bits 31–10 (22 bits)
Index: bits 9–4 (6 bits)
Offset: bits 3–0 (4 bits)

In fact, this block contains all bytes with addresses between 1200 and 1215

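The address breakdown can be checked numerically (a sketch; the helper name `fields` is mine, not from the slides):

```python
# Splitting a byte address into cache fields for a 64-block,
# 16-byte/block direct-mapped cache.
BYTES_PER_BLOCK = 16   # 4 offset bits
NUM_BLOCKS = 64        # 6 index bits

def fields(byte_address):
    block_address = byte_address // BYTES_PER_BLOCK
    index = block_address % NUM_BLOCKS   # cache block number
    tag = block_address // NUM_BLOCKS    # remaining high-order bits
    return tag, index

tag, index = fields(1200)
print(1200 // 16, index)   # → 75 11  (block address, block number)
```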


Block Size Considerations
 Larger blocks should reduce miss rate
 Due to spatial locality
 But in a fixed-size cache
 Larger blocks ⇒ fewer of them
 More competition ⇒ increased miss rate
 Larger blocks ⇒ pollution
 Larger miss penalty
 Can override benefit of reduced miss rate
 Early restart and critical-word-first can help





Cache Misses
Cache miss: a request for data from the cache that cannot be filled because the data is not present in the cache.
 On cache hit, CPU proceeds normally
 On cache miss
 Stall the CPU pipeline
 Fetch block from next level of hierarchy
 Instruction cache miss
 Restart instruction fetch
 Data cache miss
 Complete data access



Write-Through
 On data-write hit, could just update the block in
cache
 But then cache and memory would be inconsistent
 Write through: also update memory
 But makes writes take longer
 e.g., if base CPI = 1, 10% of instructions are stores,
write to memory takes 100 cycles
 Effective CPI = 1 + 0.1×100 = 11
 Solution: write buffer
 Holds data waiting to be written to memory
 CPU continues immediately
 Only stalls on write if write buffer is already full

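The effective-CPI arithmetic on this slide can be reproduced directly (a sketch with the slide's numbers; the variable names are mine):

```python
# Effective CPI under write-through with no write buffer:
# every store stalls for the full memory write.
base_cpi = 1
store_fraction = 0.10   # 10% of instructions are stores
write_cycles = 100      # cycles per memory write

effective_cpi = base_cpi + store_fraction * write_cycles
print(round(effective_cpi, 2))  # → 11.0
```

The write buffer removes this stall in the common case: the CPU pays the full cost only when the buffer is already full.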


Write-Back
 Alternative: On data-write hit, just update
the block in cache
 Keep track of whether each block is dirty
 When a dirty block is replaced
 Write it back to memory
 Can use a write buffer to allow replacing block
to be read first



Example: Intrinsity FastMATH
 Embedded MIPS processor
 12-stage pipeline
 Instruction and data access on each cycle
 Split cache: separate I-cache and D-cache
 Each 16KB: 256 blocks × 16 words/block
 D-cache: write-through or write-back
 SPEC2000 miss rates
 I-cache: 0.4%
 D-cache: 11.4%
 Weighted average: 3.2%





Main Memory Supporting Caches
 Use DRAMs for main memory
 Fixed width (e.g., 1 word)
 Connected by fixed-width clocked bus
 Bus clock is typically slower than CPU clock
 Example cache block read
 1 bus cycle for address transfer
 15 bus cycles per DRAM access
 1 bus cycle per data transfer
 For 4-word block, 1-word-wide DRAM
 Miss penalty = 1 + 4×15 + 4×1 = 65 bus cycles
 Bandwidth = 16 bytes / 65 cycles = 0.25 B/cycle

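The miss-penalty arithmetic above can be checked as follows (a sketch using the slide's cycle counts; the variable names are mine):

```python
# Cache-block read cost over a 1-word-wide bus and 1-word-wide DRAM.
addr_cycles = 1        # one bus cycle to send the address
dram_cycles = 15       # bus cycles per DRAM access
transfer_cycles = 1    # bus cycles per word transferred
words_per_block = 4

miss_penalty = addr_cycles + words_per_block * (dram_cycles + transfer_cycles)
bandwidth = 16 / miss_penalty   # 16-byte block, in bytes per bus cycle

print(miss_penalty)             # → 65
print(round(bandwidth, 2))      # → 0.25
```

Wider or interleaved memory organizations reduce this penalty by overlapping the per-word DRAM accesses.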


§5.4 Measuring and Improving Cache Performance

Measuring Cache Performance
 Components of CPU time
 Program execution cycles
 Includes cache hit time
 Memory stall cycles
 Mainly from cache misses
 With simplifying assumptions:

Memory stall cycles = (Memory accesses / Program) × Miss rate × Miss penalty
                    = (Instructions / Program) × (Misses / Instruction) × Miss penalty
Cache Performance Example
 Given
 I-cache miss rate = 2%
 D-cache miss rate = 4%
 Miss penalty = 100 cycles
 Base CPI (ideal cache) = 2
 Loads and stores are 36% of instructions
 Miss cycles per instruction
 I-cache: 0.02 × 100 = 2
 D-cache: 0.36 × 0.04 × 100 = 1.44
 Actual CPI = 2 + 2 + 1.44 = 5.44
 Ideal CPU is 5.44/2 = 2.72 times faster

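The CPI calculation above can be reproduced step by step (a sketch with the slide's parameters; the variable names are mine):

```python
# CPI including I-cache and D-cache miss stalls.
base_cpi = 2
icache_miss_rate = 0.02
dcache_miss_rate = 0.04
miss_penalty = 100
load_store_fraction = 0.36

i_stall = icache_miss_rate * miss_penalty                        # per instruction
d_stall = load_store_fraction * dcache_miss_rate * miss_penalty  # per instruction
actual_cpi = base_cpi + i_stall + d_stall

print(round(actual_cpi, 2))             # → 5.44
print(round(actual_cpi / base_cpi, 2))  # → 2.72
```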


Average Access Time
 Hit time is also important for performance
 Average memory access time (AMAT)
 AMAT = Hit time + Miss rate × Miss penalty
 Example
 CPU with 1ns clock, hit time = 1 cycle, miss
penalty = 20 cycles, I-cache miss rate = 5%
 AMAT = 1 + 0.05 × 20 = 2ns
 2 cycles per instruction

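The AMAT formula with the slide's numbers (a sketch; variable names are mine):

```python
# AMAT = Hit time + Miss rate × Miss penalty
hit_time = 1        # cycle (1 ns clock)
miss_rate = 0.05    # I-cache miss rate
miss_penalty = 20   # cycles

amat = hit_time + miss_rate * miss_penalty
print(round(amat, 2))  # → 2.0 cycles, i.e. 2 ns at a 1 ns clock
```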


Performance Summary
 When CPU performance increases
 Miss penalty becomes more significant
 Decreasing base CPI
 Greater proportion of time spent on memory stalls
 Increasing clock rate
 Memory stalls account for more CPU cycles
 Can’t neglect cache behavior when evaluating system performance

