Computer Organization and Design, 5th Edition
The Hardware/Software Interface

Chapter 5
Large and Fast: Exploiting
Memory Hierarchy
§5.1 Introduction
Principle of Locality
 Programs access a small proportion of
their address space at any time
 Temporal locality
 Items accessed recently are likely to be
accessed again soon
 e.g., instructions in a loop, induction variables
 Spatial locality
 Items near those accessed recently are likely
to be accessed soon
 E.g., sequential instruction access, array data
Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 2
Taking Advantage of Locality
 Memory hierarchy
 Store everything on disk
 Copy recently accessed (and nearby)
items from disk to smaller DRAM memory
 Main memory
 Copy more recently accessed (and
nearby) items from DRAM to smaller
SRAM memory
 Cache memory attached to CPU

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 3


Memory Hierarchy Levels
 Block (aka line): unit of copying
 May be multiple words
 If accessed data is present in
upper level
 Hit: access satisfied by upper level
 Hit ratio: hits/accesses
 If accessed data is absent
 Miss: block copied from lower level
 Time taken: miss penalty
 Miss ratio: misses/accesses = 1 – hit ratio
 Then accessed data supplied from upper level
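For example, if 95 of 100 accesses are satisfied by the upper level, the hit ratio is 0.95 and the miss ratio is 1 – 0.95 = 0.05.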

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 4


§5.2 Memory Technologies
Memory Technology
 Static RAM (SRAM)
 0.5ns – 2.5ns, $2000 – $5000 per GB
 Dynamic RAM (DRAM)
 50ns – 70ns, $20 – $75 per GB
 Magnetic disk
 5ms – 20ms, $0.20 – $2 per GB
 Ideal memory
 Access time of SRAM
 Capacity and cost/GB of disk

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 5


§6.4 Flash Storage
Flash Storage
 Nonvolatile semiconductor storage
 100× – 1000× faster than disk
 Smaller, lower power, more robust
 But more $/GB (between disk and DRAM)

Chapter 6 — Storage and Other I/O Topics — 11


Flash Types
 NOR flash: bit cell like a NOR gate
 Random read/write access
 Used for instruction memory in embedded systems
 NAND flash: bit cell like a NAND gate
 Denser (bits/area), but block-at-a-time access
 Cheaper per GB
 Used for USB keys, media storage, …
 Flash bits wear out after thousands of accesses
 Not suitable for direct RAM or disk replacement
 Wear leveling: remap data to less used blocks

Chapter 6 — Storage and Other I/O Topics — 12


§6.3 Disk Storage
Disk Storage
 Nonvolatile, rotating magnetic storage

Chapter 6 — Storage and Other I/O Topics — 13


Disk Sectors and Access
 Each sector records
 Sector ID
 Data (512 bytes, 4096 bytes proposed)
 Error correcting code (ECC)

Used to hide defects and recording errors
 Synchronization fields and gaps
 Access to a sector involves
 Queuing delay if other accesses are pending
 Seek: move the heads
 Rotational latency
 Data transfer
 Controller overhead

Chapter 6 — Storage and Other I/O Topics — 14


§5.3 The Basics of Caches
Cache Memory
 Cache memory
 The level of the memory hierarchy closest to
the CPU
 Given accesses X1, …, Xn–1, Xn

 How do we know if
the data is present?
 Where do we look?

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 17


Direct Mapped Cache
 Location determined by address
 Direct mapped: only one choice
 (Block address) modulo (#Blocks in cache)

 #Blocks is a
power of 2
 Use low-order
address bits

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 18


Tags and Valid Bits
 How do we know which particular block is
stored in a cache location?
 Store block address as well as the data
 Actually, only need the high-order bits
 Called the tag
 What if there is no data in a location?
 Valid bit: 1 = present, 0 = not present
 Initially 0

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 19


Cache Example
 8 blocks, 1 word/block, direct mapped
 Initial state

Index V Tag Data


000 N
001 N
010 N
011 N
100 N
101 N
110 N
111 N

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 20


Cache Example
Word addr Binary addr Hit/miss Cache block
22 10 110 Miss 110

Index V Tag Data


000 N
001 N
010 N
011 N
100 N
101 N
110 Y 10 Mem[10110]
111 N

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 21


Cache Example
Word addr Binary addr Hit/miss Cache block
26 11 010 Miss 010

Index V Tag Data


000 N
001 N
010 Y 11 Mem[11010]
011 N
100 N
101 N
110 Y 10 Mem[10110]
111 N

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 22


Cache Example
Word addr Binary addr Hit/miss Cache block
22 10 110 Hit 110
26 11 010 Hit 010

Index V Tag Data


000 N
001 N
010 Y 11 Mem[11010]
011 N
100 N
101 N
110 Y 10 Mem[10110]
111 N

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 23


Cache Example
Word addr Binary addr Hit/miss Cache block
16 10 000 Miss 000
3 00 011 Miss 011
16 10 000 Hit 000

Index V Tag Data


000 Y 10 Mem[10000]
001 N
010 Y 11 Mem[11010]
011 Y 00 Mem[00011]
100 N
101 N
110 Y 10 Mem[10110]
111 N

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 24


Cache Example
Word addr Binary addr Hit/miss Cache block
18 10 010 Miss 010

Index V Tag Data


000 Y 10 Mem[10000]
001 N
010 Y 10 Mem[10010]
011 Y 00 Mem[00011]
100 N
101 N
110 Y 10 Mem[10110]
111 N
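The hit/miss behavior in these example slides can be reproduced with a small simulation. The following C sketch (illustrative only, not from the text) models the 8-block, 1-word-per-block direct-mapped cache and replays the word-address sequence used above.

#include <stdio.h>
#include <stdbool.h>

#define NUM_BLOCKS 8   /* 8 blocks, 1 word per block */

int main(void)
{
    bool valid[NUM_BLOCKS] = { false };
    unsigned tag[NUM_BLOCKS] = { 0 };
    unsigned addrs[] = { 22, 26, 22, 26, 16, 3, 16, 18 };  /* word addresses from the example */

    for (int i = 0; i < 8; i++) {
        unsigned a = addrs[i];
        unsigned index = a % NUM_BLOCKS;      /* low-order 3 bits select the block */
        unsigned t = a / NUM_BLOCKS;          /* remaining high-order bits form the tag */
        bool hit = valid[index] && tag[index] == t;
        printf("addr %2u -> index %u: %s\n", a, index, hit ? "hit" : "miss");
        if (!hit) {                           /* on a miss, fetch the block and record its tag */
            valid[index] = true;
            tag[index] = t;
        }
    }
    return 0;
}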

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 25


Address Subdivision

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 26


Example: Larger Block Size
 64 blocks, 16 bytes/block
 To what block number does address 1200
map?
 Block address = 1200/16 = 75
 Block number = 75 modulo 64 = 11
Tag = address bits 31–10 (22 bits), Index = bits 9–4 (6 bits), Offset = bits 3–0 (4 bits)
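As a sketch (not part of the text), the field extraction for this 64-block, 16-byte-block cache follows directly from the bit positions above; for address 1200 it yields block index 11.

#include <stdio.h>

int main(void)
{
    unsigned addr   = 1200;
    unsigned offset = addr & 0xF;          /* bits 3..0, 4-bit byte offset */
    unsigned index  = (addr >> 4) & 0x3F;  /* bits 9..4, 6-bit block index */
    unsigned tag    = addr >> 10;          /* bits 31..10, 22-bit tag      */
    printf("offset=%u index=%u tag=%u\n", offset, index, tag);  /* offset=0 index=11 tag=1 */
    return 0;
}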

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 27


Block Size Considerations
 Larger blocks should reduce miss rate
 Due to spatial locality
 But in a fixed-sized cache
 Larger blocks ⇒ fewer of them
 More competition ⇒ increased miss rate
 Larger blocks ⇒ pollution
 Larger miss penalty
 Can override benefit of reduced miss rate
 Early restart and critical-word-first can help

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 28


Cache Misses
 On cache hit, CPU proceeds normally
 On cache miss
 Stall the CPU pipeline
 Fetch block from next level of hierarchy
 Instruction cache miss

Restart instruction fetch
 Data cache miss

Complete data access

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 29


Write-Through
 On data-write hit, could just update the block in
cache
 But then cache and memory would be inconsistent
 Write through: also update memory
 But makes writes take longer
 e.g., if base CPI = 1, 10% of instructions are stores,
write to memory takes 100 cycles

Effective CPI = 1 + 0.1×100 = 11
 Solution: write buffer
 Holds data waiting to be written to memory
 CPU continues immediately

Only stalls on write if write buffer is already full

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 30


Write-Back
 Alternative: On data-write hit, just update
the block in cache
 Keep track of whether each block is dirty
 When a dirty block is replaced
 Write it back to memory
 Can use a write buffer to allow replacing block
to be read first

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 31


Main Memory Supporting Caches
 Use DRAMs for main memory
 Fixed width (e.g., 1 word)
 Connected by fixed-width clocked bus

Bus clock is typically slower than CPU clock
 Example cache block read
 1 bus cycle for address transfer
 15 bus cycles per DRAM access
 1 bus cycle per data transfer
 For 4-word block, 1-word-wide DRAM
 Miss penalty = 1 + 4×15 + 4×1 = 65 bus cycles
 Bandwidth = 16 bytes / 65 cycles = 0.25 B/cycle
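A minimal sketch of the arithmetic, using the bus-cycle counts given above:

#include <stdio.h>

int main(void)
{
    int words_per_block = 4;
    int addr_cycles     = 1;    /* 1 bus cycle for address transfer */
    int dram_cycles     = 15;   /* 15 bus cycles per DRAM access    */
    int transfer_cycles = 1;    /* 1 bus cycle per data transfer    */

    int miss_penalty = addr_cycles
                     + words_per_block * dram_cycles
                     + words_per_block * transfer_cycles;        /* 65 bus cycles            */
    double bandwidth = (words_per_block * 4.0) / miss_penalty;   /* 16 B / 65 ≈ 0.25 B/cycle */

    printf("miss penalty = %d bus cycles, bandwidth = %.2f B/cycle\n",
           miss_penalty, bandwidth);
    return 0;
}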

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 35


§5.4 Measuring and Improving Cache Performance
Measuring Cache Performance
 Components of CPU time
 Program execution cycles
 Includes cache hit time
 Memory stall cycles
 Mainly from cache misses
 With simplifying assumptions:
Memory stall cycles
= (Memory accesses / Program) × Miss rate × Miss penalty
= (Instructions / Program) × (Misses / Instruction) × Miss penalty
Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 36
Cache Performance Example
 Given
 I-cache miss rate = 2%
 D-cache miss rate = 4%
 Miss penalty = 100 cycles
 Base CPI (ideal cache) = 2
 Loads & stores are 36% of instructions
 Miss cycles per instruction
 I-cache: 0.02 × 100 = 2
 D-cache: 0.36 × 0.04 × 100 = 1.44
 Actual CPI = 2 + 2 + 1.44 = 5.44
 Ideal CPU is 5.44/2 = 2.72 times faster
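The same calculation as a small C sketch (values taken from the example above):

#include <stdio.h>

int main(void)
{
    double base_cpi      = 2.0;
    double icache_miss   = 0.02;   /* I-cache miss rate, one access per instruction */
    double dcache_miss   = 0.04;   /* D-cache miss rate                             */
    double ldst_fraction = 0.36;   /* loads & stores per instruction                */
    double miss_penalty  = 100.0;  /* cycles                                        */

    double stall_cpi  = icache_miss * miss_penalty
                      + ldst_fraction * dcache_miss * miss_penalty;   /* 2 + 1.44 */
    double actual_cpi = base_cpi + stall_cpi;                         /* 5.44     */

    printf("actual CPI = %.2f, slowdown vs. ideal = %.2fx\n",
           actual_cpi, actual_cpi / base_cpi);
    return 0;
}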

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 37


Average Access Time
 Hit time is also important for performance
 Average memory access time (AMAT)
 AMAT = Hit time + Miss rate × Miss penalty
 Example
 CPU with 1ns clock, hit time = 1 cycle, miss
penalty = 20 cycles, I-cache miss rate = 5%
 AMAT = 1 + 0.05 × 20 = 2ns

2 cycles per instruction

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 38


Performance Summary
 When CPU performance increases
 Miss penalty becomes more significant
 Decreasing base CPI
 Greater proportion of time spent on memory
stalls
 Increasing clock rate
 Memory stalls account for more CPU cycles
 Can’t neglect cache behavior when
evaluating system performance

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 39


Associative Caches
 Fully associative
 Allow a given block to go in any cache entry
 Requires all entries to be searched at once
 Comparator per entry (expensive)
 n-way set associative
 Each set contains n entries
 Block number determines which set

(Block number) modulo (#Sets in cache)
 Search all entries in a given set at once
 n comparators (less expensive)
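A software sketch of an n-way set-associative lookup (illustrative structure and names, not from the text); hardware compares all ways of the selected set in parallel, which the loop below models sequentially.

#include <stdbool.h>
#include <stdio.h>

#define NUM_SETS 256
#define NUM_WAYS 4    /* 4-way set associative */

struct line { bool valid; unsigned tag; /* data omitted */ };

static struct line cache[NUM_SETS][NUM_WAYS];

/* Look up a block; on a miss, fill the first invalid way (no replacement policy shown). */
bool access_block(unsigned block_number)
{
    unsigned set = block_number % NUM_SETS;   /* (block number) modulo (#sets) */
    unsigned tag = block_number / NUM_SETS;
    for (int w = 0; w < NUM_WAYS; w++)        /* hardware searches all ways at once */
        if (cache[set][w].valid && cache[set][w].tag == tag)
            return true;                      /* hit */
    for (int w = 0; w < NUM_WAYS; w++)
        if (!cache[set][w].valid) {
            cache[set][w].valid = true;
            cache[set][w].tag = tag;
            break;
        }
    return false;                             /* miss */
}

int main(void)
{
    printf("%s\n", access_block(300) ? "hit" : "miss");  /* miss: cold cache */
    printf("%s\n", access_block(300) ? "hit" : "miss");  /* hit              */
    return 0;
}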
Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 40
Associative Cache Example

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 41


Spectrum of Associativity
 For a cache with 8 entries

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 42


Associativity Example
 Compare 4-block caches
 Direct mapped, 2-way set associative,
fully associative
 Block access sequence: 0, 8, 0, 6, 8
 Direct mapped
Block address  Cache index  Hit/miss  Cache content after access (blocks 0–3)
0              0            miss      [0]=Mem[0]
8              0            miss      [0]=Mem[8]
0              0            miss      [0]=Mem[0]
6              2            miss      [0]=Mem[0], [2]=Mem[6]
8              0            miss      [0]=Mem[8], [2]=Mem[6]

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 43


Associativity Example
 2-way set associative
Block address  Cache index  Hit/miss  Cache content after access
0              0            miss      Set 0: Mem[0]
8              0            miss      Set 0: Mem[0], Mem[8]
0              0            hit       Set 0: Mem[0], Mem[8]
6              0            miss      Set 0: Mem[0], Mem[6]
8              0            miss      Set 0: Mem[8], Mem[6]

 Fully associative
Block address  Hit/miss  Cache content after access
0              miss      Mem[0]
8              miss      Mem[0], Mem[8]
0              hit       Mem[0], Mem[8]
6              miss      Mem[0], Mem[8], Mem[6]
8              hit       Mem[0], Mem[8], Mem[6]

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 44


How Much Associativity
 Increased associativity decreases miss
rate
 But with diminishing returns
 Simulation of a system with 64KB
D-cache, 16-word blocks, SPEC2000
 1-way: 10.3%
 2-way: 8.6%
 4-way: 8.3%
 8-way: 8.1%

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 45


Set Associative Cache Organization

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 46


Replacement Policy
 Direct mapped: no choice
 Set associative
 Prefer non-valid entry, if there is one
 Otherwise, choose among entries in the set
 Least-recently used (LRU)
 Choose the one unused for the longest time
 Simple for 2-way, manageable for 4-way, too hard beyond that
 Random
 Gives approximately the same performance as LRU for high associativity
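For 2-way set associativity, LRU state is just one bit per set; a minimal sketch with hypothetical names, meant to sit alongside a set-associative cache structure like the one sketched earlier:

#define NUM_SETS 256
static unsigned char lru[NUM_SETS];     /* per set: which way (0 or 1) to evict next */

void touch(unsigned set, unsigned way)  /* call on every hit or fill                  */
{
    lru[set] = 1u - way;                /* the other way is now least recently used   */
}

unsigned victim(unsigned set)           /* way to replace on a miss                   */
{
    return lru[set];
}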

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 47


Multilevel Caches
 Primary cache attached to CPU
 Small, but fast
 Level-2 cache services misses from
primary cache
 Larger, slower, but still faster than main
memory
 Main memory services L-2 cache misses
 Some high-end systems include L-3 cache

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 48


Multilevel Cache Example
 Given
 CPU base CPI = 1, clock rate = 4GHz
 Miss rate/instruction = 2%
 Main memory access time = 100ns
 With just primary cache
 Miss penalty = 100ns/0.25ns = 400 cycles
 Effective CPI = 1 + 0.02 × 400 = 9

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 49


Example (cont.)
 Now add L-2 cache
 Access time = 5ns
 Global miss rate to main memory = 0.5%
 Primary miss with L-2 hit
 Penalty = 5ns/0.25ns = 20 cycles
 Primary miss with L-2 miss
 Extra penalty = 400 cycles
 CPI = 1 + 0.02 × 20 + 0.005 × 400 = 3.4
 Performance ratio = 9/3.4 = 2.6
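The example as a small C sketch (assuming, per the calculation above, a 400-cycle main-memory penalty):

#include <stdio.h>

int main(void)
{
    double base_cpi       = 1.0;
    double l1_miss_rate   = 0.02;    /* misses per instruction         */
    double global_miss    = 0.005;   /* L-1 and L-2 both miss          */
    double l2_hit_penalty = 20.0;    /* 5 ns / 0.25 ns clock           */
    double mem_penalty    = 400.0;   /* 100 ns / 0.25 ns clock         */

    double cpi_l1_only = base_cpi + l1_miss_rate * mem_penalty;           /* 9.0 */
    double cpi_l1_l2   = base_cpi + l1_miss_rate * l2_hit_penalty
                                  + global_miss  * mem_penalty;           /* 3.4 */

    printf("CPI with L1 only = %.1f, with L1+L2 = %.1f, speedup = %.1fx\n",
           cpi_l1_only, cpi_l1_l2, cpi_l1_only / cpi_l1_l2);
    return 0;
}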
Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 50
Multilevel Cache Considerations
 Primary cache
 Focus on minimal hit time
 L-2 cache
 Focus on low miss rate to avoid main memory
access
 Hit time has less overall impact
 Results
 L-1 cache usually smaller than a single cache
 L-1 block size smaller than L-2 block size

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 51


Interactions with Advanced CPUs
 Out-of-order CPUs can execute
instructions during cache miss
 Pending store stays in load/store unit
 Dependent instructions wait in reservation
stations

Independent instructions continue
 Effect of miss depends on program data
flow
 Much harder to analyse
 Use system simulation
Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 52
Interactions with Software
 Misses depend on
memory access
patterns
 Algorithm behavior
 Compiler optimization for memory access

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 53


Software Optimization via Blocking
 Goal: maximize accesses to data before it
is replaced
 Consider inner loops of DGEMM:

for (int j = 0; j < n; ++j)
{
    double cij = C[i+j*n];              /* cij = C[i][j]; i comes from the enclosing loop */
    for (int k = 0; k < n; k++)
        cij += A[i+k*n] * B[k+j*n];     /* cij += A[i][k] * B[k][j] */
    C[i+j*n] = cij;                     /* C[i][j] = cij */
}

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 54


DGEMM Access Pattern
 C, A, and B arrays (figure: older accesses vs. new accesses for each array)

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 55


Cache Blocked DGEMM
#define BLOCKSIZE 32
void do_block (int n, int si, int sj, int sk,
               double *A, double *B, double *C)
{
    for (int i = si; i < si+BLOCKSIZE; ++i)
        for (int j = sj; j < sj+BLOCKSIZE; ++j)
        {
            double cij = C[i+j*n];                 /* cij = C[i][j] */
            for (int k = sk; k < sk+BLOCKSIZE; k++)
                cij += A[i+k*n] * B[k+j*n];        /* cij += A[i][k] * B[k][j] */
            C[i+j*n] = cij;                        /* C[i][j] = cij */
        }
}

void dgemm (int n, double* A, double* B, double* C)
{
    for (int sj = 0; sj < n; sj += BLOCKSIZE)
        for (int si = 0; si < n; si += BLOCKSIZE)
            for (int sk = 0; sk < n; sk += BLOCKSIZE)
                do_block(n, si, sj, sk, A, B, C);
}
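A minimal driver sketch (not from the text) for the blocked dgemm above; it assumes n is a multiple of BLOCKSIZE, that matrices are stored column-major as X[i + j*n], and that the dgemm/do_block definitions above are in the same file.

#include <stdlib.h>

int main(void)
{
    int n = 512;                                  /* multiple of BLOCKSIZE */
    double *A = calloc((size_t)n * n, sizeof *A);
    double *B = calloc((size_t)n * n, sizeof *B);
    double *C = calloc((size_t)n * n, sizeof *C); /* zeroed, so C ends up holding A * B */

    /* ... initialize A and B ... */
    dgemm(n, A, B, C);                            /* blocked matrix multiply */

    free(A); free(B); free(C);
    return 0;
}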

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 56


§5.5 Dependable Memory Hierarchy
Dependability
 The service-state diagram alternates between service accomplishment (service delivered as specified) and service interruption (deviation from specified service), via failure and restoration transitions
 Fault: failure of a component
 May or may not lead to system failure

Chapter 6 — Storage and Other I/O Topics — 58


Dependability Measures
 Reliability: mean time to failure (MTTF)
 Service interruption: mean time to repair (MTTR)
 Mean time between failures
 MTBF = MTTF + MTTR
 Availability = MTTF / (MTTF + MTTR)
 Improving Availability
 Increase MTTF: fault avoidance, fault tolerance, fault
forecasting
 Reduce MTTR: improved tools and processes for
diagnosis and repair
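For example, if MTTF = 999 hours and MTTR = 1 hour, then MTBF = 1000 hours and availability = 999/1000 = 99.9%.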

Chapter 6 — Storage and Other I/O Topics — 59


The Hamming SEC Code
 Hamming distance
 Number of bits that are different between two
bit patterns
 Minimum distance = 2 provides single bit
error detection
 E.g. parity code
 Minimum distance = 3 provides single
error correction, 2 bit error detection

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 60


Encoding SEC
 To calculate Hamming code:
 Number bits from 1 on the left
 All bit positions that are a power of 2 are parity bits
 Each parity bit checks certain data bits: parity bit p covers every position whose number has bit p set in binary (e.g., parity bit 1 checks positions 1, 3, 5, 7, …; parity bit 2 checks 2, 3, 6, 7, …; parity bit 4 checks 4–7, 12–15, …)

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 61


Decoding SEC
 Value of parity bits indicates which bits are
in error
 Use numbering from encoding procedure
 E.g.
 Parity bits = 0000 indicates no error
 Parity bits = 1010 indicates bit 10 was flipped
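A C sketch of SEC encoding and syndrome decoding using the numbering rule above (hypothetical layout: 8 data bits in the non-power-of-2 positions of a 12-bit codeword, even parity):

#include <stdio.h>

#define NBITS 12   /* 8 data bits + 4 parity bits, positions 1..12 */

static int bit[NBITS + 1];   /* bit[p] holds the bit at position p (numbered from 1) */

/* Set each parity bit (positions 1, 2, 4, 8) so the positions it covers have even parity. */
void encode(void)
{
    for (int p = 1; p <= NBITS; p <<= 1) {
        int parity = 0;
        for (int i = 1; i <= NBITS; i++)
            if ((i & p) && i != p)
                parity ^= bit[i];
        bit[p] = parity;
    }
}

/* Returns 0 if no error, otherwise the position of the flipped bit. */
int syndrome(void)
{
    int s = 0;
    for (int p = 1; p <= NBITS; p <<= 1) {
        int parity = 0;
        for (int i = 1; i <= NBITS; i++)
            if (i & p)
                parity ^= bit[i];
        if (parity)        /* this parity check fails */
            s |= p;
    }
    return s;
}

int main(void)
{
    /* data bits occupy the non-power-of-2 positions: 3, 5, 6, 7, 9, 10, 11, 12 */
    bit[3] = 1; bit[5] = 0; bit[6] = 1; bit[7] = 1;
    bit[9] = 0; bit[10] = 1; bit[11] = 0; bit[12] = 1;
    encode();
    bit[10] ^= 1;                              /* inject a single-bit error */
    printf("flipped bit = %d\n", syndrome());  /* prints 10                 */
    return 0;
}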

Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 62


SEC/DED Code
 Add an additional parity bit for the whole word
(pn)
 Make Hamming distance = 4
 Decoding:
 Let H = SEC parity bits
 H even, pn even, no error
 H odd, pn odd, correctable single bit error
 H even, pn odd, error in pn bit
 H odd, pn even, double error occurred
 Note: ECC DRAM uses SEC/DED with 8 bits
protecting each 64 bits
Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 63
