
Lecture-17 CH-05


COMPUTER ORGANIZATION AND DESIGN, 6th Edition
The Hardware/Software Interface

Chapter 5
Large and Fast: Exploiting Memory Hierarchy
§5.1 Introduction
Principle of Locality
◼ Programs access a small proportion of their address space at any time
◼ Temporal locality
◼ Items accessed recently are likely to be accessed again soon
◼ e.g., instructions in a loop, induction variables
◼ Spatial locality
◼ Items near those accessed recently are likely to be accessed soon
◼ e.g., sequential instruction access, array data
Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 2
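Both kinds of locality show up in even the simplest loop. The sketch below (my own illustration, not from the slides) lists the addresses a summation loop would touch, assuming word addressing and a made-up address for the accumulator:

```python
# A minimal illustration of temporal and spatial locality, using a
# hypothetical address trace for: for i in range(n): total += a[i]

def loop_trace(base, n):
    """Addresses touched by the loop; 'total' is assumed to live at
    address 0, and a[0] at word address 'base'."""
    trace = []
    for i in range(n):
        trace.append(base + i)  # spatial: a[i] sits right after a[i-1]
        trace.append(0)         # temporal: 'total' is reused every iteration
    return trace

print(loop_trace(100, 4))
# → [100, 0, 101, 0, 102, 0, 103, 0]
# addresses 100..103 appear once each, in sequence (spatial locality);
# address 0 recurs every iteration (temporal locality)
```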
Taking Advantage of Locality
◼ Memory hierarchy
◼ Store everything on disk
◼ Copy recently accessed (and nearby) items from disk to smaller DRAM memory
◼ Main memory
◼ Copy more recently accessed (and nearby) items from DRAM to smaller SRAM memory
◼ Cache memory attached to CPU



Memory Technology
◼ Static RAM (SRAM)
◼ 0.5ns – 2.5ns, $500 – $1000 per GB
◼ Dynamic RAM (DRAM)
◼ 50ns – 70ns, $3 – $6 per GB
◼ Magnetic disk
◼ 5ms – 20ms, $0.01 – $0.02 per GB
◼ Ideal memory
◼ Access time of SRAM
◼ Capacity and cost/GB of disk



Memory Hierarchy Levels
Memory hierarchy: a structure that uses multiple levels of memories; as the distance from the CPU increases, the size of the memories and the access time both increase.
◼ Block (aka line): unit of copying
◼ May be multiple words
◼ If accessed data is present in upper level
◼ Hit: access satisfied by upper level
◼ Hit ratio: hits/accesses
◼ If accessed data is absent
◼ Miss: block copied from lower level
◼ Time taken: miss penalty
◼ Miss ratio: misses/accesses = 1 – hit ratio
◼ Then accessed data supplied from upper level
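The hit and miss ratios above are simple fractions of the access count. A one-line sketch (names are my own, not from the slides):

```python
# Hit ratio = hits/accesses; miss ratio = misses/accesses = 1 - hit ratio.
def ratios(hits, accesses):
    hit_ratio = hits / accesses
    miss_ratio = 1 - hit_ratio  # equivalently, (accesses - hits) / accesses
    return hit_ratio, miss_ratio

print(ratios(3, 4))  # → (0.75, 0.25): 3 hits out of 4 accesses
```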



Memory Hierarchy Levels
As you go further from the CPU, capacity and latency increase:

Level                           Capacity   Latency
Registers                       1 KB       1 cycle
L1 data or instruction cache    32 KB      2 cycles
L2 cache                        2 MB       15 cycles
Memory                          1 GB       300 cycles
Disk                            80 GB      10M cycles
Important Terms
block: the minimum unit of information that can be either present or not present in the two-level hierarchy.

hit rate: the fraction of memory accesses found in a cache.

miss rate: the fraction of memory accesses not found in a level of the memory hierarchy.

hit time: the time required to access a level of the memory hierarchy, including the time needed to determine whether the access is a hit or a miss.

miss penalty: the time required to fetch a block into a level of the memory hierarchy from the lower level, including the time to access the block, transmit it from one level to the other, and insert it in the level that experienced the miss.
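These terms are commonly combined into a single figure of merit, the average memory access time. The formula is standard, but it is not on these slides, and the numbers below are made up for illustration:

```python
# Average memory access time (AMAT): on every access you pay the hit
# time; on the fraction that miss, you additionally pay the miss penalty.
def amat(hit_time, miss_rate, miss_penalty):
    return hit_time + miss_rate * miss_penalty

# Hypothetical cache: 1-cycle hit time, 5% miss rate, 100-cycle penalty.
print(amat(1, 0.05, 100))  # → 6.0 cycles on average
```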
§5.3 The Basics of Caches
Cache Memory
◼ Cache memory
◼ The level of the memory hierarchy closest to the CPU
◼ Given accesses X1, …, Xn–1, Xn
◼ How do we know if the data is present?
◼ Where do we look?



Direct Mapped Cache
◼ Location determined by address
◼ Direct mapped: only one choice
◼ (Block address) modulo (#Blocks in cache)

◼ #Blocks is a
power of 2
◼ Use low-order
address bits
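Because #Blocks is a power of 2, the modulo reduces to keeping the low-order bits of the block address. A quick check (helper names are my own):

```python
# For a power-of-two number of blocks, (block address) modulo (#blocks)
# equals the low-order log2(#blocks) bits of the block address.
def index_mod(block_addr, num_blocks):
    return block_addr % num_blocks

def index_bits(block_addr, num_blocks):
    return block_addr & (num_blocks - 1)  # mask keeps the low-order bits

for addr in (22, 26, 16, 3, 18):          # addresses from the example slides
    assert index_mod(addr, 8) == index_bits(addr, 8)

print(index_mod(22, 8))  # → 6: 22 = 0b10110, low 3 bits are 0b110
```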



Tags and Valid Bits
◼ How do we know which particular block is
stored in a cache location?
◼ Store block address as well as the data
◼ Actually, only need the high-order bits
◼ Called the tag
◼ What if there is no data in a location?
◼ Valid bit: 1 = present, 0 = not present
◼ Initially 0
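So the address splits into a tag (high-order bits, stored in the cache) and an index (low-order bits, selecting the cache location). A sketch for the 8-block cache used in the example that follows:

```python
# Split a word address into (tag, index) for a direct-mapped cache
# with 8 blocks, i.e., 3 index bits. Function name is my own.
def split(addr, index_bits=3):
    index = addr & ((1 << index_bits) - 1)  # low-order bits pick the block
    tag = addr >> index_bits                # high-order bits stored as the tag
    return tag, index

print(split(22))  # → (2, 6): 22 = 0b10110 → tag 0b10, index 0b110
```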



Cache Example
◼ 8 blocks, 1 word/block, direct mapped
◼ Initial state

Index V Tag Data


000 N
001 N
010 N
011 N
100 N
101 N
110 N
111 N



Cache Example
Word addr Binary addr Hit/miss Cache block
22 10 110 Miss 110

Index V Tag Data


000 N
001 N
010 N
011 N
100 N
101 N
110 Y 10 Mem[10110]
111 N



Cache Example
Word addr Binary addr Hit/miss Cache block
26 11 010 Miss 010

Index V Tag Data


000 N
001 N
010 Y 11 Mem[11010]
011 N
100 N
101 N
110 Y 10 Mem[10110]
111 N



Cache Example
Word addr Binary addr Hit/miss Cache block
22 10 110 Hit 110
26 11 010 Hit 010

Index V Tag Data


000 N
001 N
010 Y 11 Mem[11010]
011 N
100 N
101 N
110 Y 10 Mem[10110]
111 N



Cache Example
Word addr Binary addr Hit/miss Cache block
16 10 000 Miss 000
3 00 011 Miss 011
16 10 000 Hit 000

Index V Tag Data


000 Y 10 Mem[10000]
001 N
010 Y 11 Mem[11010]
011 Y 00 Mem[00011]
100 N
101 N
110 Y 10 Mem[10110]
111 N



Cache Example
Word addr Binary addr Hit/miss Cache block
18 10 010 Miss 010

Index V Tag Data


000 Y 10 Mem[10000]
001 N
010 Y 10 Mem[10010] (replaces Mem[11010])
011 Y 00 Mem[00011]
100 N
101 N
110 Y 10 Mem[10110]
111 N
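The whole example sequence can be replayed with a short direct-mapped cache simulator. This is my own sketch (8 one-word blocks, as in the slides); function and variable names are not from the textbook:

```python
# Direct-mapped cache simulator: 8 blocks, 1 word/block.
NUM_BLOCKS = 8

def simulate(trace):
    valid = [False] * NUM_BLOCKS  # valid bits, initially 0
    tags = [None] * NUM_BLOCKS    # stored tags (high-order address bits)
    results = []
    for addr in trace:
        index = addr % NUM_BLOCKS   # low-order bits select the block
        tag = addr // NUM_BLOCKS    # high-order bits identify it
        if valid[index] and tags[index] == tag:
            results.append("hit")
        else:                       # miss: fetch block from the lower level
            valid[index] = True
            tags[index] = tag       # any old block in this slot is replaced
            results.append("miss")
    return results

# Word-address sequence from the example slides.
print(simulate([22, 26, 22, 26, 16, 3, 16, 18]))
# → ['miss', 'miss', 'hit', 'hit', 'miss', 'miss', 'hit', 'miss']
```

The final miss (address 18) matches the last slide: 18 maps to index 010, which holds the block for address 26, so the old block is replaced.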



Address Subdivision
(Slide figure: the address is divided into tag, index, and byte-offset fields; the index selects a cache entry, and the stored tag is compared with the address tag to detect a hit.)



Block Size Considerations
◼ Larger blocks should reduce miss rate
◼ Due to spatial locality
◼ But in a fixed-sized cache
◼ Larger blocks  fewer of them
◼ More competition  increased miss rate
◼ Larger blocks  pollution
◼ Larger miss penalty
◼ Can override benefit of reduced miss rate

