Lecture-17 CH-05 1
The Hardware/Software Interface
Chapter 5
Large and Fast: Exploiting Memory Hierarchy
§5.1 Introduction
Principle of Locality
◼ Programs access a small proportion of
their address space at any time
◼ Temporal locality
◼ Items accessed recently are likely to be
accessed again soon
◼ e.g., instructions in a loop, induction variables
◼ Spatial locality
◼ Items near those accessed recently are likely
to be accessed soon
◼ e.g., sequential instruction access, array data
Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 2
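A single loop often exhibits both kinds of locality at once. The sketch below (illustrative, not from the slides) marks where each kind appears:

```python
def sum_array(a):
    total = 0            # 'total' is touched every iteration: temporal locality
    for i in range(len(a)):
        total += a[i]    # a[0], a[1], ... are adjacent in memory: spatial locality
    return total

print(sum_array(list(range(10))))  # 45
```

The loop instructions themselves are also fetched repeatedly, so instruction fetch shows temporal locality as well.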
Taking Advantage of Locality
◼ Memory hierarchy
◼ Store everything on disk
◼ Copy recently accessed (and nearby)
items from disk to smaller DRAM memory
◼ Main memory
◼ Copy more recently accessed (and
nearby) items from DRAM to smaller
SRAM memory
◼ Cache memory attached to CPU
A typical hierarchy (illustrative sizes and access times):

Level                          Size     Access time
Registers                      1 KB     1 cycle
L1 data or instruction cache   32 KB    2 cycles
L2 cache                       2 MB     15 cycles
Memory (DRAM)                  1 GB     300 cycles
Disk                           80 GB    10M cycles
Important Terms
block: The minimum unit of information that can be either present or
not present in the two-level hierarchy.
miss penalty: The time required to fetch a block into a level of the
memory hierarchy from the lower level, including the time to
access the block, transmit it from one level to the other, and
insert it in the level that experienced the miss.
§5.3 The Basics of Caches
Cache Memory
◼ Cache memory
◼ The level of the memory hierarchy closest to
the CPU
◼ Given accesses X1, …, Xn–1, Xn
◼ How do we know if
the data is present?
◼ Where do we look?
◼ #Blocks is a power of 2
◼ Use low-order address bits as the index
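Because the number of blocks is a power of 2, the index is just the block address modulo #blocks, which reduces to keeping the low-order address bits. A small sketch (function name assumed for illustration):

```python
def cache_index(block_address, num_blocks):
    # Requires num_blocks to be a power of 2; then
    # block_address % num_blocks == block_address & (num_blocks - 1),
    # i.e. the low-order log2(num_blocks) bits of the address.
    assert num_blocks > 0 and num_blocks & (num_blocks - 1) == 0
    return block_address & (num_blocks - 1)

# 8-block cache: block address 29 (0b11101) maps to index 5 (0b101)
print(cache_index(29, 8))  # 5
```

Different block addresses that share the same low-order bits (e.g. 5, 13, 21, 29 in an 8-block cache) map to the same index, which is why a tag is also needed to know which block is actually present.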