
Cache Memory Mapping Techniques

Dr. Edukondalu Chappidi



Levels of memory

• Register - A small, very fast storage location inside the CPU where data is held for immediate use. The most commonly used registers are the accumulator, program counter, address register, etc.
• Cache memory - Data is temporarily stored in the cache for faster access. The cache has a higher capacity but a longer access time compared to the registers.
• Main memory (RAM) - The memory the computer is currently working with. It is volatile memory, so data no longer stays in it when power is off.
• Secondary memory - External memory that is not as fast as main memory, but data stays in it permanently.



Memory Hierarchy Design



Cache Memory

• Cache memory is an extremely fast memory type that acts as a buffer between RAM and the CPU.
• Cache memory holds frequently requested data and instructions so that they are quickly available to the CPU when needed.
• Cache memory is costlier than main memory or disk memory but more economical than CPU registers.
• Cache memory is used to speed up and synchronize with a high-speed CPU.


Cache Memory Mapping
• Again, cache memory is a small and fast memory between the CPU and main memory
• Blocks of words have to be brought in and out of the cache memory continuously
• The performance of the cache memory mapping function is key to the speed
• There are a number of mapping techniques
  – Direct mapping
  – Associative mapping
  – Set-associative mapping
Direct Mapping Technique – No. 1
• Simplest way of mapping
• Main memory is divided into blocks
• Block j of the main memory is mapped onto block (j modulo 128) of the cache – consider a cache of 128 blocks of 16 words each
• Consider a memory of 64K words divided into 4096 blocks
  – Where should blocks 0, 128, 256, …, 3968 be mapped to?
  – Where should blocks 126, 254, 382, …, 4094 be mapped to?
• The main memory address is divided into a 5-bit Tag, a 7-bit Block, and a 4-bit Word field
[Figure: cache of 128 tagged blocks, Block 0 … Block 127]
Direct Mapping Technique (Continued)

• Mapping process
  – Use the tag to see if the desired word is in the cache
  – If there is no match, the block containing the required word must first be read from memory
  – For example: MOVE $A815, D0
    $A815 = 10101 0000001 0101
             Tag   Block #  Word
    a. Check if the cache has tag 10101 for block 1
       match -> hit; different -> miss, load the corresponding block
    b. Access word 5 of the block
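
The field split above can be checked with a small sketch (not from the slides) that decodes a 16-bit address into the 5-bit tag, 7-bit block and 4-bit word fields of this direct-mapped cache:

#include <stdio.h>
#include <stdint.h>

/* Illustrative sketch: decode a 16-bit main memory address into the
 * Tag(5) | Block(7) | Word(4) fields used by the 128-block cache above. */
int main(void) {
    uint16_t addr  = 0xA815;               /* the MOVE $A815 example        */
    unsigned word  = addr & 0xF;           /* low 4 bits: word within block */
    unsigned block = (addr >> 4) & 0x7F;   /* next 7 bits: cache block      */
    unsigned tag   = (addr >> 11) & 0x1F;  /* top 5 bits: tag               */
    printf("tag=%u block=%u word=%u\n", tag, block, word);
    /* prints tag=21 (10101), block=1 (0000001), word=5 (0101) */
    return 0;
}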
Direct Mapping Technique (Continued)
• Advantage
  – simplest replacement algorithm
• Disadvantage
  – not flexible
  – there is a contention problem even when the cache is not full
• For example, blocks 0 and 128 both map only to block 0 of the cache:
  – 0 modulo 128 = 0
  – 128 modulo 128 = 0
  – If both blocks 0 and 128 of the main memory are used a lot, it will be very slow
Associative Mapping Technique – No. 2
• Any block can go anywhere in the cache
• 4096 main memory blocks -> 2^12 = 4096 tags -> 12-bit tag
• The main memory address is divided into a 12-bit Tag and a 4-bit Word field
[Figure: any of the 4096 main memory blocks (Block 0 … Block 4095) may be placed in any of the 128 cache blocks (Block 0 … Block 127)]
Associative Mapping Technique
(continued)
• Advantage
  – Any empty block in the cache can be used, flexible
• Disadvantage
  – Must check all tags to detect a hit, expensive (a parallel search is used to speed up the process; a sequential sketch follows below)
• What is the next technique?
  – Something between direct mapping and associative mapping
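
As a rough illustration of why checking every tag is expensive, here is a sequential sketch of a fully associative lookup; the line structure and function name are illustrative, not from the slides, and real hardware compares all tags in parallel:

#include <stdbool.h>
#include <stdint.h>

#define NUM_BLOCKS 128            /* cache size from the slides */

struct line {                     /* hypothetical cache line layout */
    bool     valid;
    uint16_t tag;                 /* 12-bit tag (block number)      */
};

/* Fully associative lookup: every line may hold any block, so every
 * tag has to be compared against the address tag. */
int associative_lookup(const struct line cache[NUM_BLOCKS], uint16_t addr)
{
    uint16_t tag = addr >> 4;     /* Tag(12) | Word(4) address split */
    for (int i = 0; i < NUM_BLOCKS; i++) {
        if (cache[i].valid && cache[i].tag == tag)
            return i;             /* hit: block is in line i         */
    }
    return -1;                    /* miss                            */
}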
Set Associative Mapping Technique – No. 3

• A compromise between direct mapping and associative mapping
• A block in main memory maps to a specific set of blocks in the cache – as in direct mapping
• It can map to any block within that set – as in associative mapping
• E.g. use 6 bits for the tag -> 2^6 = 64 tags
  and 6 bits for the set -> 2^6 = 64 sets
Set Associative Mapping Technique
(continued)
• Memory address: Tag (6 bits), Set (6 bits), Word (4 bits)
• The blocks in the cache are divided into 64 sets and there are two blocks in each set (Set 0 = blocks 0–1, Set 1 = blocks 2–3, …, Set 63 = blocks 126–127)
• How are the blocks in the main memory mapped into the cache?
• Main memory blocks 0, 64, 128, …, 4032 map to set 0 and can occupy either of the two positions
Set Associative Mapping Technique
(continued)
• A set could have one block -> direct mapping; 128 blocks per set -> associative mapping
• k blocks per set is referred to as k-way set-associative mapping
[Figure: main memory blocks grouped by tag – blocks 0–63 carry tag 0, blocks 64–127 carry tag 1, …, blocks 4032–4095 carry tag 63]
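
To make the 2-way organization concrete, here is a minimal lookup sketch for the Tag(6) | Set(6) | Word(4) address format above; the structure and function name are illustrative, not from the slides:

#include <stdbool.h>
#include <stdint.h>

#define NUM_SETS 64               /* 128 cache blocks / 2 blocks per set */
#define WAYS      2               /* 2-way set associative               */

struct line {                     /* hypothetical cache line layout      */
    bool    valid;
    uint8_t tag;                  /* 6-bit tag                           */
};

/* Set-associative lookup: the set field selects one of the 64 sets and
 * only the two lines of that set need their tags compared. */
int set_associative_lookup(const struct line cache[NUM_SETS][WAYS],
                           uint16_t addr)
{
    unsigned set = (addr >> 4) & 0x3F;    /* 6-bit set index */
    unsigned tag = (addr >> 10) & 0x3F;   /* 6-bit tag       */
    for (int way = 0; way < WAYS; way++) {
        if (cache[set][way].valid && cache[set][way].tag == tag)
            return way;                   /* hit in this way */
    }
    return -1;                            /* miss            */
}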
Cache Memory Details
• Block size
  – Depends on how memory is addressed (byte, word, or long word) and accessed (word at a time)
  – 8–16 is quite reasonable
    • 68040 – 16 bytes per block
    • Pentium IV – 64 bytes per block
  – Always work with one block at a time
  – How many blocks in the cache?
    • Number of words in the cache divided by the number of words per block – e.g. 2K words (2^11) with 16-word (2^4) blocks: 2^11 / 2^4 = 2^7 = 128 blocks
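
The block-count arithmetic in the last bullet can be expressed directly with powers of two; a tiny illustrative sketch:

#include <stdio.h>

/* Number of cache blocks = (words in cache) / (words per block).
 * For the example above: 2^11 words, 2^4 words per block -> 2^7 blocks. */
int main(void) {
    unsigned cache_words     = 1u << 11;   /* 2K words   */
    unsigned words_per_block = 1u << 4;    /* 16 words   */
    unsigned num_blocks      = cache_words / words_per_block;
    printf("%u blocks\n", num_blocks);     /* prints 128 */
    return 0;
}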
