
DECO - Module 4.3 - Cache


MODULE IV

CACHE MEMORY
Module IV : CACHE MEMORY

 Cache memory is a very high-speed memory which increases the speed of processing.
 Cache memory is a small, high-speed RAM buffer located between the CPU and main memory.
 Cache memory holds copies of the instructions (instruction cache) or data (operand or data cache) currently being used by the CPU.
CACHE MEMORY

 The processor is much faster than the main memory.
• As a result, the processor has to spend much of its time waiting while instructions and data are being fetched from the main memory.
 This is a major obstacle to achieving good performance.
• The speed of the main memory cannot be increased beyond a certain point.
 Cache memory is an architectural arrangement which makes the main memory appear faster to the processor than it really is.

Cache memory is based on the property of computer programs known as "locality of reference".
Locality of reference

 Cache memory is based on the concept of locality of reference.
 If the active segments of a program are placed in a fast cache memory, the execution time can be reduced.
 Whenever an instruction or data item is needed for the first time, it should be brought into the cache, where it will hopefully be used again repeatedly.
 The term "block" refers to a set of contiguous address locations of some size.

Cache operation – overview

 The CPU requests the contents of a memory location.
 The cache is checked for this data.
 If present, the data is delivered from the cache (fast).
 If not present, the required block is read from main memory into the cache.
 The data is then delivered from the cache to the CPU.
 The cache includes tags to identify which block of main memory is in each cache slot. (A sketch of this flow in code follows.)
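Below is a minimal Python sketch of this read flow. The dict-based cache, the block size, and the toy main memory contents are illustrative assumptions, not details from the slides:

# Minimal sketch of the cache read flow described above.
# Block size, names, and the toy memory contents are assumed.

BLOCK_SIZE = 8                     # words per block (assumed)
main_memory = list(range(1024))    # toy main memory of 1024 words
cache = {}                         # block number -> list of words

def read_word(address):
    """Return the word at `address`, filling the cache on a miss."""
    block_no = address // BLOCK_SIZE
    if block_no not in cache:      # miss: copy the whole block from main memory
        start = block_no * BLOCK_SIZE
        cache[block_no] = main_memory[start:start + BLOCK_SIZE]
    return cache[block_no][address % BLOCK_SIZE]   # deliver from the cache

print(read_word(42))   # miss: block 5 is loaded into the cache
print(read_word(43))   # hit: same block, served from the cache (locality)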

How is data copied into the cache?

 Main memory and the cache are both divided into blocks.
 When a memory address is generated, the cache is searched first to see if the required word exists there.
 When the requested word is not found in the cache, the entire main memory block in which the word resides is loaded into the cache.
 This is possible because of the principle of locality: if a word was just referenced, there is a good chance that words in the same general vicinity will soon be referenced as well.
The performance of cache memory is measured by the hit ratio.

Hit ratio: the number of hits divided by the total number of CPU accesses to memory (i.e., hits plus misses).

Hit Ratio = Total Number of Hits / (Total Number of Hits + Total Number of Misses)
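As a quick illustration (numbers assumed): if 950 of 1000 CPU memory accesses are hits, the hit ratio is 950 / (950 + 50) = 0.95. In code:

def hit_ratio(hits, misses):
    """Hit ratio = hits / (hits + misses)."""
    return hits / (hits + misses)

print(hit_ratio(950, 50))   # 0.95, i.e. 95% of accesses are served by the cache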
 Since there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines.
 There is also a need for a way to determine which main memory block currently occupies a cache line.
 The choice of mapping function dictates how the cache is organized.
 Three techniques can be used:
1. Associative mapping
2. Direct mapping (single-word or block)
3. Set-associative mapping
Example of cache memory

• We consider the following memory organization for the three mapping procedures.
• Main memory can store 32K words of 12 bits each.
• Cache can store 512 of these words at any time. (The address widths this implies are checked below.)
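A sketch of the check (the 6-bit tag anticipates the direct-mapping split discussed later):

import math

MAIN_MEMORY_WORDS = 32 * 1024                      # 32K words of 12 bits each
CACHE_WORDS = 512

address_bits = int(math.log2(MAIN_MEMORY_WORDS))   # 15-bit CPU address
index_bits = int(math.log2(CACHE_WORDS))           # 9-bit cache index
tag_bits = address_bits - index_bits               # 6-bit tag under direct mapping

print(address_bits, index_bits, tag_bits)          # 15 9 6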
Associative Mapping

 Associative mapping is a fast and flexible technique, as it uses an associative memory.
 This memory is accessed by its contents.
 This permits any location in the cache to store any word from main memory.

The steps of associative mapping are:
1. The 15-bit CPU address is placed in the argument register and the associative memory is searched for a matching address.

A: If the address is found, the 12-bit data word is read and sent to the CPU.
B: If no match occurs, the main memory is accessed for the word.

2. If all the blocks are already in use, it is usually best to replace the least recently used one, on the assumption that a block which hasn't been used in a while won't be needed again anytime soon.

However, a fully associative cache is expensive to implement:
• Because there is no index field in the address, the entire address must be stored as the tag, increasing the total cache size.
• Data could be anywhere in the cache, so we must check the tag of every cache block.
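The following Python sketch mimics a fully associative lookup with least-recently-used replacement. The OrderedDict standing in for the associative memory, the four-slot capacity, and the toy memory contents are assumptions for illustration:

from collections import OrderedDict

CACHE_SLOTS = 4                      # illustrative capacity
cache = OrderedDict()                # address -> data; insertion order tracks recency
main_memory = {0o02000: 0o5670}      # toy memory (octal, as in the slides)

def access(address):
    if address in cache:             # associative search: match on the full address
        cache.move_to_end(address)   # mark as most recently used
        return cache[address]
    data = main_memory.get(address, 0)   # miss: fetch the word from main memory
    if len(cache) >= CACHE_SLOTS:
        cache.popitem(last=False)    # evict the least recently used entry
    cache[address] = data
    return data

print(oct(access(0o02000)))          # miss: word fetched and cached, 0o5670
print(oct(access(0o02000)))          # hit: served by the associative search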
[Figure: Associative-mapped cache contents, shown in octal digits]
Associative mapping advantages & disadvantages

 The main advantage of the associative-mapping technique is the efficient use of the cache. This stems from the fact that there is no restriction on where to place incoming main memory blocks: any unoccupied cache block can potentially receive an incoming block.
 The main disadvantage of the technique is the hardware overhead required to perform the associative search.
Direct Mapping

 The direct mapping technique is simple and inexpensive to implement.

General case:
Cache memory: 2^k words
Main memory: 2^n words

The n-bit memory address is divided into two fields:
1. k bits for the index field
2. n - k bits for the tag field

For the memory organization considered here, the 15-bit CPU address is divided into two fields: the 9 LSBs constitute the index field and the remaining 6 bits form the tag field.

The direct-mapping cache organization uses the n-bit address to access the main memory and the k-bit index to access the cache. (A sketch of the field split follows.)
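A sketch of this field split for the 15-bit address (masks derived from the 9-bit index; the function name is an assumption):

INDEX_BITS = 9                                  # 512-word cache -> 9-bit index

def split_address(address):
    """Split a 15-bit address into (tag, index) for direct mapping."""
    index = address & ((1 << INDEX_BITS) - 1)   # 9 least significant bits
    tag = address >> INDEX_BITS                 # remaining 6 bits
    return tag, index

tag, index = split_address(0o02000)             # octal address used in the example below
print(oct(tag), oct(index))                     # 0o2 0o0 -> tag 02, index 000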
Direct Mapping
 The tag field of the CPU address is compared with the tag stored in the word read from the cache.
 If the tag bits of the CPU address match the tag bits in the cache, there is a hit and the required data word is read from the cache.
 If there is no match, there is a miss: the required data word is read from main memory and then transferred to cache memory along with the new tag.
Direct mapping Cache Organization
(block size of 1 word)

[Figure: Direct-mapped cache organization showing main memory contents (address, data) and cache contents (index, tag, data); the word at address zero is presently stored in the cache.]


 Suppose the CPU wants to access the word at address 02000.
 The index address is 000, so it is used to access the cache.
 The two tags are then compared.
 The cache tag is 00 but the address tag is 02, which does not produce a match.
 Therefore, main memory is accessed and the data word 5670 is transferred to the CPU.
 The cache word at index address 000 is then replaced with a tag of 02 and data of 5670. (This sequence is traced in the sketch below.)
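A tiny direct-mapped simulation of this sequence. The initial cache contents (tag 00 with an assumed data word standing in for the slide's figure) and the single main memory entry are placeholders:

INDEX_BITS = 9
cache = {0o000: (0o00, 0o1220)}       # index -> (tag, data); initial contents assumed
main_memory = {0o02000: 0o5670}

def read(address):
    tag = address >> INDEX_BITS
    index = address & ((1 << INDEX_BITS) - 1)
    if index in cache and cache[index][0] == tag:
        return cache[index][1]        # hit: tags match
    data = main_memory[address]       # miss: access main memory
    cache[index] = (tag, data)        # replace the cache word with the new tag
    return data

print(oct(read(0o02000)))             # miss: 0o5670 sent to the CPU, cache updated
print(oct(read(0o02000)))             # hit: tag 02 now matches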
Direct mapping Cache Organization
(block size of 8 words)

The index field is now divided into two parts: a 6-bit BLOCK FIELD and a 3-bit WORD FIELD (64 blocks × 8 words = 512 words).

 The tag field stored within the cache is common to all eight words of the same block.
 Every time a miss occurs, an entire block of eight words must be transferred from main memory to cache memory.
 This takes extra time, but the hit ratio will most likely improve with a larger block size because of the sequential nature of computer programs. (The index split is sketched below.)
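A sketch of the split of the 9-bit index into block and word fields (widths from the slide; the function name is an assumption):

WORD_BITS = 3                         # 8 words per block -> 3-bit word field

def split_index(index):
    """Split a 9-bit index into (block, word) fields."""
    return index >> WORD_BITS, index & ((1 << WORD_BITS) - 1)

print(split_index(0o777))             # index 511 -> (63, 7): block 63, word 7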
Direct Mapped Cache Advantage and Disadvantage

Advantage
 Simple
 Fast

Disadvantage
 Its main disadvantage is that the hit ratio may drop if two or more words with the same index but different tags are accessed repeatedly (they cannot reside in cache memory at the same time).
Problem 1

1. A main memory contains 8 words while the cache has only 4 words. Using direct address mapping, identify the fields of the main memory address used for the cache mapping.

Solution:
Total memory words = 8 = 2^3, so 3 bits are required for the main memory address.
Total cache words = 4 = 2^2, so 2 bits are required for the cache index.

Main memory address = 3 bits: Tag = 1 bit | Index = 2 bits
Thank You
