Cache Memory
Level 1 or Register –
Registers hold the data that the CPU is currently
processing and can access immediately. Commonly
used registers include the accumulator, the program
counter, and address registers.
Level 2 or Cache memory –
It is a fast memory with a short access time, where
data is temporarily stored for faster access.
Level 3 or Main Memory –
It is the memory on which the computer currently
works. It is small in size compared to secondary
memory, and once power is off the data no longer
stays in this memory.
Level 4 or Secondary Memory –
It is external memory, which is not as fast as main
memory, but data stays in it permanently.
Cache Performance:
When the processor needs to read or write a location in
main memory, it first checks for a corresponding entry in
the cache.
If the processor finds that the memory location is in
the cache, a cache hit has occurred and the data is
read from the cache.
If the processor does not find the memory location
in the cache, a cache miss has occurred. For a cache
miss, the cache allocates a new entry and copies in
data from main memory, then the request is fulfilled
from the contents of the cache.
The performance of cache memory is frequently
measured in terms of a quantity called Hit ratio.
Hit ratio = hit / (hit + miss) = no. of hits/total accesses
Cache performance can be improved by using a larger
cache block size, using higher associativity, reducing
the miss rate, reducing the miss penalty, and reducing
the time to hit in the cache.
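The hit-ratio formula above can be sketched with a small simulation. This is a minimal illustration under assumed parameters (a tiny direct-mapped cache of 4 lines); the access sequence and line count are invented for the example:

```python
NUM_LINES = 4  # m, number of cache lines (assumed example value)

def hit_ratio(block_accesses, num_lines=NUM_LINES):
    """Count hits and misses for a sequence of main-memory block numbers."""
    cache = {}          # line number -> block number currently stored there
    hits = misses = 0
    for block in block_accesses:
        line = block % num_lines        # direct mapping: i = j mod m
        if cache.get(line) == block:
            hits += 1                   # cache hit: block already in its line
        else:
            misses += 1                 # cache miss: load block into the line
            cache[line] = block
    return hits / (hits + misses)       # Hit ratio = hits / total accesses

print(hit_ratio([0, 1, 0, 1, 4, 0]))
```

Note how blocks 0 and 4 map to the same line, so they evict each other: a larger cache or higher associativity would remove those conflict misses.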
Cache Mapping:
There are three different types of mapping used for the
purpose of cache memory which are as follows: Direct
mapping, Associative mapping, and Set-Associative
mapping. These are explained below.
1. Direct Mapping –
The simplest technique, known as direct mapping,
maps each block of main memory into only one
possible cache line. In direct mapping, each memory
block is assigned to a specific line in the cache. If a
line is already occupied by a memory block when a
new block needs to be loaded, the old block is
discarded. The address is split into two parts: an
index field and a tag field. The cache stores the tag
field, while the index field selects the cache line.
Direct mapping's performance is directly
proportional to the hit ratio.
i = j modulo m
where
i = cache line number
j = main memory block number
m = number of lines in the cache
For purposes of cache access, each main memory
address can be viewed as consisting of three fields. The
least significant w bits identify a unique word or byte
within a block of main memory. In most contemporary
machines, the address is at the byte level. The remaining
s bits specify one of the 2^s blocks of main memory. The
cache logic interprets these s bits as a tag of s - r bits
(most significant portion) and a line field of r bits. This
latter field identifies one of the m = 2^r lines of the cache.
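The three-field split described above can be sketched with bit operations. The field widths below (w, r, s) are assumed example values, not fixed by the text:

```python
W = 2   # word bits: 2^2 = 4 bytes per block (assumed)
R = 3   # line bits: m = 2^3 = 8 cache lines (assumed)
S = 10  # block bits: 2^10 main-memory blocks, so the tag is s - r = 7 bits

def split_address(addr):
    """Split an (s + w)-bit byte address into (tag, line, word) fields."""
    word = addr & ((1 << W) - 1)   # least significant w bits: byte in block
    block = addr >> W              # j, the main-memory block number (s bits)
    line = block & ((1 << R) - 1)  # i = j mod 2^r: the cache line
    tag = block >> R               # most significant s - r bits
    return tag, line, word

# Underscores mark the tag / line / word fields of the example address.
print(split_address(0b0000101_011_10))
```

On a lookup, the cache compares the stored tag of line `line` with `tag`; a match is a hit, and `word` then selects the byte within the block.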
2. Associative Mapping –
In this type of mapping, the associative memory is
used to store content and addresses of the memory
word. Any block can go into any line of the cache.
This means that the word id bits are used to identify
which word in the block is needed, but the tag
becomes all of the remaining bits. This enables the
placement of any word at any place in the cache
memory. It is considered to be the fastest and the
most flexible mapping form.
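A fully associative lookup can be sketched as follows. This is a minimal illustration with assumed parameters (4 lines, FIFO replacement); real hardware compares all tags in parallel and may use other replacement policies:

```python
NUM_LINES = 4  # assumed cache size in lines

def access(cache, block):
    """Return True on a hit; on a miss, insert the block (FIFO eviction)."""
    if block in cache:           # the whole block number acts as the tag,
        return True              # compared against every line
    if len(cache) >= NUM_LINES:  # cache full: evict the oldest entry
        cache.pop(0)
    cache.append(block)          # any block can go into any line
    return False

cache = []
print([access(cache, b) for b in [7, 3, 7, 9]])
```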
3. Set-associative Mapping –
This form of mapping is an enhanced form of direct
mapping in which the drawbacks of direct mapping
are removed. Set-associative mapping addresses the
problem of possible thrashing in the direct mapping
method: instead of having exactly one line that a
block can map to in the cache, a few lines are
grouped together to create a set, and a block in
memory can then map to any one of the lines of a
specific set. Set-associative mapping allows each
index address in the cache to hold two or more
words from main memory. Set-associative cache
mapping combines the best of the direct and
associative cache mapping techniques.
In this case, the cache consists of a number of sets, each
of which consists of a number of lines. The relationships
are
m = v * k
i = j mod v
where
i = cache set number
j = main memory block number
v = number of sets
m = number of lines in the cache
k = number of lines in each set
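The relations above can be sketched with assumed example values of v and k (a 2-way set-associative cache with 8 lines):

```python
V = 4  # v, number of sets (assumed)
K = 2  # k, lines per set: 2-way set associative, so m = v * k = 8 lines

def cache_set(block):
    """i = j mod v: the set a main-memory block maps to."""
    return block % V

# Blocks 1 and 5 map to the same set, but with k = 2 lines per set
# they can both reside in the cache at once, unlike in direct mapping.
print(cache_set(1), cache_set(5))
```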
Application of Cache Memory –
1. Usually, the cache memory can store a
reasonable number of blocks at any given time,
but this number is small compared to the total
number of blocks in the main memory.
2. The correspondence between the main memory
blocks and those in the cache is specified by a
mapping function.
Types of Cache –
Primary Cache –
A primary cache is always located on the
processor chip. This cache is small and its access
time is comparable to that of processor
registers.
Secondary Cache –
Secondary cache is placed between the primary
cache and the rest of the memory. It is referred
to as the level 2 (L2) cache. Often, the Level 2
cache is also housed on the processor chip.
Locality of reference –
Since the size of cache memory is small compared to
main memory, which part of main memory should be
given priority and loaded into the cache is decided
based on locality of reference.
Types of Locality of reference