Cache memory, Virtual memory and Auxiliary memory notes

Cache memory is a fast storage type that serves as a buffer between RAM and the CPU, holding frequently accessed data to reduce access time. It uses mapping functions and replacement algorithms to manage data, with mechanisms for handling cache hits and misses. Virtual memory complements cache memory by allowing larger programs to run by swapping data between main memory and secondary storage as needed.
Cache Memory

Cache memory is an extremely fast memory type that acts as a buffer between RAM and the CPU. It holds frequently requested data and instructions so that they are immediately available to the CPU when needed. Cache memory is used to reduce the average time to access data from the main memory.
Cache memories

[Diagram: Processor <-> Cache <-> Main memory]

• When the processor issues a Read request, a block of words is transferred from the main memory to the cache, one word at a time.
• Subsequent references to the data in this block of words are found in the cache.
• At any given time, only some blocks in the main memory are held in the cache. Which blocks of the main memory are in the cache is determined by a "mapping function".
• When the cache is full and a block of words needs to be transferred from the main memory, some block of words in the cache must be replaced. This is determined by a "replacement algorithm".
Cache hit
• The processor issues Read and Write requests in the same manner.

• If the data is in the cache it is called a Read or Write hit.

• Read hit:
▪ The data is obtained from the cache.

• Write hit:
▪ The cache has a replica of the contents of the main memory.
▪ The contents of the cache and the main memory may be updated simultaneously. This is the write-through protocol.
▪ Alternatively, update only the contents of the cache, and mark the block as updated by setting a bit known as the dirty bit or modified bit. The contents of the main memory are updated when this block is replaced. This is the write-back or copy-back protocol.
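The two write-hit policies above can be sketched in a few lines of Python. This is a minimal illustration, not an actual cache implementation: plain dictionaries stand in for the cache and main memory, a set stands in for the per-block dirty bits, and all function names are made up for this sketch.

```python
def write_through(cache, memory, addr, value):
    """Write hit: update the cache and main memory simultaneously."""
    cache[addr] = value
    memory[addr] = value

def write_back(cache, dirty, addr, value):
    """Write hit: update only the cache and set the dirty (modified) bit."""
    cache[addr] = value
    dirty.add(addr)

def evict(cache, dirty, memory, addr):
    """On replacement, a dirty block is copied back to main memory."""
    if addr in dirty:
        memory[addr] = cache[addr]
        dirty.discard(addr)
    del cache[addr]
```

Note how, under write-back, main memory holds stale data until the block is evicted; that is exactly the case the dirty bit exists to track.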
Cache miss
• If the data is not present in the cache, then a Read miss or Write miss occurs.
• Read miss:
▪ Block of words containing this requested word is transferred from the memory.
▪ After the block is transferred, the desired word is forwarded to the processor.
▪ The desired word may also be forwarded to the processor as soon as it is
transferred without waiting for the entire block to be transferred. This is called
load-through or early-restart.

• Write miss:
▪ If the write-through protocol is used, the contents of the main memory are updated directly.
▪ If the write-back protocol is used, the block containing the addressed word is first brought into the cache. The desired word is then overwritten with the new information.
Mapping functions
 Mapping functions determine how memory
blocks are placed in the cache.
 Three mapping functions:
▪ Direct mapping
▪ Associative mapping
▪ Set-associative mapping.
Direct mapping

[Diagram: main-memory blocks 0-4095 mapping onto cache blocks 0-127]

• If a block is in the cache, it must be in one specific place.
• Block j of the main memory maps to block j modulo 128 of the cache: block 0 maps to 0, block 129 maps to 1.
• More than one memory block is mapped onto the same position in the cache.
• May lead to contention for cache blocks even if the cache is not full.
• Contention is resolved by allowing the new block to replace the old block, leading to a trivial replacement algorithm.
• The memory address is divided into three fields (Tag: 5 bits, Block: 7 bits, Word: 4 bits):
  - The low-order 4 bits determine one of the 16 words in a block.
  - When a new block is brought into the cache, the next 7 bits determine which cache block this new block is placed in.
  - The high-order 5 bits determine which of the 32 possible memory blocks that map to this cache position is currently present. These are the tag bits.
• Simple to implement, but not very flexible.
Direct Mapping
Cache Line Table

Cache line    Main memory blocks held
0             0, m, 2m, 3m, ..., 2^s - m
1             1, m+1, 2m+1, ..., 2^s - m + 1
...           ...
m-1           m-1, 2m-1, 3m-1, ..., 2^s - 1

(m = number of cache lines, 2^s = number of main-memory blocks)
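The address split can be sketched directly from the slide's parameters: 16-word blocks (4 word bits), 128 cache blocks (7 block bits), and 5 tag bits, giving 4096 main-memory blocks and 16-bit addresses. The function name is illustrative.

```python
def split_address(addr):
    """Split a 16-bit main-memory address into (tag, cache_block, word)."""
    word = addr & 0xF            # low-order 4 bits: one of 16 words in a block
    block = (addr >> 4) & 0x7F   # next 7 bits: cache block position
    tag = addr >> 11             # high-order 5 bits: tag
    return tag, block, word
```

Because the block field is just the memory block number modulo 128, the "j modulo 128" rule and the bit-field view are two descriptions of the same mapping: for example, `split_address(129 << 4)` yields `(1, 1, 0)`, placing memory block 129 in cache block 1 with tag 1.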
Associative mapping

[Diagram: any main-memory block 0-4095 can be placed in any of the 128 cache blocks]

• A main memory block can be placed into any cache position.
• The memory address is divided into two fields (Tag: 12 bits, Word: 4 bits):
  - The low-order 4 bits identify the word within a block.
  - The high-order 12 bits, the tag bits, identify a memory block when it is resident in the cache.
• Flexible, and uses cache space efficiently.
• Replacement algorithms can be used to replace an existing block in the cache when the cache is full.
• Cost is higher than a direct-mapped cache because of the need to search all 128 tag patterns to determine whether a given block is in the cache.
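The cost remark can be seen in a small sketch of the lookup: with no block field in the address, every cache entry's tag must be compared (in hardware this is done in parallel; the sequential loop here is only illustrative, and the function name is made up).

```python
def assoc_lookup(cache_tags, addr):
    """Fully associative lookup.
    cache_tags: list of (valid, tag) pairs, one per cache block.
    Returns the index of the hit, or None on a miss."""
    tag = addr >> 4                    # high-order 12 bits (4-bit word offset)
    for i, (valid, t) in enumerate(cache_tags):
        if valid and t == tag:
            return i                   # hit: block resides at position i
    return None                        # miss: every entry was examined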
Set-Associative mapping

[Diagram: cache divided into 64 sets of two blocks each; main-memory blocks 0-4095]

• Blocks of the cache are grouped into sets.
• The mapping function allows a block of the main memory to reside in any block of a specific set.
• Divide the cache into 64 sets, with two blocks per set. Memory blocks 0, 64, 128, etc. map to set 0, and they can occupy either of the two positions in that set.
• The memory address is divided into three fields (Tag: 6 bits, Set: 6 bits, Word: 4 bits):
  - The low-order 4 bits identify the word within a block.
  - The 6-bit set field determines the set number.
  - The high-order 6-bit tag field is compared to the tag fields of the two blocks in the set.
• Set-associative mapping is a combination of direct and associative mapping.
• The number of blocks per set is a design parameter.
  - One extreme is to have all the blocks in one set, requiring no set bits (fully associative mapping).
  - The other extreme, one block per set, is the same as direct mapping.
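Using the slide's parameters (64 sets, two blocks per set, 16-word blocks), the three address fields can be sketched as follows; the function name is illustrative.

```python
def set_assoc_fields(addr):
    """Split a 16-bit address into (tag, set, word) for a 64-set,
    2-way set-associative cache with 16-word blocks."""
    word = addr & 0xF             # low-order 4 bits: word within the block
    set_no = (addr >> 4) & 0x3F   # next 6 bits: one of 64 sets
    tag = addr >> 10              # high 6 bits: compared with both tags in the set
    return tag, set_no, word
```

For example, memory blocks 0, 64, and 128 all produce set number 0, differing only in their tags, so any two of them can be cache-resident at once; under direct mapping they would have competed for a single block.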
Other Performance Enhancements
Write buffer
 Write-through:
• Each write operation involves writing to the main memory.
• If the processor has to wait for the write operation to be complete, it
slows down the processor.
• Processor does not depend on the results of the write operation.
• Write buffer can be included for temporary storage of write requests.
• Processor places each write request into the buffer and continues
execution.
• If a subsequent Read request references data that is still in the write buffer, the data is read from the write buffer.
 Write-back:
• Block is written back to the main memory when it is replaced.
• If the processor waits for this write to complete, before reading the new
block, it is slowed down.
• Fast write buffer can hold the block to be written, and the new
block can be read first.
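A write buffer under write-through can be sketched as a small FIFO of pending writes; this is an illustrative model only (the class and method names are invented), but it shows the two behaviors the bullets describe: the processor continues immediately after a write, and later reads must check the buffer before memory.

```python
from collections import OrderedDict

class WriteBuffer:
    """FIFO of write requests awaiting transfer to main memory."""

    def __init__(self):
        self.pending = OrderedDict()   # addr -> value not yet written back

    def write(self, addr, value):
        self.pending[addr] = value     # processor continues without waiting

    def read(self, addr, memory):
        # A Read that references data still in the buffer is served from it.
        if addr in self.pending:
            return self.pending[addr]
        return memory[addr]

    def drain(self, memory):
        # The memory controller retires buffered writes in FIFO order.
        while self.pending:
            addr, value = self.pending.popitem(last=False)
            memory[addr] = value
```

The buffer check on reads is what keeps this optimization correct: without it, a read issued between the buffered write and the drain would return stale data from main memory.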
Virtual memories
 Recall that an important challenge in the design
of a computer system is to provide a large, fast
memory system at an affordable cost.
 Architectural solutions to increase the effective
speed and size of the memory system.
 Cache memories were developed to increase the
effective speed of the memory system.
 Virtual memory is an architectural solution to
increase the effective size of the memory system.

Virtual memories (contd..)
 Recall that the addressable memory space
depends on the number of address bits in a
computer.
 Physical main memory in a computer is generally
not as large as the entire possible addressable
space.
 Large programs that cannot fit completely into
the main memory have their parts stored on
secondary storage devices such as magnetic
disks.
▪ Pieces of programs must be transferred to the main memory from secondary
storage before they can be executed.

Virtual memories (contd..)
 When a new piece of a program is to be
transferred to the main memory, and the
main memory is full, then some other piece in
the main memory must be replaced.
▪ Recall this is very similar to what we studied in case of cache memories.
 Operating system automatically transfers data
between the main memory and secondary
storage.
▪ Application programmer need not be concerned with this transfer.
▪ Also, application programmer does not need to be aware of the limitations
imposed by the available physical memory.

Virtual memories (contd..)
 Techniques that automatically move program and data
between main memory and secondary storage when they
are required for execution are called virtual-memory
techniques.
 Programs and processors reference an instruction or data
independent of the size of the main memory.
 Processor issues binary addresses for instructions and
data.
▪ These binary addresses are called logical or virtual addresses.
 Virtual addresses are translated into physical addresses by
a combination of hardware and software subsystems.
▪ If virtual address refers to a part of the program that is currently in the main memory, it is accessed
immediately.
▪ If the address refers to a part of the program that is not currently in the main memory, it is first
transferred to the main memory before it can be used.

Virtual memory organization

[Diagram: Processor -> (virtual address) -> MMU -> (physical address) -> Cache -> Main memory, with DMA transfer to/from disk storage]

• The memory management unit (MMU) translates virtual addresses into physical addresses.
• If the desired data or instructions are in the main memory, they are fetched as described previously.
• If the desired data or instructions are not in the main memory, they must be transferred from secondary storage to the main memory.
• The MMU causes the operating system to bring the data from secondary storage into the main memory.
Address translation
 Assume that program and data are composed of
fixed-length units called pages.
 A page consists of a block of words that occupy
contiguous locations in the main memory.
 Page is a basic unit of information that is
transferred between secondary storage and main
memory.
▪ Pages should not be too small, because the access time of a secondary storage device is much larger than that of the main memory.
▪ Pages should not be too large, else a large portion of the page may not be used,
and it will occupy valuable space in the main memory.

Address translation (contd..)
• Concepts of virtual memory are similar to the
concepts of cache memory.
• Cache memory:
– Introduced to bridge the speed gap between the processor and the main
memory.
– Implemented in hardware.

• Virtual memory:
– Introduced to bridge the speed gap between the main memory and secondary
storage.
– Implemented in part by software.

Address translation (contd..)
 Each virtual or logical address generated by a
processor is interpreted as a virtual page number
(high-order bits) plus an offset (low-order bits) that
specifies the location of a particular byte within that
page.
 Information about the main memory location of each
page is kept in the page table.
▪ Main memory address where the page is stored.
▪ Current status of the page.
 Area of the main memory that can hold a page is
called as page frame.
 Starting address of the page table is kept in a page
table base register.

Address translation (contd..)
• Virtual page number generated by the
processor is added to the contents of the page
table base register.
– This provides the address of the corresponding entry in the page table.

• The contents of this location in the page table give the starting address of the page if the page is currently in the main memory.
Address translation (contd..)
PTBR holds Virtual address from processor
Page table base register
the address of
the page table. Page table address Virtual page number Offset
Virtual address is
interpreted as page
+ number and offset.
PAGE TABLE
PTBR + virtual
page number provide
the entry of the page This entry has the starting location
in the page table. of the page.

Page table holds information


about each page. This includes
the starting address of the page
in the main memory. Control Page frame
bits in memory Page frame Offset

Physical address in main memory

21
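The translation steps above can be sketched in Python. The slides do not fix a page size, so a 4096-byte page (12 offset bits) is assumed here; the dictionary-based page table and all names are illustrative.

```python
PAGE_BITS = 12  # assumed 4096-byte pages

def translate(page_table, vaddr):
    """Translate a virtual address to a physical address via the page table."""
    vpn = vaddr >> PAGE_BITS                 # high-order bits: virtual page number
    offset = vaddr & ((1 << PAGE_BITS) - 1)  # low-order bits: byte within the page
    entry = page_table.get(vpn)              # PTBR + vpn selects the entry
    if entry is None or not entry["valid"]:
        raise LookupError("page fault")      # OS must bring the page in
    return (entry["frame"] << PAGE_BITS) | offset
```

The validity check models the control bit discussed next: an entry may exist yet be marked invalid, in which case the access traps to the operating system instead of producing a physical address.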
Address translation (contd..)
 Page table entry for a page also includes some
control bits which describe the status of the page
while it is in the main memory.
 One bit indicates the validity of the page.
▪ Indicates whether the page is actually loaded into the main memory.
▪ Allows the operating system to invalidate the page without actually removing it.
 One bit indicates whether the page has been
modified during its residency in the main
memory.
▪ This bit determines whether the page should be written back to the disk when it is
removed from the main memory.
▪ Similar to the dirty or modified bit in case of cache memory.

Address translation (contd..)
 Where should the page table be located?
 Recall that the page table is used by the MMU for
every read and write access to the memory.
▪ Ideal location for the page table is within the MMU.
 Page table is quite large.
 MMU is implemented as part of the processor chip.
 Impossible to include a complete page table on the
chip.
 Page table is kept in the main memory.
 A copy of a small portion of the page table can be
accommodated within the MMU.
▪ Portion consists of page table entries that correspond to the most recently accessed pages.

AUXILIARY MEMORIES

[Diagram: magnetic disk surface]

• Tracks: data is recorded in concentric circular bands.
• Sectors: each track is divided into pie-shaped wedges.
• Cluster: two or more sectors combined.
Large Units Of Measurement
(Memory, Storage)

• Note: powers of two are used because computer memory and storage are based on the basic unit (bit).
• Kilobyte (KB) – about a thousand bytes (1,024 = 2^10)
• Megabyte (MB) – about a million bytes (1,048,576 = 2^20)
