Coa Unit 4


 Memory refers to the electronic storage space used to hold the data and instructions that the processor can access quickly.
 There are several types of memory in a computer, each serving a different purpose.
 Memory is organized in layers, with the fastest, most expensive memory at the top and slower, larger, more affordable memory types below:
 Registers: Extremely fast, small storage locations within the CPU for actively processed data.
 Cache Memory: Small-sized, high-speed memory near the CPU, storing frequently used programs and data.
 Random Access Memory (RAM): Volatile, temporary memory for quick data access.
 Solid-State Drives (SSD): Non-volatile storage using flash memory for faster access than traditional Hard Disk Drives (HDDs).
 Hard Disk Drives (HDD): Magnetic storage on spinning disks for larger but slower access.
 Optical Drives, Tape Drives, and Cloud Storage: Lower-speed options for long-term archival storage.

 "The Purpose and Benefits of Organizing Computer Memory in a Hierarchical System": balancing cost, speed, capacity, and volatility, providing hierarchy flexibility, and optimizing performance.

 Semiconductor RAM refers to a class of computer memory technologies that utilize semiconductor materials for storing and accessing data.
 The semiconductor RAM family includes several types, with Dynamic RAM (DRAM) and Static RAM (SRAM) being two of the most common.

 DRAM is made up of capacitors and transistors.
 Each DRAM memory cell comprises a capacitor (for storing charge, representing a bit) and a transistor (controlling access to the stored information).
 DRAM is volatile, requiring constant power; data is lost if power is interrupted.
 Periodic refresh cycles are necessary to counteract charge leakage and maintain data integrity.
 DRAM provides high memory density, making it suitable for main memory (RAM) in computers.
 DRAM is relatively cost-effective compared to other memory types.
 Types of DRAM, each designed for specific applications and offering improvements in speed and efficiency:
 Synchronous DRAM (SDRAM)
 Double Data Rate (DDR) SDRAM
 Graphics DRAM (GDDR)

 SRAM utilizes a flip-flop circuit made of transistors.


 SRAM is volatile, resulting in data loss when power is turned
off.
 SRAM is faster than DRAM, providing quick access times,
ideal for cache memory.
 SRAM is more complex and generally more expensive to
manufacture than DRAM. It is often used in smaller
quantities, particularly for cache memory.
 2D Memory Organization:
 The memory is structured as a 2D matrix, with rows (word lines) and columns (bit lines).
 Each row in the matrix represents a word, which is a fixed-size unit of data.
 The decoder is a combinational circuit with n input lines and 2^n output lines.
 The n input lines take the address information from the Memory Address Register (MAR).
 The decoder processes the input address and activates one of its 2^n output lines.
 The output line is selected based on the specific address provided in the MAR.
 The selected output line from the decoder corresponds to a specific row in the matrix.
 This row is then accessed for reading or writing.
 Once the row is selected, the word of data in that row can be accessed through the data lines.
 For reading, the data is transferred from the selected row to the requesting entity (e.g., CPU or I/O controller); for writing, new data is written into the selected word in the row.
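As an illustration of this decoder-driven row selection, here is a minimal Python sketch (not part of the original notes); the 4-bit address and the 16 x 8-bit array are assumed example values.

```python
# Minimal sketch of 2D memory organization: an n-to-2^n decoder selects one
# row (word line) of a 2D array based on the address held in the MAR.
# The 16-word x 8-bit array and the 4 address lines are assumed values.

N = 4                                           # number of address lines (n)
memory = [[0] * 8 for _ in range(2 ** N)]       # 2^n rows, each row one 8-bit word

def decode(address):
    """Return a one-hot list of 2^n word lines; exactly one line is active."""
    lines = [0] * (2 ** N)
    lines[address] = 1
    return lines

def read_word(mar):
    """Read the word in the row selected by the decoder output."""
    word_lines = decode(mar)
    row = word_lines.index(1)                   # the single active word line
    return memory[row]

def write_word(mar, mdr):
    """Write the MDR contents into the row selected by the decoder output."""
    word_lines = decode(mar)
    row = word_lines.index(1)
    memory[row] = list(mdr)

write_word(0b1010, [1, 0, 1, 1, 0, 0, 1, 0])
print(read_word(0b1010))                        # -> [1, 0, 1, 1, 0, 0, 1, 0]
```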
 2.5D Memory Organization:
 In 2.5D memory organization, there are two decoders: a column decoder and a row decoder.
 The column decoder selects the column, while the row decoder selects the row, based on the address from the Memory Address Register (MAR).
 The selected cell is determined through the bit line, enabling the read or write operation at that memory location using the data lines.

 Read and Write Operations:
 In read mode, the word addressed by the Memory Address Register (MAR) becomes available on the data lines.
 This enables the retrieval of the desired data from the selected memory cell.
 In write mode, data from the Memory Data Register (MDR) is sent to the memory cell addressed by the MAR.
 The select line helps identify the specific cell, facilitating the write operation with the data from the MDR.
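A minimal sketch of the 2.5D address split described above; the 6-bit address divided into 4 row bits and 2 column bits is an assumed example, not a value from the notes.

```python
# Minimal sketch of 2.5D organization: the address from the MAR is split into
# a row part and a column part, each fed to its own decoder.

ROW_BITS, COL_BITS = 4, 2
cells = [[0] * (2 ** COL_BITS) for _ in range(2 ** ROW_BITS)]

def split_address(mar):
    row = mar >> COL_BITS                   # upper bits drive the row decoder
    col = mar & ((1 << COL_BITS) - 1)       # lower bits drive the column decoder
    return row, col

def read(mar):
    row, col = split_address(mar)
    return cells[row][col]                  # selected cell placed on the data lines

def write(mar, mdr):
    row, col = split_address(mar)
    cells[row][col] = mdr                   # data from the MDR written to the cell

write(0b101101, 1)
print(read(0b101101))                       # -> 1
```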

 Comparison of 2D and 2.5D Organization:
 In 2D organization the hardware is fixed, whereas in 2.5D organization the hardware can change.
 2D organization requires more gates, while 2.5D requires fewer.
 2D organization is more complex than 2.5D organization.
 Error correction is not possible in 2D organization, but it can be done easily in 2.5D.
 2D organization is more difficult to fabricate than 2.5D organization.
 ROM stands for Read-Only Memory. It is a type of non-volatile memory that retains its contents even when the power is turned off.
 PROM (Programmable ROM): Allows one-time programmability, where the data is written (programmed) into the memory by the user. Once programmed, the data becomes permanent.
 EPROM (Erasable Programmable ROM): Users can erase and reprogram the memory multiple times; however, erasure typically involves exposure to ultraviolet light.
 EEPROM (Electrically Erasable Programmable ROM): Similar to EPROM but can be erased and reprogrammed electrically, without the need for ultraviolet light, allowing for more convenient updates.
 Mask ROM: In some cases, the ROM is created with fixed data during the manufacturing process and is referred to as mask ROM; the data is "masked" onto the ROM chip during fabrication.

 Cache memory is high-speed volatile computer memory. It provides rapid data access to the processor.
 Stores frequently used computer programs, applications, and data.
 Crucial for improving overall computer system speed and performance.
 Keeps frequently accessed data and instructions close to the CPU.
 Minimizes the time to access data and instructions compared to main memory (RAM).

 There are typically three levels of cache in a modern computer system:
 L1 Cache: Smallest and fastest, located on the CPU chip, with separate instruction and data caches for independent handling.
 L2 Cache: Slightly larger and slower than L1, often shared among different cores in a multi-core processor, providing additional storage for frequently accessed data.
 L3 Cache: Larger and slower than L2, shared among all cores in a multi-core processor, providing additional storage for frequently used data that may not fit in the L1 and L2 caches.

 Terminologies in Cache Memory:
 Cache Lines: Fixed-size blocks storing small amounts of data.
 Cache Size: Total amount of data the cache can hold, measured in KB or MB.
 Hit and Miss: A hit occurs when the requested data is found in the cache; a miss occurs when it is not and must be retrieved from a higher-level cache or main memory.
 Tag, Index, and Offset: The division of the physical address used for address mapping in the cache.

 How the Performance of Cache Memory Is Measured:
 The performance of cache memory is frequently measured in terms of a quantity called the hit ratio.
 When the CPU refers to memory and finds the word in the cache, it is said to produce a hit.
 If the word is not in the cache, it is found in main memory and counts as a miss.
 The hit ratio is the number of hits divided by the total number of memory references (hits plus misses).
 This is the standard way to measure the performance of cache memory.
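As a quick worked illustration of the hit-ratio formula (the access counts are made-up example numbers):

```python
# Hit ratio = hits / (hits + misses). The access counts below are made-up
# numbers used only to show the calculation.
hits, misses = 950, 50
hit_ratio = hits / (hits + misses)
print(f"Hit ratio = {hit_ratio:.2f}")   # -> Hit ratio = 0.95
```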
 Cache Mapping:
 Cache mapping refers to the technique used to determine how data is mapped between main memory (RAM) and cache memory.
 The primary goal of cache mapping is to efficiently manage the transfer of data between these two levels of memory to optimize overall system performance.
 Direct Mapping:
 Each block of main memory maps to one specific line in the cache.
 A simple and efficient technique, but it may lead to cache conflicts.
 One cache line corresponds to a fixed set of main-memory blocks.
 Advantages:
 Simplicity in hardware implementation.
 Lower hardware cost.
 Disadvantages:
 Limited flexibility may lead to more conflicts and cache
misses.
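A small sketch of how a direct-mapped cache splits an address into tag, index, and offset; the cache geometry (64 lines of 16 bytes) is an assumed example, not a value from the notes.

```python
# Sketch of direct mapping: a physical address is split into tag, index and
# offset, and the index alone picks the cache line.

LINE_SIZE = 16          # bytes per cache line  -> 4 offset bits
NUM_LINES = 64          # lines in the cache    -> 6 index bits

def split(address):
    offset = address % LINE_SIZE
    index = (address // LINE_SIZE) % NUM_LINES   # which cache line
    tag = address // (LINE_SIZE * NUM_LINES)     # identifies the memory block
    return tag, index, offset

# Two addresses whose index bits collide map to the same line (a conflict):
print(split(0x1234))    # -> (4, 35, 4)
print(split(0x5234))    # -> (20, 35, 4)  same index, different tag
```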

 Write-Through Policy: Simultaneous update of both the cache and main memory with every write operation.
 Process:
 Cache modification immediately reflected in main memory.
 Ensures main memory holds the most up-to-date
information.
 Advantages:
 Guarantees data consistency between cache and main
memory.
 Simplifies management and recovery in system failures.
 Disadvantages:
 Slower write performance due to dual updates.
 Higher bus traffic from frequent main memory updates.
 Write-Back Policy:
 Only the cache is updated on a write; the block is marked "dirty" to show that main memory is out of date.
 A read of a "dirty" location triggers a write-back of the cache contents to main memory.
 Delayed writes reduce the write frequency, minimizing bus traffic.
 Advantages:
 Potentially faster write performance, since only the cache is updated.
 Reduced bus traffic, beneficial for bandwidth-limited systems.
 Disadvantages:
 Potential for data inconsistency until the write-back occurs.
 Complex management of dirty blocks and possible recovery issues after system failures.
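A simplified sketch contrasting the write-through and write-back policies above; the dictionary-based cache and main memory are stand-ins for real hardware, not an actual implementation.

```python
# Simplified sketch contrasting the two write policies described above.

cache, main_memory, dirty = {}, {}, set()

def write_through(addr, value):
    cache[addr] = value
    main_memory[addr] = value           # main memory updated on every write

def write_back(addr, value):
    cache[addr] = value
    dirty.add(addr)                     # main memory updated later, on write-back

def write_back_flush(addr):
    if addr in dirty:                   # dirty block: copy contents back first
        main_memory[addr] = cache[addr]
        dirty.discard(addr)
    cache.pop(addr, None)               # block leaves the cache

write_through(0x20, 7)
print(main_memory[0x20])                # -> 7 (updated immediately)

write_back(0x10, 99)
print(main_memory.get(0x10))            # -> None (main memory still stale)
write_back_flush(0x10)
print(main_memory.get(0x10))            # -> 99 (written back on eviction)
```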
 Fully Associative Mapping:
 Any block of main memory can be placed in any line of the cache.
 Offers maximum flexibility but is more complex to implement.
 Reduces the likelihood of cache conflicts.
 Advantages:
 Maximum flexibility, reducing conflicts.
 Potential for high cache hit rates.
 Disadvantages:
 More complex and expensive hardware.

 Set-Associative Mapping:
 Combines aspects of direct and fully associative mapping.
 Divides the cache into sets, with each set containing multiple lines (blocks).
 Each block of main memory can map to any line within a
specific set.
 Provides a balance between simplicity and flexibility.
 Advantages:
 Offers a balance between flexibility and simplicity.
 Reduces conflicts compared to direct-mapped caches.
 Disadvantages:
 Moderately complex hardware.
 FIFO (First-In, First-Out) Replacement:
 The oldest page in the cache is the first to be replaced.
 Tracks the order of page arrivals.
 When a page needs replacement, the oldest one is replaced.
 Simple and easy to implement.
 May not consider recent usage, leading to suboptimal performance.

 LRU (Least Recently Used) Replacement:
 The least recently used page is replaced.
 Tracks the order of page accesses.
 Replaces the least recently used page when needed.
 Adapts well to temporal locality.
 Implementation complexity and the overhead of tracking access history are its main drawbacks.
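The two replacement policies can be sketched as follows; the 3-frame capacity and the reference string are assumed example values.

```python
# Sketch of the FIFO and LRU replacement policies on a small reference string.
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    queue, resident, faults = deque(), set(), 0
    for page in refs:
        if page not in resident:
            faults += 1
            if len(resident) == frames:          # replace the oldest arrival
                resident.discard(queue.popleft())
            queue.append(page)
            resident.add(page)
    return faults

def lru_faults(refs, frames):
    recent, faults = OrderedDict(), 0            # ordered oldest -> newest use
    for page in refs:
        if page in recent:
            recent.move_to_end(page)             # mark as most recently used
        else:
            faults += 1
            if len(recent) == frames:            # replace least recently used
                recent.popitem(last=False)
            recent[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), lru_faults(refs, 3))   # -> 9 10
```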

 Optimal Page Replacement:
 A theoretical algorithm with complete knowledge of future page accesses.
 Selects the page that will not be used for the longest period in the future.
 Serves as a benchmark for other algorithms.
 Theoretically optimal decisions.
 Impractical in real-world scenarios.
 Not implementable due to the need for future access
knowledge.

 Auxiliary memory, also known as secondary storage or secondary memory, refers to non-volatile storage devices used to store data permanently or semi-permanently.
 Unlike primary memory (RAM), which is volatile and loses its contents when the power is turned off, auxiliary memory retains data even when the power is off. Common types of auxiliary memory include:

 Hard Disk Drives (HDDs):
 HDDs are magnetic storage devices with spinning disks and read/write heads.
 High capacity, but relatively slower access times compared to RAM.

 Solid State Drives (SSDs):
 SSDs use NAND-based flash memory for storage, providing faster access times than HDDs.
 Faster read/write speeds, lower mechanical failure risk compared to HDDs.

 Optical Storage (CDs, DVDs, Blu-ray):
 Optical discs store data through pits and lands on the disc surface, read using lasers.
 Portable, widely used for distribution, slower access compared to HDDs and SSDs.

 USB Flash Drives:
 Flash drives use NAND-based flash memory for portable data storage.
 Portable, small form factor, faster than optical storage.

 Magnetic Tapes:
 Magnetic tapes use sequential access for data storage.
 Used for backup and archival purposes, slower access times.

 Memory Cards (SD cards, microSD cards):
 Compact storage devices commonly used in cameras, smartphones, and other devices.
 Portable, small form factor, widely used in mobile devices.

 External Hard Drives:
 Portable hard drives for additional storage capacity.
 Combines high capacity with portability.
 Virtual Memory is a storage scheme that creates the illusion of a large main memory by treating a portion of secondary memory as if it were main memory.
 Enables loading processes larger than the available main memory.
 Allows the execution of multiple processes with an illusion of abundant memory.
 Instead of loading one large process entirely into main memory, the OS loads different parts of multiple processes.
 Supports efficient multitasking.
 Enhances the illusion of abundant memory for user processes.
 When there is insufficient main memory for loading a page:
 The OS identifies least-used or unreferenced areas of RAM.
 It copies those areas to secondary memory to create space for the new pages.
 All of this happens automatically, giving the computer the perception of virtually unlimited RAM.
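A minimal sketch of the paging idea behind virtual memory; the 4 KB page size, the tiny page table, and the load_from_secondary_memory helper are all assumptions made for illustration.

```python
# Minimal sketch of virtual memory: a virtual address is split into a page
# number and an offset; a page not present in main memory causes a "page
# fault" and is brought in from secondary memory.

PAGE_SIZE = 4096                         # bytes per page -> 12 offset bits

page_table = {0: 7, 1: None, 2: 3}       # page number -> frame number (None = on disk)

def load_from_secondary_memory(page):
    """Assumed helper: pretend to load the page and return a free frame."""
    print(f"page fault: loading page {page} from disk")
    return 9                             # assumed free frame number

def translate(virtual_address):
    page = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    frame = page_table.get(page)
    if frame is None:                    # page fault: bring the page into RAM
        frame = load_from_secondary_memory(page)
        page_table[page] = frame
    return frame * PAGE_SIZE + offset    # physical address

print(hex(translate(0x1ABC)))            # page 1 faults, then maps to frame 9 -> 0x9abc
```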

 Interleaved memory is a memory organization technique in which consecutive memory addresses are distributed across multiple memory modules or banks.
 Data is stored in a round-robin fashion across the modules, allowing parallel access to multiple modules simultaneously.
 Enhances memory bandwidth by enabling concurrent access to different memory banks.
 Distributes the memory access load across modules.
 Increased complexity in memory controller design.
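A short sketch of low-order interleaving; the 4-bank configuration is an assumed example value.

```python
# Sketch of low-order interleaving: consecutive addresses are spread
# round-robin across banks, so a sequential burst touches every bank once.

NUM_BANKS = 4

def bank_of(address):
    return address % NUM_BANKS          # bank number (round-robin)

def offset_in_bank(address):
    return address // NUM_BANKS         # word position inside that bank

for addr in range(8):                   # sequential addresses hit banks 0,1,2,3,0,...
    print(addr, "-> bank", bank_of(addr), "offset", offset_in_bank(addr))
```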


 Associative memory, or Content-Addressable Memory (CAM), is a type of computer memory that allows content-based addressing. It enables the retrieval of data based on its content rather than a specific memory address.
 CAM compares the data input with the stored content and returns the corresponding memory location.
 Enables fast data retrieval based on content.
 Useful in applications such as network routing tables and cache memory.
 Higher cost and complexity compared to traditional memory.
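A software sketch of content-based lookup; a real CAM performs the comparison in parallel in hardware, so the loop below only models the behaviour, and the routing-table entries are assumed example values.

```python
# Minimal sketch of content-addressable lookup: every stored entry is compared
# against the search key and the matching location is returned.

cam = {0: "10.0.0.0/8", 1: "192.168.1.0/24", 2: "172.16.0.0/12"}   # location -> content

def cam_search(content):
    matches = [loc for loc, value in cam.items() if value == content]
    return matches or None              # location(s) holding the matching content

print(cam_search("192.168.1.0/24"))     # -> [1]
print(cam_search("8.8.8.8"))            # -> None (no match)
```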

Practice Questions:
Question 1: A memory system has 2048 memory locations. Calculate the number of address lines needed.
Question 2: A RAM chip has 4096 cells, each storing 8 bits of data. Determine the number of output lines.
Question 3: An EPROM has 8192 memory locations, and each location can store 16 bits. Calculate the number of input (address) lines required.
Question 5: A ROM chip has 256 address lines. Calculate the total number of unique memory locations it can address.

Answers:
Question 1: Number of address lines = log2(number of memory locations) = log2(2048) = 11 address lines.
Question 2: Number of output lines = data width = 8 output lines.
Question 3: Number of address lines = log2(8192) = 13 address lines.
Question 5: Number of memory locations = 2^(number of address lines) = 2^256 unique memory locations (a very large number).
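A quick check of the log2-based answers above (illustrative only, not part of the original notes):

```python
# Address lines needed for a given number of locations; log2 works here
# because each count of locations is an exact power of two.
from math import log2

for locations in (2048, 4096, 8192):
    print(locations, "locations ->", int(log2(locations)), "address lines")
# 2048 -> 11, 4096 -> 12, 8192 -> 13
```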