Computer Memory Organization

What is a memory unit?


• A memory unit is an essential component in any digital
computer since it is needed for storing programs and data.
• Generally, memory/storage is classified into 2 categories:
i. Volatile Memory: This loses its data, when power is
switched off.
ii. Non-Volatile Memory: This is a permanent storage and
does not lose any data when power is switched off.
• Volatile memory that establishes direct communication with the
CPU is called Main Memory.
• The main memory is often referred to as RAM (Random Access
Memory).
• Non-volatile memory that provides backup storage is called
Auxiliary Memory.
• For instance, magnetic disks and magnetic tapes are the most
commonly used auxiliary memories.
• Apart from this basic classification, the memory hierarchy
consists of all the storage devices available in a computer
system, ranging from the slow but high-capacity auxiliary
memory to the relatively faster main memory.
Memory Hierarchy
• The total memory capacity of a computer can be visualized
as a hierarchy of components.
• The memory hierarchy system consists of all the storage
devices contained in a computer system, from the slow
Auxiliary Memory to the faster Main Memory and the smaller
Cache Memory.
• Auxiliary memory access time is generally 1000 times that
of the main memory; hence it is at the bottom of the
hierarchy.
• The main memory occupies the central position because it
is equipped to communicate directly with the CPU and with
auxiliary memory devices through Input/output processor
(I/O).
• When programs not residing in main memory are needed
by the CPU, they are brought in from auxiliary memory.
• Programs not currently needed in main memory are
transferred into auxiliary memory to provide space in main
memory for other programs that are currently in use.
• The cache memory is used to store the programs and data
currently being executed by the CPU.
• The approximate access-time ratio between cache memory and
main memory is about 1 to 7-10.
Memory Hierarchy Interaction
Auxiliary Memory
• Auxiliary memory is known as the lowest-cost, highest-capacity
and slowest-access storage in a computer system.
• Auxiliary memory provides storage for programs and data that
are kept for long-term storage or when not in immediate use.
• The most common examples of auxiliary memories are
magnetic tapes and magnetic disks.
• A magnetic disk is a digital computer memory that uses a
magnetization process to write, rewrite and access data.
• For example, hard drives, zip disks, and floppy disks.
• Magnetic tape is a storage medium that allows for data
archiving, collection, and backup for different kinds of data.
Main Memory
• The main memory in a computer system is often referred
to as Random Access Memory (RAM).
• This memory unit communicates directly with the CPU and
with auxiliary memory devices through an I/O processor.
• The programs that are not currently required in the main
memory are transferred into auxiliary memory to provide
space for currently used programs and data.
• The main memory acts as the central storage unit in a
computer system.
• It is a relatively large and fast memory which is used to store
programs and data during run-time operations.
• The primary technology used for the main memory is based
on semiconductor integrated circuits.
• The integrated circuits for the main memory are classified
into two major units.
i. RAM (Random Access Memory) integrated circuit chips
ii. ROM (Read Only Memory) integrated circuit chips
RAM integrated circuit chips
• The RAM integrated circuit chips are further classified into
two possible operating modes, static and dynamic.
• The primary components of a static RAM are flip-flops that
store the binary information.
• The nature of the stored information is volatile, i.e. it
remains valid as long as power is applied to the system.
• The static RAM is easy to use and takes less time
performing read and write operations as compared to
dynamic RAM.
RAM integrated circuit chips ..
• The dynamic RAM stores the binary information in the
form of electric charges applied to capacitors.
• The capacitors are provided inside the chip by MOS
transistors.
• The dynamic RAM consumes less power and provides large
storage capacity in a single memory chip.
• RAM chips are available in a variety of sizes and are used
as per the system requirement.
RAM integrated circuit chips ..
• The following block diagram demonstrates the chip
interconnection in a 128 * 8 RAM chip.
RAM integrated circuit chips ..
• A 128 * 8 RAM chip has a memory capacity of 128 words of
eight bits (one byte) per word. This requires a 7-bit address
and an 8-bit bidirectional data bus.
• The 8-bit bidirectional data bus allows the transfer of data
either from memory to CPU during a read operation or from
CPU to memory during a write operation.
• The read and write inputs specify the memory operation, and
the two chip select (CS) control inputs are for enabling the
chip only when the microprocessor selects it.
• The bidirectional data bus is constructed using three-state
buffers.
• The output generated by three-state buffers can be placed
in one of the three possible states which include a signal
equivalent to logic 1, a signal equal to logic 0, or a high-
impedance state.
• A tri-state buffer adds an additional "enable" input that
controls whether the primary input is passed to its output
or not.
• If the "enable" input signal is true, the tri-state buffer
behaves like a normal buffer.
• If the "enable" input signal is false, the tri-state buffer
passes a high impedance (or hi-Z) signal, which effectively
disconnects its output from the circuit.
• Tri-state buffers are often connected to a bus which allows
multiple signals to travel along the same connection.
Truth table for a tri-state buffer

Enable   Input A   Output
false    false     hi-Z
false    true      hi-Z
true     false     false
true     true      true
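• The behaviour in this table can be sketched in a few lines of
code. The snippet below is an illustrative Python model (not
taken from any real library); HI_Z is simply a placeholder
object standing in for the high-impedance state.

# Illustrative Python sketch of the tri-state buffer truth table above.
# HI_Z is a made-up placeholder standing in for the high-impedance state.
HI_Z = object()

def tri_state_buffer(enable: bool, a: bool):
    # When enable is true the input is passed through unchanged;
    # when enable is false the output is effectively disconnected.
    return a if enable else HI_Z

for enable in (False, True):
    for a in (False, True):
        out = tri_state_buffer(enable, a)
        print(enable, a, "hi-Z" if out is HI_Z else out)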
A tri-state buffer symbol
• The logic 1 and 0 are standard digital signals whereas the
high-impedance state behaves like an open circuit, which
means that the output does not carry a signal and has no
logic significance.
• From the chip's function table, we can conclude that the
unit is in operation only when CS1 = 1 and CS2 = 0.
• The bar on top of the second select variable indicates that
this input is enabled when it is equal to 0.
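• To make the addressing and chip-select behaviour concrete,
here is a small behavioural sketch in Python of the 128 * 8
chip described above; the class and method names are invented
for this example only.

# Illustrative Python sketch of a 128 * 8 RAM chip (names are made up).
# 128 words -> 7-bit address; each word is 8 bits (one byte).
HI_Z = "hi-Z"   # placeholder for the high-impedance state of the data bus

class RamChip128x8:
    def __init__(self):
        self.cells = [0] * 128

    def selected(self, cs1, cs2_bar):
        # The unit operates only when CS1 = 1 and the active-low CS2 = 0.
        return cs1 == 1 and cs2_bar == 0

    def read(self, cs1, cs2_bar, address):
        if not self.selected(cs1, cs2_bar):
            return HI_Z                      # chip not selected: bus floats
        return self.cells[address & 0x7F]    # keep 7 address bits

    def write(self, cs1, cs2_bar, address, data):
        if self.selected(cs1, cs2_bar):
            self.cells[address & 0x7F] = data & 0xFF   # keep one byte

chip = RamChip128x8()
chip.write(1, 0, 100, 0xAB)   # enabled: store 0xAB at address 100
print(chip.read(1, 0, 100))   # 171
print(chip.read(0, 0, 100))   # 'hi-Z' - the chip is not selected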
ROM integrated circuit
• The primary component of the main memory is RAM
integrated circuit chips, but a portion of memory may be
constructed with ROM chips.
• A ROM memory is used for keeping programs and data
that are permanently resident in the computer.
• Apart from the permanent storage of data, the ROM
portion of main memory is needed for storing an initial
program called a bootstrap loader.
• The primary function of the bootstrap loader program is to
start the computer software (operating system) when
power is turned on.
ROM integrated circuit
• ROM chips are also available in a variety of sizes and are
also used as per the system requirement.
• The following block diagram demonstrates the chip
interconnection in a 512 * 8 ROM chip.
• A ROM chip has a similar organization as a RAM chip.
However, a ROM can only perform read operation; the
data bus can only operate in an output mode.
• The 9-bit address lines in the ROM chip specify any one of
the 512 bytes stored in it.
• The value for chip select 1 and chip select 2 must be 1 and
0 for the unit to operate.
• Otherwise, the data bus is said to be in a high-impedance
state.
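• As a quick check of the address widths quoted above, a chip
with N words needs log2(N) address lines:

from math import log2

for name, words in (("128 * 8 RAM", 128), ("512 * 8 ROM", 512)):
    print(name, "needs", int(log2(words)), "address bits")
# 128 * 8 RAM needs 7 address bits
# 512 * 8 ROM needs 9 address bits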
Cache Memory
• Cache memory is a high-speed memory, which is small in
size but faster than the main memory (RAM).
• The CPU can access it more quickly than the primary
memory.
• So, it is used to synchronize with the high-speed CPU and to
improve its performance.
Cache Memory
• Cache memory can only be accessed by the CPU.
• It holds the data and programs which are frequently used by
the CPU.
• So, it makes sure that the data is instantly available to the CPU
whenever the CPU needs it.
• In other words, if the CPU finds the required data or
instructions in the cache memory, it doesn't need to access
the primary memory (RAM).
• Thus, by acting as a buffer between RAM and CPU, it speeds
up the system performance.
Types of Cache Memory
• L1: It is the first level of cache memory, which is called Level
1 cache or L1 cache.
• In this type of cache memory, a small amount of memory is
present inside the CPU itself.
• If a CPU has four cores (a quad-core CPU), then each core will
have its own Level 1 cache.
• As this memory is present inside the CPU, it can work at the
same speed as the CPU.
• The size of this memory ranges from 2 KB to 64 KB.
• The L1 cache further has two types of caches: Instruction
cache, which stores instructions required by the CPU, and
the data cache that stores the data required by the CPU.
Level 2 cache
• L2: This cache is known as Level 2 cache or L2 cache. This
level 2 cache may be inside the CPU or outside the CPU.
• All the cores of a CPU can have their own separate level 2
cache, or they can share one L2 cache among themselves.
• In case it is outside the CPU, it is connected with the CPU
with a very high-speed bus.
• The memory size of this cache is in the range of 256 KB to
512 KB. In terms of speed, it is slower than the L1 cache.
Level 3 cache
• L3: It is known as Level 3 cache or L3 cache.
• This cache is not present in all the processors; some high-
end processors may have this type of cache.
• This cache is used to enhance the performance of Level 1
and Level 2 cache.
• It is located outside the CPU and is shared by all the cores
of a CPU. Its memory size ranges from 1 MB to 8 MB.
• Although it is slower than L1 and L2 cache, it is faster than
Random Access Memory (RAM).
Types of Cache Memory
How does cache memory work with CPU?

• When the CPU needs data, first of all, it looks inside the L1
cache.
• If it does not find anything in L1, it looks inside the L2
cache.
• If again, it does not find the data in L2 cache, it looks into
the L3 cache.
• If data is found in the cache memory, then it is known as a
cache hit.
• On the contrary, if data is not found inside the cache, it is
called a cache miss.
• If the data is not available in any of the cache memories, the
CPU looks inside the Random Access Memory (RAM).
• If RAM also does not have the data, then it will get that data
from the Hard Disk Drive.
• So, when a computer is started for the first time, or an
application is opened for the first time, data is not available
in cache memory or in RAM.
• In this case, the CPU gets the data directly from the hard disk
drive.
• Thereafter, when you start your computer or open that
application again, the CPU can get that data from cache memory
or RAM.
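• The lookup order described above can be sketched as follows;
this is an illustrative Python model in which plain dictionaries
stand in for L1, L2, L3, RAM and the hard disk (no real timing
is modelled, and the function name is invented for this example).

# Illustrative Python sketch of the L1 -> L2 -> L3 -> RAM -> disk lookup.
def fetch(address, l1, l2, l3, ram, disk):
    for name, level in (("L1", l1), ("L2", l2), ("L3", l3), ("RAM", ram)):
        if address in level:
            print("found in", name)          # cache hit (or RAM hit)
            return level[address]
    print("cache miss - reading from the hard disk")
    data = disk[address]
    # Fill RAM and the caches so the next access is much faster.
    ram[address] = l3[address] = l2[address] = l1[address] = data
    return data

disk = {0x10: "application data"}
l1, l2, l3, ram = {}, {}, {}, {}
fetch(0x10, l1, l2, l3, ram, disk)   # first access: comes from the disk
fetch(0x10, l1, l2, l3, ram, disk)   # second access: hit in L1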
Associative Memory
• An associative memory can be considered as a memory
unit whose stored data can be identified for access by the
content of the data itself rather than by an address or
memory location.
• When a write operation is performed on associative
memory, no address or memory location is given to the
word.
• The memory itself is capable of finding an empty unused
location to store the word.
• On the other hand, when the word is to be read from an
associative memory, the content of the word, or part of the
word, is specified.
• The words which match the specified content are located and
marked for reading.
Representation of an Associative Memory
• From the block diagram, we can say that an associative
memory consists of a memory array and logic for 'm' words
with 'n' bits per word.
• The functional registers like the argument register A and key
register K each have n bits, one for each bit of a word.
• The match register M consists of m bits, one for each
memory word.
• The words which are kept in the memory are compared in
parallel with the content of the argument register.
• The key register (K) provides a mask for choosing a
particular field or key in the argument word.
• If the key register contains a binary value of all 1's, then the
entire argument is compared with each memory word.
• Otherwise, only those bits in the argument that have 1's in
their corresponding position of the key register are
compared.
• Thus, the key provides a mask for identifying a piece of
information which specifies how the reference to memory
is made.
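• A minimal Python sketch of this masked compare is shown below;
the words and the A and K registers are modelled as plain n-bit
integers, and the match register M as a list of 0/1 flags (the
function name is invented for this example).

# Illustrative sketch: masked, word-parallel compare of an associative memory.
def associative_match(words, argument, key, n_bits=8):
    mask = key & ((1 << n_bits) - 1)
    # A word matches when it agrees with the argument in every bit
    # position where the key register K holds a 1; other bits are ignored.
    return [1 if (w & mask) == (argument & mask) else 0 for w in words]

memory = [0b10111001, 0b10011100, 0b01011101]
A = 0b10000000   # argument register: we care about the leading "10" pattern
K = 0b11000000   # key register: only the two leftmost bits are compared
print(associative_match(memory, A, K))   # match register M = [1, 1, 0]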
Memory Access Methods
• Each memory type is a collection of numerous memory
locations.
• To access data from any memory, first it must be located
and then the data is read from the memory location.
• Following are the methods to access information from
memory locations
Memory Access Methods
• Random Access: Main memories are random access
memories, in which each memory location has a unique
address.
• Using this unique address any memory location can be
reached in the same amount of time in any order.
• Sequential Access: This method allows memory access in
a sequence or in order.
• Direct Access: In this mode, information is stored in tracks,
with each track having a separate read/write head.
Random Access
• In this method, any location of the memory can be
accessed randomly, much like indexing into an array.
• Physical locations are independent in this access method.
• It is mostly used in RAM and ROM.
Direct Access
• In this method, individual blocks or records have a unique
address based on physical location.
• Access is accomplished by direct access to reach a general
vicinity plus sequential searching, counting or waiting to
reach the final destination.
• This method is a combination of the above two access methods.
• The access time depends on both the memory organization
and characteristics of storage technology. The access is
semi-random or direct.
• A typical application of direct access is the magnetic hard
disk with its movable read/write head.
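• As a purely illustrative example (the figures below are invented,
not measured), the access time of such a device can be thought
of as a direct part plus a sequential part:

# Hypothetical numbers, for illustration only.
def direct_access_time_ms(seek_ms, rotation_ms, sectors_to_pass, sector_ms):
    # seek + rotational delay: the "direct" jump to the right vicinity
    # sectors_to_pass * sector_ms: the sequential wait for the final sector
    return seek_ms + rotation_ms + sectors_to_pass * sector_ms

print(direct_access_time_ms(seek_ms=9.0, rotation_ms=4.2,
                            sectors_to_pass=3, sector_ms=0.05))   # 13.35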
Sequential Access
• In this method, the memory is accessed in a specific linear
sequential manner.
• The access time depends on the location of the data.
• Applications of this sequential memory access are magnetic
tapes, magnetic disk and optical memories.
Associative Access
• In this memory, a word is accessed by its content rather than
by its address.
• This access method is a special type of random access
method.
• A typical application of associative access is cache memory.
Buffering in Computer
• A buffer is a region of memory used to temporarily hold data
while it is being moved from one place to another.
• A buffer is used when moving data between processes within
a computer. The majority of buffers are implemented in software.
• Buffers are generally used when there is a difference between
the rate at which data is received and the rate at which it can
be processed.
• If we remove buffers, then either we will have data loss, or we
will have lower bandwidth utilization.
• In buffering, whether the communication is direct or
indirect, the messages exchanged by communicating processes
reside in a temporary queue.
• Buffering is typically used to manage data flow between
devices and helps regulate the rate of data transfer.
• The buffer allows the sender to transmit data at a faster
rate while the receiver processes the data at its own pace.
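• The rate-matching role of a buffer can be sketched as a small
deterministic simulation; in this illustrative Python example a
bursty sender fills a deque while the receiver drains one item
per step.

# Illustrative sketch: a buffer absorbing a rate mismatch.
from collections import deque

buffer = deque()
bursts = [3, 0, 2, 0, 0, 1]   # items produced by the sender at each step
for step, produced in enumerate(bursts):
    buffer.extend(f"item{step}.{i}" for i in range(produced))   # fast sender
    if buffer:                                  # slow receiver: one per step
        item = buffer.popleft()
        print("step", step, "consumed", item, "| still waiting:", len(buffer))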
What's the role of a buffer in computer
architecture?
• A buffer in computer architecture serves as a temporary
storage area for data while it's being transferred between
two devices or processes.
• In more detail, a buffer is a region of physical memory
storage used to temporarily hold data while it is being
moved from one place to another.
• It's a crucial component in computer architecture as it
allows for the smooth and efficient transfer of data.
• This is particularly important when there's a difference in
speed between the source and destination of the data.
• For instance, when data is being transferred from a fast
device like a hard drive to a slower one like a printer, a
buffer can store the data from the hard drive while the
printer catches up.
• Buffers are also used in the management of data flows
between processes running at different speeds or with
different priorities.
• They can help to prevent bottlenecks in data flow and
ensure that processes run smoothly without interruption.
• For example, when streaming a video online, a buffer is
used to store a certain amount of video data ahead of
what's currently being viewed.
• This allows the video to continue playing smoothly even if
there's a temporary slowdown in the internet connection.
• In addition, buffers are used in the implementation of
various data structures in computer programming, such as
stacks and queues.
• They are also used in the design of many types of
computer hardware, including CPUs and GPUs, where they
help to manage the flow of data between different parts of
the system.
Types of Buffering
i. Zero Capacity – This queue cannot keep any message
waiting in it. Thus it has maximum length 0. For this, a
sending process must be blocked until the receiving
process receives the message. It is also known as no
buffering.
ii. Bounded Capacity – This queue has finite length n. Thus
it can have n messages waiting in it. If the queue is not
full, a new message can be placed in the queue, and the
sending process is not blocked. It is also known as
automatic buffering.
iii. Unbounded Capacity – This queue has infinite length.
Thus any number of messages can wait in it. In such a
system, a sending process is never blocked.
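• The bounded and unbounded cases can be illustrated with
Python's queue.Queue, as sketched below; zero capacity
corresponds to a pure rendezvous (the sender blocks until the
receiver takes the message) and has no direct queue.Queue
equivalent, since maxsize=0 there actually means unbounded.

import queue

bounded = queue.Queue(maxsize=2)      # bounded capacity: at most 2 messages
bounded.put("m1")
bounded.put("m2")
try:
    bounded.put_nowait("m3")          # queue full: a sender would be blocked
except queue.Full:
    print("bounded queue is full - the sender must wait")

unbounded = queue.Queue()             # unbounded capacity: never blocks
for i in range(1000):
    unbounded.put(i)
print("unbounded queue holds", unbounded.qsize(), "messages")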
Advantages of Buffering
i. It helps in matching speed between two devices, between
which the data is transmitted.
For example, a hard disk has to store the file received
from the modem.
ii. It helps devices with different data transfer sizes to
adapt to each other.
iii. It helps devices to manipulate data before sending or
receiving.
Advantages of Buffering
i. Regulating data flow between devices
ii. Allowing the sender to transmit data at a faster rate
iii. Preventing lost data due to network congestion
Disadvantages of Buffering
• Increased latency due to the time it takes to store and
retrieve data from the buffer
• Increased memory usage due to the buffer’s storage
requirements
Buffer Overflow and Underflow
• Buffer overflow occurs when more data is sent to a buffer
than it can handle, leading to data loss or corruption.
• Buffer underflow occurs when there is not enough data in
the buffer, leading to a delay in data transfer.
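• Both conditions are easy to demonstrate with a tiny fixed-size
buffer; the class below is an illustrative Python sketch, not a
real API.

from collections import deque

class FixedBuffer:
    def __init__(self, capacity):
        self.items = deque()
        self.capacity = capacity

    def write(self, item):
        if len(self.items) >= self.capacity:
            # overflow: more data arrives than the buffer can hold
            raise OverflowError("buffer overflow - data would be lost")
        self.items.append(item)

    def read(self):
        if not self.items:
            # underflow: the consumer asks for data that has not arrived yet
            raise BufferError("buffer underflow - no data ready")
        return self.items.popleft()

buf = FixedBuffer(capacity=2)
buf.write("packet-1")
buf.write("packet-2")
try:
    buf.write("packet-3")
except OverflowError as err:
    print(err)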
Application of Buffering
• Buffering is a critical component of streaming services,
such as Netflix and YouTube.
• When you stream a video, the data is transmitted in
packets and stored in a buffer before being displayed on
your screen.
• Buffering ensures that the video plays smoothly even if there
is a temporary slowdown in the internet connection.
