Cap Ese Q Answers
Q1: Differentiate between direct mapping, associative mapping, and set-associative mapping. A set-associative cache consists of 64 lines, or slots, divided into four-line sets. Main memory contains 4K blocks of 128 words each. Show the format of main memory addresses.
ANS:
Direct Mapping - Advantages:
Simplest type of mapping.
Fast, as only tag-field matching is required while searching for a word.
Comparatively less expensive than associative mapping.
Associative Mapping - Advantages:
It is fast.
Easy to implement.
Set-Associative Mapping - Advantages:
It gives better performance than the direct and associative mapping techniques.
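The main memory address format can be derived from the given parameters as follows:
Main memory size = 4K blocks x 128 words/block = 2^12 x 2^7 = 2^19 words, so the address is 19 bits.
Word field = log2(128) = 7 bits.
Number of sets = 64 lines / 4 lines per set = 16 sets, so the Set field = log2(16) = 4 bits.
Tag field = 19 - 7 - 4 = 8 bits.
Main memory address format: Tag (8 bits) | Set (4 bits) | Word (7 bits).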
Q2: Elaborate the concept of SRAM and DRAM memory with a typical memory structure.
ANS: Static RAM (SRAM) is a type of random access memory that retains its state, i.e., holds its data bits, as long as it receives power.
It is made up of memory cells and is called static RAM because it does not need to be refreshed on a regular basis: unlike dynamic RAM, its cells do not leak charge. As a result, it is faster than DRAM.
It has a special arrangement of transistors that makes a flip-flop, a type of memory cell.
One memory cell stores one bit of data. Most modern SRAM memory cells are made of six CMOS transistors and contain no capacitors.
The access time in SRAM chips can be as low as 10 nanoseconds. Whereas, the access time in
DRAM usually remains above 50 nanoseconds.
Furthermore, its cycle time is much shorter than that of DRAM because it does not pause between accesses. Due to these advantages, SRAM is primarily used for system cache memory, high-speed registers, and small memory banks such as the frame buffer on graphics cards.
The Static RAM is fast because the six-transistor configuration of its circuit maintains the flow of
current in one direction or the other (0 or 1).
Dynamic RAM (DRAM), by contrast, is an integrated circuit (IC) made of millions of transistors and capacitors that are extremely small in size. Each transistor is paired with a capacitor to create a very compact memory cell, so that millions of them can fit on a single memory chip. A DRAM memory cell thus has one transistor and one capacitor, and each cell represents or stores a single bit of data in its capacitor within the integrated circuit.
The capacitor holds this bit of information or data, either as 0 or as 1. The transistor, which is also
present in the cell, acts as a switch that allows the electric circuit on the memory chip to read the
capacitor and change its state.
The capacitor needs to be refreshed at regular intervals to maintain its charge. This is why it is called dynamic RAM: it must be refreshed continuously to maintain its data, or it would forget what it is holding.
Q3: Define the term "Locality of Reference". How is this concept used in the design of the memory system?
ANS: Locality of reference refers to the tendency of a computer program to access the same set of memory locations over a particular period of time. The property of locality of reference is mainly exhibited by loops and subroutine calls in a program. Memory system design exploits this property: recently used data (temporal locality) and data adjacent to it (spatial locality) are kept in fast cache memory, so most accesses are served at cache speed rather than at main-memory speed.
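As an illustration (a hypothetical sketch, not part of the original answer), the snippet below contrasts a row-major traversal of a matrix, which exhibits good spatial locality, with a column-major traversal of the same matrix, which does not:

# Illustrative sketch: traversal order changes spatial locality.
# A simple N x N list-of-lists matrix; all names are hypothetical.
N = 1024
matrix = [[0] * N for _ in range(N)]

def row_major_sum(m):
    # Accesses consecutive elements of each row: good spatial locality.
    total = 0
    for i in range(N):
        for j in range(N):
            total += m[i][j]
    return total

def column_major_sum(m):
    # Jumps to a different row on every access: poor spatial locality,
    # so it benefits far less from caching.
    total = 0
    for j in range(N):
        for i in range(N):
            total += m[i][j]
    return total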
Q4: What do you mean by cache memory? Draw and explain the block diagram of cache memory.
ANS: The data or contents of the main memory that are used frequently by the CPU are stored in the cache memory so that the processor can access them in a shorter time. Whenever the CPU needs to access memory, it first checks the cache; if the data is not found there, the CPU accesses the main memory.
Cache memory is placed between the CPU and the main memory. In the block diagram, the cache sits on the data path between the two: the CPU first accesses the cache, and on a miss the required block is brought in from main memory.
The cache is the fastest component in the memory hierarchy and approaches the speed of CPU components.
Write Through:
In write-through, data is updated in the cache and in main memory simultaneously. This process is simpler and more reliable. It is used when there are no frequent writes to the cache (the number of write operations is low).
It helps in data recovery (in case of a power outage or system failure). A data write experiences latency (delay), as we have to write to two locations (both memory and cache). It solves the inconsistency problem, but it undermines the main advantage of having a cache for write operations (since the whole point of using a cache is to avoid multiple accesses to main memory).
Write Back:
The data is updated only in the cache and written to memory at a later time. Data is updated in memory only when the cache line is about to be replaced (cache line replacement is done using a policy such as Least Recently Used (LRU), FIFO, or LIFO, depending on the application).
Write Back is also known as Write Deferred.
Dirty Bit: Each block in the cache needs a bit to indicate whether the data present in the cache has been modified (dirty) or not (clean). If it is clean, there is no need to write it back to memory.
Write-back is designed to reduce write operations to memory. If the cache fails, or if the system fails or loses power, the modified data will be lost, because it is nearly impossible to restore data from the cache once it is lost.
If a write occurs to a location that is not present in the cache (a write miss), there are two options: Write Allocate and Write Around.
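A minimal sketch of the two write policies, assuming a toy single-level cache model (all names here are hypothetical):

# Toy model contrasting write-through and write-back.
# 'memory' and 'cache' are plain dicts; this is an illustration only.
memory = {}
cache = {}   # addr -> {"value": v, "dirty": bool}

def write_through(addr, value):
    # Update cache and main memory at the same time.
    cache[addr] = {"value": value, "dirty": False}
    memory[addr] = value

def write_back(addr, value):
    # Update only the cache and mark the line dirty.
    cache[addr] = {"value": value, "dirty": True}

def evict(addr):
    # On replacement, a dirty line must be written to memory first.
    line = cache.pop(addr)
    if line["dirty"]:
        memory[addr] = line["value"]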
Q6: Explain the memory hierarchy in the computer system. What is its need? What is the reason for not
having large enough main memory for storing total information in a computer system?
ANS: In computer system design, the memory hierarchy is an enhancement that organizes memory so as to minimize access time. It is needed because no single memory technology is at once fast, large, and cheap: the faster a memory technology, the higher its cost per bit. Main memory therefore cannot be made large enough to hold all the information in the system, since building such a large memory out of fast technology would be prohibitively expensive; bulk information is instead kept in slower, cheaper secondary storage.
Q7: What is the cache coherence problem? How can it be resolved?
ANS: In a multiprocessor system, data inconsistency may occur among adjacent levels, or within the same level, of the memory hierarchy. For example, the cache and the main memory may hold inconsistent copies of the same object.
As multiple processors operate in parallel and independently, multiple caches may possess different copies of the same memory block; this creates the cache coherence problem. Cache coherence schemes help avoid this problem by maintaining a uniform state for each cached block of data.
When a write-back policy is used, the main memory will be updated when the modified data in the cache is
replaced or invalidated.
In general, there are three sources of the inconsistency problem:
Sharing of writable data
Process migration
I/O activity
Cache Coherence Problem solutions:
1. Write update - write through
2. Write update - write back
3. Write invalidate - write through
4. Write invalidate - write back
Write back: Write operations are usually made only to the cache. Main memory is only updated when the corresponding cache line is flushed from the cache.
Write through: All write operations are made to main memory as well as to the cache, ensuring that main memory is always valid.
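As a rough sketch of the write-invalidate idea (an assumed toy model, not a full coherence protocol): when one processor writes a block, every other cache's copy of that block is invalidated:

# Toy write-invalidate sketch: each processor has a private cache (a dict).
# A write to a block invalidates every other cache's copy of that block.
caches = [dict() for _ in range(4)]   # 4 processors, hypothetical
memory = {}

def read(proc, addr):
    if addr not in caches[proc]:
        caches[proc][addr] = memory.get(addr, 0)   # miss: fetch from memory
    return caches[proc][addr]

def write(proc, addr, value):
    # Invalidate all other cached copies before writing (write-invalidate).
    for p, c in enumerate(caches):
        if p != proc:
            c.pop(addr, None)
    caches[proc][addr] = value
    memory[addr] = value   # write-through variant keeps memory valid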
Q8: Explain the architecture of a hard disk.
ANS: Hard disks are secondary storage devices that we can use to store data. Most modern computers use hard disks to store large amounts of data.
The architecture of a hard disk consists of several physical components that include:
Platters
Spindle
Read/write heads
Tracks
Sectors
Platters
Hard disks are organized as a concentric stack of disks. An individual disk is referred to as
a platter.
Spindle
The platters within the hard disk are connected by a spindle that runs through the middle of the
platters.
The spindle rotates in a single direction about its axis (either clockwise or counterclockwise), spinning the platters.
Read/write head
Each surface on a platter contains a read/write head that is used to read or write data onto the
disk.
The read/write heads can move back and forth along the surface of a platter. Read/write heads
are in turn connected to a single actuator arm.
Tracks
Each surface of a platter consists of a fixed number of tracks. These are circular areas on the
surface of a platter that decrease in circumference as we move towards the center of the platter.
Sectors
Each track is divided into a fixed number of sectors. A sector is the smallest subdivision of a track in which data is stored.
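To show how tracks (cylinders), heads, and sectors combine into a disk address, here is a sketch of the standard CHS-to-LBA (logical block address) conversion; the geometry values below are hypothetical:

# Standard CHS -> LBA conversion; the geometry values are made up.
HEADS_PER_CYLINDER = 16
SECTORS_PER_TRACK = 63

def chs_to_lba(cylinder, head, sector):
    # Sectors are conventionally numbered from 1, hence (sector - 1).
    return (cylinder * HEADS_PER_CYLINDER + head) * SECTORS_PER_TRACK + (sector - 1)

# Example: cylinder 2, head 3, sector 4
print(chs_to_lba(2, 3, 4))   # -> (2*16 + 3)*63 + 3 = 2208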
Q9: What is RAID? Explain the different RAID levels.
ANS: RAID, or redundant array of independent disks, is a data storage virtualization technology that
combines multiple physical disk drive components into one or more logical units for data redundancy,
performance improvement, or both.
It is a way of storing the same data in different places on multiple hard disks or solid-state drives to protect
data in the case of a drive failure. A RAID system consists of two or more drives working in parallel.
RAID combines several independent and relatively small disks into single storage of a large size. The disks
included in the array are called array members. The disks can combine into the array in different ways,
which are known as RAID levels.
RAID Types
1. RAID 0 (striped disks)
RAID 0 takes any number of disks and merges them into one large volume. It increases speed, as you are reading from and writing to multiple disks at a time. An individual file can use the speed and capacity of all the drives in the array. The downside to RAID 0, though, is that it is NOT redundant: the loss of any individual disk causes complete data loss. This RAID type is less reliable than having a single disk.
2. RAID 1 (mirrored disks)
It duplicates data across two disks in the array, providing full redundancy. Both disks store exactly the same data, at the same time, and at all times. Data is not lost as long as one disk survives. The total capacity of the array equals the capacity of the smallest disk in the array. At any given instant, the contents of both disks in the array are identical.
RAID 1 is capable of much more complicated configurations, but its primary purpose is redundancy: if you completely lose one drive, you can still stay up and running off the other drive.
3. RAID 5 (striping with distributed parity)
RAID 5 requires the use of at least three drives. It combines these disks to protect data against the loss of any one disk; the array's storage capacity is reduced by one disk. It stripes data across multiple drives to increase performance, and it adds redundancy by distributing parity information across the disks.
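The parity that RAID 5 distributes is a simple XOR across the corresponding data blocks; a minimal sketch with arbitrary values:

# RAID 5 parity is the XOR of the corresponding data blocks.
d1, d2, d3 = 0b1010, 0b0110, 0b1100   # arbitrary data "blocks"
parity = d1 ^ d2 ^ d3

# If one disk is lost (say d2), XOR-ing the survivors with the parity
# reconstructs the missing block.
recovered_d2 = d1 ^ d3 ^ parity
assert recovered_d2 == d2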
4. RAID 6 (striping with double parity)
RAID 6 is similar to RAID 5, but the parity data are written to two drives. The additional parity enables the array to continue to function even if two disks fail simultaneously. However, this extra protection comes at a cost: RAID 6 has slower write performance than RAID 5.
Q10: Explain LRU Algorithm for Cache Block Replacement
ANS: Least Recently Used (LRU) is a cache replacement algorithm that evicts an entry when the cache is full. It lets us access values faster by removing the least recently used entries. It is also an important algorithm for the operating system, where it serves as a page replacement method that reduces the number of page faults.
The cache is of fixed size; when it becomes full, we delete the least recently used value and insert the new one. We implement get() and put() functions for the LRU cache. The get() function returns a value from the cache if it is present; if it is not present, it returns -1 (a page fault or cache miss). The put() function inserts a key and its value into the cache.
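A minimal sketch of such an LRU cache using Python's collections.OrderedDict (the class name and capacity shown are illustrative):

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()   # keys kept in recency order

    def get(self, key):
        if key not in self.cache:
            return -1                # cache miss / page fault
        self.cache.move_to_end(key)  # mark as most recently used
        return self.cache[key]

    def put(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used

# Example usage:
c = LRUCache(2)
c.put(1, 'A'); c.put(2, 'B')
c.get(1)          # returns 'A'; key 1 is now most recently used
c.put(3, 'C')     # evicts key 2, the least recently used
print(c.get(2))   # -1 (miss)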