Cap Ese Q Answers


Q1: Difference between Direct Mapping, Associative Mapping & Set-Associative Mapping. Also: a set-associative cache consists of 64 lines, or slots, divided into four-line sets; main memory contains 4K blocks of 128 words each. Show the format of main memory addresses.

ANS:

Number of comparisons required:
- Direct mapping: needs only one comparison, because a direct formula gives the effective cache address for each main memory block.
- Associative mapping: needs comparison with all tag bits, i.e., the cache control logic must examine every block's tag at the same time in order to determine whether a block is in the cache or not.
- Set-associative mapping: needs a number of comparisons equal to the number of blocks per set, as a set can contain more than one block.

Main memory address format:
- Direct mapping: the address is divided into 3 fields: TAG, BLOCK & WORD. The BLOCK & WORD fields together form an index; the least significant WORD bits identify a unique word within a block, the BLOCK bits specify one of the cache blocks, and the TAG bits are the most significant bits.
- Associative mapping: the address is divided into 2 fields: TAG & WORD.
- Set-associative mapping: the address is divided into 3 fields: TAG, SET & WORD.

Block placement:
- Direct mapping: there is exactly one possible location in the cache for each block from main memory, because of the fixed formula.
- Associative mapping: a main memory block can be mapped to any cache block.
- Set-associative mapping: a main memory block maps to one particular set, but may be placed in any block within that set.

Effect on hit ratio:
- Direct mapping: if the processor needs to access the same memory location from 2 different main memory pages frequently, the cache hit ratio decreases.
- Associative mapping: frequent accesses to the same memory location from 2 different main memory pages have no effect on the cache hit ratio.
- Set-associative mapping: frequent accesses to two different conflicting pages of main memory reduce the cache hit ratio, but less than in direct mapping.

Search time:
- Direct mapping: search time is least, because there is only one possible cache location for each block from main memory.
- Associative mapping: search time is greatest, as the cache control logic examines every block's tag for a match.
- Set-associative mapping: search time increases with the number of blocks per set.

Advantages:
- Direct mapping: simplest type of mapping; fast, as only the tag field needs matching while searching for a word; comparatively less expensive than associative mapping.
- Associative mapping: it is fast; easy to implement.
- Set-associative mapping: gives better performance than the direct and associative mapping techniques.

Disadvantages:
- Direct mapping: gives lower performance, because blocks that map to the same cache location keep replacing each other's data and tag values.
- Associative mapping: expensive, because it needs to store the address (tag) along with the data.
- Set-associative mapping: the most expensive, as cost increases with set size.
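For the numeric part of the question, the field widths follow directly from the given figures (a standard worked computation):

- Main memory holds 4K blocks x 128 words = 2^12 x 2^7 = 2^19 words, so a main memory address is 19 bits.
- WORD field: 128 words per block = 2^7, so 7 bits.
- SET field: 64 lines / 4 lines per set = 16 sets = 2^4, so 4 bits.
- TAG field: 19 - 7 - 4 = 8 bits.

Main memory address format: TAG (8 bits) | SET (4 bits) | WORD (7 bits).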

Q2 Elaborate the concept of SRAM and DRAM memory with typical memory structure.

ANS: Static RAM (SRAM) is a type of random access memory that retains its data bits, i.e., holds its data, as long as it receives power.

 It is made up of memory cells and is called static RAM because, unlike dynamic RAM, it does not need to be refreshed on a regular basis: its cells hold their state without leaking charge. So, it is faster than DRAM.

 It has a special arrangement of transistors that makes a flip-flop, a type of memory cell.

 One memory cell stores one bit of data. Most of the modern SRAM memory cells are made of six
CMOS transistors, but lack capacitors.

 The access time of SRAM chips can be as low as 10 nanoseconds, whereas the access time of DRAM usually remains above 50 nanoseconds.

 Furthermore, its cycle time is much shorter than that of DRAM, as it does not pause between accesses. Due to these advantages, SRAM is primarily used for system cache memory, high-speed registers, and small memory banks such as the frame buffer on graphics cards.

 The Static RAM is fast because the six-transistor configuration of its circuit maintains the flow of
current in one direction or the other (0 or 1).

 Dynamic RAM (DRAM) is also made up of memory cells.

 It is an integrated circuit (IC) made of millions of transistors and capacitors that are extremely small; each transistor is paired with a capacitor to create a very compact memory cell, so that millions of them can fit on a single memory chip. A memory cell of a DRAM therefore has one transistor and one capacitor, and each cell represents or stores a single bit of data in its capacitor within the integrated circuit.

 The capacitor holds this bit of information or data, either as 0 or as 1. The transistor, which is also
present in the cell, acts as a switch that allows the electric circuit on the memory chip to read the
capacitor and change its state.
 The capacitor needs to be refreshed at regular intervals to maintain its charge. This is the reason it is called dynamic RAM: it must be refreshed continuously to maintain its data, or it would forget what it is holding.

Q3: Define the term "Locality of Reference". How is this concept used in the design of the memory system?

ANS: Locality of reference refers to the tendency of a computer program to access the same set of memory locations over a particular period of time. The property of locality of reference is mainly shown by loops and subroutine calls in a program. It has two forms: temporal locality (a recently accessed location is likely to be accessed again soon) and spatial locality (locations near a recently accessed one are likely to be accessed soon). Memory system design exploits both: a cache keeps recently used words close to the CPU, and each miss brings in a whole block of adjacent words, so most subsequent references are served from fast memory.
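A small, hypothetical illustration of the access pattern (the snippet is only meant to show the idea; the same pattern holds in any language):

    data = list(range(1_000_000))

    total = 0            # 'total' is touched on every iteration -> temporal locality
    for x in data:       # elements are visited in sequential order, so neighbouring
        total += x       # items tend to sit in the same cached block -> spatial locality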

Q4: What do you mean by cache memory? Draw and explain the block diagram of cache memory.

ANS: The data or contents of the main memory that are used frequently by the CPU are stored in the cache memory so that the processor can access them in a shorter time. Whenever the CPU needs to access memory, it first checks the cache memory; if the data is not found in the cache, the CPU then accesses the main memory.

Cache memory is placed between the CPU and the main memory. The block diagram for a cache memory can be represented as:

[Block diagram: CPU <-> Cache memory <-> Main memory]

The cache is the fastest component in the memory hierarchy and approaches the speed of CPU components.

The basic operation of a cache memory is as follows:


o When the CPU needs to access memory, the cache is examined first. If the word is found in the cache, it is read from the fast memory.
o If the word addressed by the CPU is not found in the cache, the main memory is accessed to read the word.
o A block of words containing the one just accessed is then transferred from main memory to cache memory. The block size may vary from one word (the one just accessed) to about 16 words adjacent to the one just accessed.
o The performance of the cache memory is frequently measured in terms of a quantity called hit ratio.
o When the CPU refers to memory and finds the word in the cache, it is said to produce a hit.
o If the word is not found in the cache, it is in main memory, and the reference counts as a miss.
o The hit ratio is the number of hits divided by the total number of CPU references to memory (hits plus misses); a small numeric sketch follows this list.
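A minimal sketch of how the hit ratio feeds into performance, assuming the common two-level model in which a hit costs the cache access time and a miss costs the main memory access time (both timing figures are made up for illustration):

    CACHE_TIME_NS = 10      # assumed cache access time
    MEMORY_TIME_NS = 100    # assumed main memory access time

    def hit_ratio(hits: int, misses: int) -> float:
        # Hit ratio = hits / total CPU references (hits + misses).
        return hits / (hits + misses)

    def avg_access_time_ns(h: float) -> float:
        # Simple two-level model: h * Tc + (1 - h) * Tm.
        return h * CACHE_TIME_NS + (1 - h) * MEMORY_TIME_NS

    h = hit_ratio(hits=950, misses=50)   # 0.95
    print(avg_access_time_ns(h))         # 14.5 (ns)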

Q. 5 What is Write Back and Write Through Cache?

ANS: Write Through:

 In write-through, data is updated in the cache and in main memory simultaneously. This process is simpler and more reliable. It is used when there are no frequent writes to the cache (the number of write operations is small).
 It helps in data recovery (in case of a power outage or system failure). A data write experiences latency (delay), as we have to write to two locations (both memory and cache). It solves the inconsistency problem, but it undercuts the advantage of having a cache for write operations (the whole point of using a cache is to avoid multiple accesses to the main memory).
Write Back:
 The data is updated only in the cache and written to the memory at a later time. Data is updated in the memory only when the cache line is about to be replaced (the victim line is chosen by a replacement policy such as least recently used (LRU), FIFO, or LIFO, depending on the application).
Write Back is also known as Write Deferred.
 Dirty Bit: each block in the cache needs a bit to indicate whether the data present in the cache was modified (dirty) or not modified (clean). If it is clean, there is no need to write it back into the memory; the scheme is designed to reduce write operations to memory. If the cache fails, or the system fails or loses power, the modified data is lost, because it is nearly impossible to restore data from the cache once it is lost.
 If a write occurs to a location that is not present in the cache (a write miss), we have two options: Write Allocate (bring the block into the cache and then write) and Write Around (write directly to main memory, bypassing the cache). A sketch of the two basic write policies follows.
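A minimal sketch of the two policies, assuming a toy cache over a plain dictionary standing in for main memory (all names here are illustrative, not a real cache design):

    memory = {}                                # stands in for main memory

    class ToyCache:
        def __init__(self, write_back: bool):
            self.write_back = write_back
            self.lines = {}                    # address -> [value, dirty_bit]

        def write(self, addr, value):
            if self.write_back:
                # Write-back: update only the cache and mark the line dirty.
                self.lines[addr] = [value, True]
            else:
                # Write-through: update cache and main memory together.
                self.lines[addr] = [value, False]
                memory[addr] = value

        def evict(self, addr):
            # On replacement, a write-back cache must flush dirty data.
            value, dirty = self.lines.pop(addr)
            if dirty:
                memory[addr] = value

With write_back=True, memory[addr] stays stale until evict() runs; that window of inconsistency is exactly what the dirty bit tracks, and it is why a power failure can lose modified data in a write-back cache.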

Q6: Explain the memory hierarchy in the computer system. What is its need? What is the reason for not
having large enough main memory for storing total information in a computer system?

ANS: In computer system design, the Memory Hierarchy is an enhancement that organizes the memory so that access time is minimized.

This Memory Hierarchy Design is divided into 2 main types:


1. External Memory or Secondary Memory –
Comprises magnetic disk, optical disk and magnetic tape, i.e., peripheral storage devices that are accessible by the processor via an I/O module.
2. Internal Memory or Primary Memory –
Comprises main memory, cache memory & CPU registers. This is directly accessible by the processor.
We can infer the following characteristics of the Memory Hierarchy Design:
1. Capacity:
It is the global volume of information the memory can store. As we move from top to bottom in
the Hierarchy, the capacity increases.
2. Access Time:
It is the time interval between the read/write request and the availability of the data. As we
move from top to bottom in the Hierarchy, the access time increases.
3. Performance:
Earlier, when computer systems were designed without a memory hierarchy, the speed gap between the CPU registers and main memory grew because of the large difference in access time, which lowered the performance of the system; an enhancement was therefore required. This enhancement took the form of the Memory Hierarchy Design, which increases the performance of the system. One of the most significant ways to increase system performance is to minimize how far down the memory hierarchy one has to go to manipulate data.
4. Cost per bit:
As we move from bottom to top in the Hierarchy, the cost per bit increases i.e. Internal
Memory is costlier than External Memory.
5. There is simply not enough space in one memory unit to accommodate all the programs used in a typical computer. Moreover, most computer users accumulate, and continue to accumulate, large amounts of data-processing software, and not all of the accumulated information is needed by the processor at the same time.
6. Requirements of a Memory Management System: memory management keeps track of the status of each memory location, whether it is allocated or free. It allocates memory dynamically to programs at their request and frees it for reuse when it is no longer needed. Memory management is meant to satisfy some requirements that we should keep in mind:
- Relocation
- Protection
- Sharing
- Logical organization
- Physical organization

Q7 What is a cache coherence problem ? How can it be solved?

ANS: The Cache Coherence Problem

In a multiprocessor system, data inconsistency may occur among adjacent levels or within the same level of
the memory hierarchy. For example, the cache and the main memory may have inconsistent copies of the
same object.
As multiple processors operate in parallel and independently, multiple caches may possess different copies of the same memory block; this creates the cache coherence problem. Cache coherence schemes help to avoid this problem by maintaining a uniform state for each cached block of data.

When a write-back policy is used, the main memory will be updated when the modified data in the cache is
replaced or invalidated.
In general, there are three sources of inconsistency problem −
 Sharing of writable data
 Process migration
 I/O activity
Cache coherence problem solutions:
1. Write-update with write-through
2. Write-update with write-back
3. Write-invalidate with write-through
4. Write-invalidate with write-back
Write back : Write operations are usually made only to the cache. Main memory is only
updated when the corresponding cache line is flushed from the cache.
Write through : All write operations are made to main memory as well as to the cache,
ensuring that main memory is always valid.
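A toy sketch of the write-invalidate idea, assuming a shared bus that broadcasts invalidations to every other cache (the names and structure are illustrative; a real protocol such as MSI/MESI also tracks per-line states):

    class Bus:
        def __init__(self):
            self.caches = []

        def broadcast_invalidate(self, addr, writer):
            # Write-invalidate: discard every other cached copy so that
            # no processor can read a stale value afterwards.
            for cache in self.caches:
                if cache is not writer:
                    cache.lines.pop(addr, None)

    class Cache:
        def __init__(self, bus):
            self.lines = {}                 # address -> value
            self.bus = bus
            bus.caches.append(self)

        def write(self, addr, value):
            self.bus.broadcast_invalidate(addr, writer=self)
            self.lines[addr] = value        # now the only valid cached copy

    bus = Bus()
    c1, c2 = Cache(bus), Cache(bus)
    c1.lines[0x10] = c2.lines[0x10] = 1     # both caches hold the block
    c1.write(0x10, 2)                       # c2's stale copy is invalidated
    assert 0x10 not in c2.lines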

Q8 Explain the organization of Hard Disk

ANS: Hard disks are secondary storage devices we can use to store data. Most modern
computers use hard disks to store large amounts of data.

The architecture of a hard disk consists of several physical components that include:

 Platters
 Spindle
 Read/write heads
 Tracks
 Sectors

Platters

Hard disks are organized as a concentric stack of disks. An individual disk is referred to as
a platter.

Each platter consists of two surfaces: a lower and an upper surface.



Spindle
The platters within the hard disk are connected by a spindle that runs through the middle of the
platters.

The spindle rotates about its axis in a single direction (either clockwise or counter-clockwise).

The movement of the spindle causes the platters to rotate as well.

Read/write head
Each surface on a platter contains a read/write head that is used to read or write data onto the
disk.
The read/write heads can move back and forth along the surface of a platter. Read/write heads
are in turn connected to a single actuator arm.

Tracks
Each surface of a platter consists of a fixed number of tracks. These are circular areas on the
surface of a platter that decrease in circumference as we move towards the center of the platter.

Data is first written to the outermost track.

Sectors
Each track is divided into a fixed number of sectors; a sector is the smallest subdivision of a track in which data is stored.

Some important terms must be noted here:


1. Seek time – the time taken by the R/W head to reach the desired track from its current position.
2. Rotational latency – the time taken for the desired sector to rotate under the R/W head.
3. Data transfer time – the time taken to transfer the required amount of data; it depends on the rotational speed.
4. Controller time – the processing time taken by the controller.
5. Average access time = average seek time + average rotational latency + data transfer time + controller time (a worked example follows this list).
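A small numeric sketch of the access-time formula, with made-up but typical figures (7200 RPM drive, 4 ms average seek, 0.2 ms controller overhead; all three values are assumptions for illustration):

    RPM = 7200
    SEEK_MS = 4.0                          # assumed average seek time
    CONTROLLER_MS = 0.2                    # assumed controller overhead

    rotation_ms = 60_000 / RPM             # one full rotation: ~8.33 ms
    avg_rot_latency_ms = rotation_ms / 2   # on average half a rotation: ~4.17 ms

    # Assume the request reads 4 KB from a track that holds 1 MB, i.e. the
    # transfer occupies that fraction of one full rotation.
    transfer_ms = rotation_ms * (4 / 1024) # ~0.03 ms

    avg_access_ms = SEEK_MS + avg_rot_latency_ms + transfer_ms + CONTROLLER_MS
    print(round(avg_access_ms, 2))         # ~8.4 ms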

Q9 What is RAID level? Explain RAID Level 1 to 6.

ANS: RAID or redundant array of independent disks is a data storage virtualization technology that
combines multiple physical disk drive components into one or more logical units for data redundancy,
performance improvement, or both.

It is a way of storing the same data in different places on multiple hard disks or solid-state drives to protect
data in the case of a drive failure. A RAID system consists of two or more drives working in parallel.

RAID combines several independent and relatively small disks into a single large-capacity store. The disks included in the array are called array members. The disks can be combined into the array in different ways, which are known as RAID levels.

RAID Types

1. RAID 0 (striped disks)

RAID 0 takes any number of disks and merges them into one large volume. It increases speed, as you are reading and writing from multiple disks at a time, so an individual file can use the speed and capacity of all the drives of the array. The downside of RAID 0 is that it is NOT redundant: the loss of any individual disk causes complete data loss, so this RAID type is much less reliable than a single disk.
2. RAID 1 (mirrored disks)

It duplicates data across the two disks in the array, providing full redundancy. Both disks store exactly the same data, at the same time, and at all times; at any given instant, the contents of both disks in the array are identical. Data is not lost as long as one disk survives, and the total capacity of the array equals the capacity of the smallest disk in the array.

RAID 1 can also be used in more complicated nested configurations, but its point is primarily redundancy: if you completely lose one drive, you can still stay up and running off the other drive.

3. RAID 2, 3 and 4 (striping with dedicated parity)

These intermediate levels are rarely used in practice: RAID 2 stripes data at the bit level and protects it with a Hamming error-correcting code, RAID 3 stripes at the byte level with a dedicated parity disk, and RAID 4 stripes at the block level with a dedicated parity disk, which tends to become a write bottleneck.

4. RAID 5 (striped disks with single parity)

RAID 5 requires the use of at least three drives. It combines these disks to protect data against the loss of any one disk; the array's storage capacity is reduced by one disk. It stripes data across multiple drives to increase performance, and it also adds redundancy by distributing parity information across the disks.
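A minimal sketch of the parity idea behind RAID 5, assuming a three-disk stripe in which the parity block is the bytewise XOR of the data blocks (pure illustration, not a real RAID implementation):

    def xor_blocks(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    # One stripe: two data blocks plus one parity block.
    d0 = b"\x12\x34\x56\x78"
    d1 = b"\x9a\xbc\xde\xf0"
    parity = xor_blocks(d0, d1)        # stored on the third disk

    # If the disk holding d1 fails, its contents can be rebuilt from the
    # surviving data block and the parity: d1 = d0 XOR parity.
    rebuilt = xor_blocks(d0, parity)
    assert rebuilt == d1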

5. RAID 6 (striped disks with double parity)

RAID 6 is similar to RAID 5, but the parity data are written to two drives. The use of additional parity
enables the array to continue to function even if two disks fail simultaneously. However, this extra
protection comes at a cost. RAID 6 has a slower write performance than RAID 5.
Q10: Explain LRU Algorithm for Cache Block Replacement

ANS: Least Recently Used (LRU) is a cache replacement algorithm: when the cache is full, it evicts the value that has gone unused for the longest time, which lets us keep accessing frequently used values quickly. It is also an important algorithm for operating systems, where it provides a page replacement method that keeps the number of page faults low.

The cache will be of fixed size, and when it becomes full we delete the least recently used value before inserting a new one. We will implement get() and put() functions for the LRU cache. The get() function returns a value from the cache if it is present; if it is not present in the cache, it returns -1 (a page fault or cache miss). The put() function inserts a key and value into the cache.
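A minimal runnable sketch of the get()/put() interface described above, using Python's collections.OrderedDict to track recency (the capacity value and the OrderedDict approach are implementation choices, not mandated by the text):

    from collections import OrderedDict

    class LRUCache:
        def __init__(self, capacity: int):
            self.capacity = capacity
            self.cache = OrderedDict()     # oldest entry first, newest last

        def get(self, key):
            if key not in self.cache:
                return -1                  # cache miss / page fault
            self.cache.move_to_end(key)    # mark as most recently used
            return self.cache[key]

        def put(self, key, value):
            if key in self.cache:
                self.cache.move_to_end(key)
            self.cache[key] = value
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict least recently used

    cache = LRUCache(2)
    cache.put(1, "a"); cache.put(2, "b")
    cache.get(1)            # key 1 is now most recently used
    cache.put(3, "c")       # evicts key 2, the least recently used
    print(cache.get(2))     # -1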

Example – Consider the following reference string :


1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Find the number of page faults using the least recently used (LRU) page replacement algorithm with 3 page frames.

Explanation – The first seven references (1, 2, 3, 4, 1, 2, 5) all fault: the first three fill the empty frames, and each of the next four evicts the least recently used page. The following two references (1, 2) are hits, since both pages are still resident. The final three references (3, 4, 5) all fault again, each evicting the least recently used resident page. Total: 10 page faults (and 2 hits) out of 12 references.
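The count can be checked mechanically with a short simulation of LRU page replacement (a throwaway verification script, not part of the original answer):

    def lru_page_faults(refs, frames):
        resident = []                      # least recently used first
        faults = 0
        for page in refs:
            if page in resident:
                resident.remove(page)      # refresh recency on a hit
            else:
                faults += 1
                if len(resident) == frames:
                    resident.pop(0)        # evict the least recently used page
            resident.append(page)          # most recently used goes at the end
        return faults

    refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
    print(lru_page_faults(refs, frames=3))   # 10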
