
OS Answers


Interviewers will mostly ask topic-based questions.

1. What is Concurrency?
Concurrency is the execution of multiple instruction sequences at the same time. It happens in an operating system when several process threads run in parallel. These threads communicate with each other through shared memory or message passing.
2. What is Mutual Exclusion?
A mutual exclusion (mutex) is a program object that prevents simultaneous access to a
shared resource. This concept is used in concurrent programming with a critical section, a
piece of code in which processes or threads access a shared resource.

3. What is the Critical Section in OS?


The critical section is the segment of code or the program that accesses or modifies a shared resource, such as shared variables. At most one process should execute its critical section at any given time.
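These two ideas can be sketched in Python: the shared counter update below is a critical section, and `threading.Lock` plays the role of the mutex (the counter itself is just an illustrative shared resource):

```python
import threading

counter = 0              # shared resource
lock = threading.Lock()  # mutex guarding the critical section

def increment(n):
    global counter
    for _ in range(n):
        with lock:       # critical section: only one thread runs this at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(50_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 -- without the lock, updates could be lost
```

With the lock, all 200,000 increments survive; without it, concurrent read-modify-write sequences can overwrite each other.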

4. Difference between program and process.

1. Program: contains a set of instructions designed to complete a specific task.
   Process: an instance of an executing program.
2. Program: a passive entity, as it resides in secondary memory.
   Process: an active entity, as it is created during execution and loaded into main memory.
3. Program: exists at a single place and continues to exist until it is deleted.
   Process: exists for a limited span of time; it terminates after completing its task.
4. Program: a static entity.
   Process: a dynamic entity.
5. Program: has no resource requirements; it only needs memory space for storing its instructions.
   Process: has high resource requirements; it needs resources like CPU, memory addresses, and I/O during its lifetime.
6. Program: does not have a control block.
   Process: has its own control block, called the Process Control Block.
7. Program: has two logical components, code and data.
   Process: in addition to program data, also requires further information for its management and execution.
8. Program: does not change itself.
   Process: many processes may execute a single program; their program code may be the same, but their program data may differ.
5. Difference between program and process (in brief).
A program is a set of instructions used to perform a certain task; a process is a program being executed. A program is stored in secondary memory and has a longer lifespan than a process. A process is active only while the program is being executed and has a short lifespan compared to a program.

6. Page Replacement policies


First In First Out (FIFO): This is the simplest page replacement algorithm. The operating system keeps track of all pages in memory in a queue, with the oldest page at the front. When a page needs to be replaced, the page at the front of the queue is selected for removal.
Optimal Page Replacement: The page that would not be used for the longest duration of time in the future is replaced.
Least Recently Used (LRU): The page that has been used least recently is replaced.
Most Recently Used (MRU): The page that has been used most recently is replaced.
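A minimal FIFO simulation (a sketch, not tied to any particular OS) makes the first policy concrete; the reference string below is the classic one used to demonstrate Belady's anomaly, where adding a frame actually increases the fault count:

```python
from collections import deque

def fifo_page_faults(reference_string, frames):
    """Count page faults under FIFO replacement."""
    memory = deque()  # front of the deque holds the oldest page
    faults = 0
    for page in reference_string:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()  # evict the oldest page
            memory.append(page)
    return faults

reference = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_page_faults(reference, 3))  # 9 faults
print(fifo_page_faults(reference, 4))  # 10 faults: Belady's anomaly
```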

7. Disk scheduling algorithms


Disk scheduling is done by operating systems to schedule I/O requests arriving for the disk.
Disk scheduling is also known as I/O scheduling.
FCFS: FCFS is the simplest of all the disk scheduling algorithms. In FCFS, the requests are addressed in the order in which they arrive in the disk queue.
Advantages:
● Every request gets a fair chance
● No indefinite postponement

Disadvantages:
● Does not try to optimize seek time
● May not provide the best possible service
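FCFS seek cost is easy to sketch: service requests in arrival order and sum the head movement (the track numbers below are made-up examples):

```python
def fcfs_seek_distance(requests, head):
    """Total head movement when requests are serviced in arrival order."""
    total = 0
    for track in requests:
        total += abs(track - head)  # move the head to the next request
        head = track
    return total

# Head starts at track 50; requests are listed in arrival order
print(fcfs_seek_distance([82, 170, 43, 140, 24, 16, 190], 50))  # 642
```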

SSTF: In SSTF (Shortest Seek Time First), the request with the shortest seek time is executed first. The seek time of every pending request is calculated in advance, and requests are scheduled according to their calculated seek times, so the request nearest the disk arm is executed first. SSTF is a clear improvement over FCFS, as it decreases the average response time and increases the throughput of the system.
Advantages:
● Average Response Time decreases
● Throughput increases

Disadvantages:

● Overhead to calculate seek time in advance


● Can cause Starvation for a request if it has higher seek time as
compared to incoming requests
● High variance of response time as SSTF favours only some requests
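SSTF can be sketched as repeatedly servicing the pending request nearest the current head position (the request tracks below are illustrative):

```python
def sstf_seek_distance(requests, head):
    """Total head movement when the nearest pending request is always served next."""
    pending = list(requests)
    total = 0
    while pending:
        nearest = min(pending, key=lambda t: abs(t - head))  # shortest seek first
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

# Head starts at track 50
print(sstf_seek_distance([82, 170, 43, 140, 24, 16, 190], 50))  # 208
```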

SCAN: In the SCAN algorithm, the disk arm moves in a particular direction, servicing the requests in its path; after reaching the end of the disk, it reverses direction and services the requests arriving in its path on the way back. The algorithm works like an elevator, and hence it is also known as the elevator algorithm. As a result, requests in the mid-range are serviced more often, while requests arriving just behind the disk arm have to wait.

Advantages:

● High throughput
● Low variance of response time
● Low average response time

Disadvantages:

● Long waiting time for requests for locations just visited by the disk arm

C-SCAN: In the SCAN algorithm, the disk arm rescans the path it has just serviced after reversing direction, even though few requests may be pending in the scanned area while many wait at the other end. C-SCAN avoids this: after reaching one end, the arm returns to the other end without servicing requests on the way back, treating the disk as circular.

Advantages: Provides a more uniform wait time compared to SCAN


1. RSS – It stands for Random Scheduling and, as its name suggests, it picks requests in random order. It fits situations where scheduling involves random attributes such as random processing times, random due dates, random weights, and stochastic machine breakdowns, which is why it is usually used for analysis and simulation.
2. LIFO – In the LIFO (Last In, First Out) algorithm, the newest jobs are serviced before the existing ones, i.e. the request that entered last is serviced first, and then the rest in the same order.
Advantages
○ Maximizes locality and resource utilization
Disadvantages
○ Can seem unfair to other requests; if new requests keep coming in, it causes starvation of the older, existing ones.
3. N-STEP SCAN – It is also known as the N-STEP LOOK algorithm. A buffer is created for N requests, and all requests in the buffer are serviced in one go. Once the buffer is full, no new requests are added to it; they are sent to another buffer instead. When these N requests have been serviced, the next N requests are taken up, and in this way every request gets guaranteed service.
Advantages
○ It eliminates starvation of requests completely
4. FSCAN– This algorithm uses two sub-queues. During the scan all
requests in the first queue are serviced and the new incoming requests
are added to the second queue. All new requests are kept on halt until
the existing requests in the first queue are serviced.
Advantages
○ FSCAN, along with N-STEP SCAN, prevents "arm stickiness" (a phenomenon in I/O scheduling where the scheduling algorithm keeps servicing requests at or near the current sector and thus prevents any seeking)

Conditions for deadlock:
1. Mutual exclusion: When two people meet on a landing, they can't just walk through, because there is space for only one person. This condition, which allows only one person (or process) to use the step between them (or the resource) at a time, is the first condition necessary for deadlock.
2. No preemption: To resolve a deadlock, one could simply cancel one of the processes so the other can continue. But the operating system doesn't do so: it allocates resources to the processes for as long as needed, until the task is completed. Hence there is no temporary reallocation of resources. This is another necessary condition for deadlock.
3. Hold and wait: When the two people refuse to retreat and hold their ground, it is called holding. This is the next necessary condition for deadlock.
4. Circular wait: When the two people refuse to retreat and each waits for the other to retreat so they can complete their task, it is called circular wait. It is the last condition for deadlock to occur.

Deadlock prevention:
1. Eliminate Mutual Exclusion
It is not possible to violate mutual exclusion, because some resources, such as tape drives and printers, are inherently non-shareable.
2. Eliminate hold and wait
Allocate all required resources to a process before it starts executing; this eliminates the hold-and-wait condition but leads to low device utilization. For example, if a process needs a printer only at a later time but the printer is allocated before execution starts, the printer remains blocked until the process completes. Alternatively, a process can be required to release its current set of resources before requesting a new set. This solution may lead to starvation.
3. Eliminate No preemption
Preempt resources from the process when resources required by other high priority
processes.
4. Eliminate Circular Wait
Each resource is assigned a number, and a process may request resources only in increasing order of numbering. For example, if process P1 has been allocated resource R5, a subsequent request by P1 for R4 or R3 (numbered lower than R5) will not be granted; only requests for resources numbered higher than R5 will be granted.
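A minimal sketch of this numbering rule in Python, using two locks as stand-ins for resources (the names R3 and R5 are hypothetical): because every thread acquires the lower-numbered resource first, a cycle in the wait-for graph cannot form.

```python
import threading

R3 = threading.Lock()  # resource numbered 3
R5 = threading.Lock()  # resource numbered 5
results = []

def worker(name):
    # Both threads acquire R3 before R5 (ascending numeric order),
    # so neither can hold R5 while waiting for R3: no circular wait.
    with R3:
        with R5:
            results.append(name)

threads = [threading.Thread(target=worker, args=(f"P{i}",)) for i in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # both processes finished; no deadlock
```

If one thread instead locked R5 before R3, the two threads could each hold one lock while waiting for the other, which is exactly the circular wait the ordering rule forbids.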

Types of multithreading models


Multithreading Model:
Multithreading allows an application to divide its task into individual threads. The same process or task can be carried out by a number of threads; in other words, more than one thread performs the task in multithreading. With the use of multithreading, multitasking can be achieved.

The main drawback of single threading systems is that only one task can be performed at a
time, so to overcome the drawback of this single threading, there is multithreading that allows
multiple tasks to be performed.

For example, in a multithreaded web server, client1, client2, and client3 can access the web server without any waiting: in multithreading, several tasks can run at the same time.

In an operating system, threads are divided into user-level threads and kernel-level threads. User-level threads are handled above the kernel and are managed without kernel support. Kernel-level threads, on the other hand, are managed directly by the operating system. Nevertheless, there must be some form of relationship between user-level and kernel-level threads.

There are three established multithreading models classifying these relationships:

Many to one multithreading model


One to one multithreading model
Many to Many multithreading models
Many to one multithreading model:
The many-to-one model maps many user-level threads to one kernel thread. This relationship provides an efficient context-switching environment and is easily implemented even on a simple kernel with no thread support.

The disadvantage of this model is that, since only one kernel-level thread is schedulable at any given time, it cannot take advantage of the hardware parallelism offered by multithreaded or multi-processor systems. All thread management is done in user space, and if one thread makes a blocking call, the entire process blocks.

One to one multithreading model


The one-to-one model maps each user-level thread to a single kernel-level thread. This relationship allows multiple threads to run in parallel. However, the benefit comes with a drawback: every new user thread requires creating a corresponding kernel thread, an overhead that can hinder the performance of the parent process. Windows and Linux operating systems mitigate this problem by limiting the growth of the thread count.

Many to Many Model multithreading model


In this model there are several user-level threads and several kernel-level threads; the number of kernel threads created depends on the particular application. The developer can create as many user threads as necessary, and the numbers at the two levels need not be the same. The many-to-many model is a compromise between the other two models: if any thread makes a blocking system call, the kernel can schedule another thread for execution, and it avoids the complexity drawbacks of the previous models. Although this model allows the creation of multiple kernel threads, it does not by itself guarantee true parallelism, because the kernel can still schedule only one thread per processor at a time. The many-to-many model thus associates several user-level threads with the same or a smaller number of kernel-level threads.

What is Peterson's Solution in OS?


Peterson's solution is a classic solution to the critical section problem. The critical section problem requires that no two processes change or modify the value of a shared resource at the same time. For example, let int a = 5 and suppose there are two processes, p1 and p2, that can modify the value of a; a correct solution ensures that only one of them does so at a time.
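A two-process sketch of Peterson's algorithm in Python. It relies on CPython executing these operations sequentially consistently (the GIL makes that effectively true here), so treat it as an illustration of the algorithm, not a production synchronization primitive:

```python
import threading

flag = [False, False]  # flag[i]: process i wants to enter its critical section
turn = 0               # which process yields when both want in
counter = 0            # the shared resource

def process(i):
    global turn, counter
    other = 1 - i
    for _ in range(10_000):
        flag[i] = True          # entry section: announce intent
        turn = other            # politely give priority to the other process
        while flag[other] and turn == other:
            pass                # busy-wait while the other is in, or ahead
        counter += 1            # critical section
        flag[i] = False         # exit section

t0 = threading.Thread(target=process, args=(0,))
t1 = threading.Thread(target=process, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 20000: no update was lost
```

The `turn = other` line is the key: if both processes want in simultaneously, exactly one of them wins, giving mutual exclusion, progress, and bounded waiting.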

What is a monitor in OS?


Monitors are a programming language component that aids in the regulation of
shared data access. The Monitor is a package that contains shared data structures,
operations, and synchronization between concurrent procedure calls. Therefore, a
monitor is also known as a synchronization tool.
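A monitor can be sketched in Python by bundling the shared data, its operations, and a condition variable behind a single lock; the `BoundedCounter` class and its limit are invented purely for illustration:

```python
import threading

class BoundedCounter:
    """Monitor sketch: shared data plus synchronized operations on it."""
    def __init__(self, limit):
        self._lock = threading.Lock()                # the monitor's single lock
        self._room = threading.Condition(self._lock)
        self._value = 0
        self._limit = limit

    def increment(self):
        with self._lock:             # at most one thread inside the monitor
            while self._value >= self._limit:
                self._room.wait()    # sleep until a decrement makes room
            self._value += 1

    def decrement(self):
        with self._lock:
            self._value -= 1
            self._room.notify()      # wake one waiting incrementer, if any

c = BoundedCounter(2)
c.increment(); c.increment(); c.decrement()
print(c._value)  # 1
```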

Define Kernel and Shell.


A shell is basically an interface between the kernel and the user, typically a CLI (command-line interpreter). A kernel is the very core of a typical OS: a low-level program that interfaces with the hardware (disks, RAM, CPU, etc.) on top of which all applications run.
What is PCB (Process Control Block)?
A Process Control Block is a data structure that contains information about a process. It is also known as a task control block, or an entry in the process table. It is very important for process management, as the data structuring for processes is done in terms of the PCB. It also reflects the current state of the operating system.

Structure of the Process Control Block

The process control block stores many data items needed for efficient process management, including the process state, process ID, program counter, saved CPU registers, CPU scheduling information, memory-management information, accounting information, and I/O status information.
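A simplified PCB can be sketched as a record; the fields below are typical textbook ones, and the structure is illustrative rather than any real kernel's (Linux's actual PCB, `task_struct`, holds far more):

```python
from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    pid: int                     # process identifier
    state: str = "new"           # new / ready / running / waiting / terminated
    program_counter: int = 0     # address of the next instruction to execute
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0            # CPU scheduling information
    open_files: list = field(default_factory=list)  # I/O status information

pcb = ProcessControlBlock(pid=42)
pcb.state = "ready"              # e.g. the scheduler admits the process
print(pcb)
```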

What is context switching?


Context switching is a technique used by the operating system to switch the CPU from one process to another. When a switch is performed, the system stores the status of the old running process in the form of its registers and assigns the CPU to a new process to execute its tasks. While the new process runs, the previous process waits in the ready queue; its execution later resumes at the point where it was stopped. Context switching is a defining characteristic of a multitasking operating system, in which multiple processes share the same CPU to perform multiple tasks without needing additional processors.

Where do we need to use binary semaphore and counting semaphore?


A binary semaphore takes only the values 0 and 1 and is used when exactly one process may be in the critical section at a time, i.e. for mutual exclusion. A counting semaphore is used when more than one process may be in the critical section at the same time, for example to control access to a pool of N identical resources.
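A counting-semaphore sketch: `threading.Semaphore(3)` models a pool of three identical resources, and the bookkeeping below checks that no more than three workers are ever inside the guarded region at once (the worker and its timing are illustrative):

```python
import threading
import time

pool = threading.Semaphore(3)  # counting semaphore: 3 identical resources
guard = threading.Lock()       # protects the bookkeeping below
active = 0
max_seen = 0

def worker():
    global active, max_seen
    with pool:                 # blocks once 3 workers are already inside
        with guard:
            active += 1
            max_seen = max(max_seen, active)
        time.sleep(0.01)       # pretend to use the resource
        with guard:
            active -= 1

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(max_seen)  # never exceeds 3
```

A `threading.Semaphore(1)` would behave as a binary semaphore, admitting one worker at a time.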

List different types of memory in the CPU.Arrange according to speed.


The devices arranged in ascending order of speed are: floppy disk (slowest), hard disk, RAM, and cache (fastest). RAM is a computer storage device that stores the data and machine code currently in use. A HARD DISK is a non-removable magnetic disk with a large data capacity. CACHE is built into the CPU, which uses it to store instructions that are needed repeatedly to run programs. A FLOPPY disk is removable magnetic hardware that reads stored data.

Why do hard drive names start from C and then D? Why not A and B?

ANS – It's because A and B used to be floppy drives back in the days when floppy drives were the norm and there were no hard disks. The letter C was given to any hard disk the user installed, and the letters A and B have been reserved for floppy drives ever since.

In Linux, how does one become the superuser for that particular session?

There are two ways to become the superuser. The first is to log in as root directly. The second is to execute the command su while logged in to another user account. The su command may be used to change one's current account to that of a different user after entering the proper password. It takes the username corresponding to the desired account as its argument; root is the default when no argument is provided. After you enter the su command (without arguments), the system prompts you for the root password. If you type the password correctly, you get the normal root account prompt (by default, a number sign: #), indicating that you have successfully become the superuser and that the rules normally restricting file access and command execution do not apply. For example:
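A hypothetical session (the prompts shown and the user name alice are invented for illustration):

```shell
$ su                 # no argument: become root for this session
Password:
# whoami
root
# exit
$ su - alice         # or switch to another user's account instead
Password:
```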
What does a pipe do in a Linux command?
A pipe is used to combine two or more commands: the output of one command acts as input to the next command, whose output may in turn act as input to the command after it, and so on. It can also be visualized as a temporary connection between two or more commands/programs/processes.
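A small runnable sketch: printf emits three lines, sort orders them, and head keeps only the first, with each command's standard output feeding the next command's standard input:

```shell
printf 'banana\napple\ncherry\n' | sort | head -n 1
# prints: apple
```

The commands in a pipeline run concurrently; the kernel buffers the bytes flowing between them.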
