OS Module II
Cooperating Process
Convenience: Even an individual user may have many tasks on which to work at
one time.
There are a number of applications where processes need to communicate with each
other. Processes can communicate by passing information to each other via shared
memory or by message passing.
Files
Files are the most obvious way of passing information. One process writes a file, and
another reads it later. It is often used for IPC.
Shared Memory
When processes communicate via shared memory, they do so by entering and retrieving
data from a single block of physical memory that is designated as shared by all of them.
Each process has direct access to this block of memory.
Message Passing
Message passing is a more indirect form of communication. Rather than having direct
access to a block of memory, processes communicate by sending and receiving
packets of information.
Processes that are working together often share some common storage that one can
read and write. The shared storage may be in main memory or it may be a shared file.
Each process has a segment of code, called a critical section, which accesses shared
memory or files. The key issue involving shared memory or shared files is to find a way
to prohibit more than one process from reading and writing the shared data at the same
time.
The important feature of the system is that, when one process is executing in its critical
section, no other process is allowed to execute in its critical section. That is, no two
processes are executing in their critical sections at the same time.
A solution to the critical section problem must satisfy the following three requirements:
1. Mutual Exclusion
If process Pi is executing in its critical section, then no other processes can be
executing in their critical sections.
2. Progress
If no process is executing in its critical section and there exist some processes that wish
to enter their critical section, then the selection of the processes that will enter the
critical section next cannot be postponed indefinitely.
3. Bounded Waiting
A bound must exist on the number of times that other processes are allowed to enter
their critical sections after a process has made a request to enter its critical section and
before that request is granted.
Algorithm 1
A global variable turn is used to control access to the shared resource. turn can
take two values, 0 or 1, to indicate which process may enter the critical section. Each
process, before entering the critical section, checks the value of turn to see if the other
process is in the critical section.
For example, if turn = 0, then P1 refrains from entering the critical section until the value
of turn becomes 1.
Drawback
Suppose process P0 enters the critical section; after completing the critical section it sets
the value of turn to 1 and then continues with its remainder section. During this time
P1 enters the critical section; once it completes the critical section it sets the value of
turn to 0 and then enters its remainder section. Now suppose P0 crashes in its
remainder section: it can never enter the critical section again and never sets turn to 1,
which prevents P1 from entering the critical section as well. This algorithm satisfies
mutual exclusion, but not progress.
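The strict-alternation idea of Algorithm 1 can be sketched as a runnable example (the thread function, iteration count, and shared counter are illustrative additions; the busy wait relies on CPython's interpreter lock for visibility between threads):

```python
import threading

# Sketch of Algorithm 1 (strict alternation) for two processes P0 and P1.
# `turn` says whose turn it is; each thread spins until it is its turn.
turn = 0
count = 0        # shared data touched only inside the critical section
ITERS = 1000

def process(my_id, other_id):
    global turn, count
    for _ in range(ITERS):
        while turn != my_id:   # entry section: busy-wait for our turn
            pass
        count += 1             # critical section
        turn = other_id        # exit section: hand the turn to the other process

t0 = threading.Thread(target=process, args=(0, 1))
t1 = threading.Thread(target=process, args=(1, 0))
t0.start(); t1.start()
t0.join(); t1.join()
# Strict alternation means no update is lost. The drawback from the text still
# applies: if one thread stopped permanently, the other could never re-enter.
```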
Algorithm 2
In this algorithm we use an array:
boolean flag[2];
The elements of the array are initialized to false. If flag[i] is true, this value indicates that
Pi is ready to enter the critical section. Pi then checks whether process Pj is in the
critical section; if so, Pi waits until flag[j] == false.
Process Pi
do
{
    flag[i] = true;
    while (flag[j])
    {
        /* do nothing */
    }
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (true);
In this algorithm each process has its own flag variable to indicate that it wants to enter
the critical section. For example, if P1 wishes to enter the critical section, it first sets
flag[1] to true and then waits until flag[0] becomes false. This algorithm satisfies mutual
exclusion, but again not progress: if both processes set their flags to true at the same
time, each waits indefinitely for the other, and neither enters the critical section.
Algorithm 3
By combining the key ideas of algorithm 1 and algorithm 2, we obtain a correct solution to
the critical section problem, where all three requirements are met. This solution to
the critical-section problem is also known as Peterson's solution. Peterson's
solution is restricted to two processes that alternate execution between their critical
sections and remainder sections. The processes are numbered P0 and P1. For
convenience, when presenting Pi, we use Pj to denote the other process. Peterson's
solution requires two data items to be shared between the two processes:
boolean flag[2];
int turn;
Process Pi
do
{
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
    {
        /* do nothing */
    }
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (true);
To enter the critical section, process Pi, first sets flag[i] to be true and then sets turn to
the value j, thereby asserting that if the other process wishes to enter the critical
section, it can do so. If both processes try to enter at the same time, turn will be set to
both i and j at roughly the same time. Only one of these assignments will last; the other
will occur but will be overwritten immediately. The eventual value of turn decides which
of the two processes is allowed to enter its critical section first.
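This argument can be checked with a small runnable sketch (the worker function, counter, and iteration count are illustrative; CPython's interpreter lock supplies the sequentially consistent memory the algorithm assumes, while real hardware would need memory barriers):

```python
import threading

# A runnable sketch of Peterson's solution for two threads.
flag = [False, False]   # flag[i]: Pi wants to enter its critical section
turn = 0                # whose turn it is when both want to enter
counter = 0
N = 1000

def worker(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True               # announce intent to enter
        turn = j                     # give the other process priority
        while flag[j] and turn == j:
            pass                     # wait while Pj is interested and has the turn
        counter += 1                 # critical section (a lost-update-prone increment)
        flag[i] = False              # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
```

Mutual exclusion is what makes the unlocked `counter += 1` safe here: only one thread at a time can be between the entry and exit sections.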
wait(S)
{
    while (S <= 0)
    {
        /* do nothing */
    }
    S = S - 1;
}
signal(S)
{
    S = S + 1;
}
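The definitions above busy-wait; a semaphore can also block the caller instead of spinning. A minimal sketch using a mutex and condition variable (the class and its fields are illustrative, not a standard API):

```python
import threading

# A sketch of wait()/signal() that blocks instead of busy-waiting, built from
# a mutex and a condition variable.
class Semaphore:
    def __init__(self, value):
        self.value = value
        self.cond = threading.Condition()

    def wait(self):                    # the P operation
        with self.cond:
            while self.value <= 0:     # re-check the value after each wakeup
                self.cond.wait()
            self.value -= 1

    def signal(self):                  # the V operation
        with self.cond:
            self.value += 1
            self.cond.notify()         # wake one blocked waiter, if any

s = Semaphore(2)
s.wait(); s.wait()    # both succeed immediately: value goes 2 -> 1 -> 0
s.signal()            # value is back to 1
```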
Semaphore Usage
We can use semaphores with the n-process critical-section problem. The n processes
share a semaphore, mutex (standing for mutual exclusion), initialized to 1. We can
also use semaphores to solve various synchronization problems.
For example, consider two concurrently running processes: P1 with a statement S1 and P2
with a statement S2. Suppose we require that S2 be executed only after S1 has
completed. We can implement this scheme readily by letting P1 and P2 share a
common semaphore synch, initialized to 0, and inserting the statements
S1;
signal(synch);
in process P1 and the statements
wait(synch);
S2;
in process P2.
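This scheme can be sketched with Python's built-in semaphore (the thread bodies and the `order` list are illustrative additions used to observe the ordering):

```python
import threading

# synch starts at 0, so P2 blocks in wait(synch) until P1 performs signal(synch).
synch = threading.Semaphore(0)
order = []                     # records the order in which S1 and S2 ran

def p1():
    order.append("S1")         # statement S1
    synch.release()            # signal(synch)

def p2():
    synch.acquire()            # wait(synch): blocks until P1 has executed S1
    order.append("S2")         # statement S2

t2 = threading.Thread(target=p2)
t1 = threading.Thread(target=p1)
t2.start()                     # start P2 first to show that it really waits
t1.start()
t1.join(); t2.join()
```

Even though P2 is started first, S2 always runs after S1.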
Semaphore mutex;
Initially mutex = 1
do
{
    wait(mutex);
    /* critical section */
    signal(mutex);
    /* remainder section */
} while (true);
The Bounded-Buffer Problem
Two counting semaphores are used for this. One semaphore, empty, counts the
empty slots in the buffer:
Initialize the semaphore to N
A producer must wait on this semaphore before writing to the buffer
A consumer will signal this semaphore after reading from the buffer
A second semaphore, full, counts the number of data items in the buffer:
Initialize the semaphore to 0
A consumer must wait on this semaphore before reading from the buffer
A producer will signal this semaphore after writing to the buffer
In our problem, the producer and consumer processes share the following data
structures:
int n;
semaphore mutex = 1;
semaphore empty = n;
semaphore full = 0;
The pool consists of n buffers, each capable of holding one item. The mutex semaphore
provides mutual exclusion for accesses to the buffer pool and is initialized to the value
1. The empty and full semaphores count the number of empty and full buffers. The
semaphore empty is initialized to the value n; the semaphore full is initialized to the
value 0. The symmetry between the producer and the consumer can be interpreted as
the producer producing full buffers for the consumer or as the consumer producing
empty buffers for the producer.
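The scheme above can be sketched as a runnable example (the buffer size, item count, and the `consumed` list are illustrative; Python's `threading.Semaphore` plays the role of the counting semaphores):

```python
import threading
from collections import deque

# Bounded buffer: `empty` counts free slots (starts at n), `full` counts items
# (starts at 0), and `mutex` protects the buffer itself.
n = 5
buffer = deque()
mutex = threading.Semaphore(1)
empty = threading.Semaphore(n)
full = threading.Semaphore(0)
ITEMS = 100
consumed = []

def producer():
    for item in range(ITEMS):
        empty.acquire()          # wait(empty): claim a free slot
        mutex.acquire()          # wait(mutex)
        buffer.append(item)      # add the item to the buffer
        mutex.release()          # signal(mutex)
        full.release()           # signal(full): one more item available

def consumer():
    for _ in range(ITEMS):
        full.acquire()           # wait(full): wait for an item
        mutex.acquire()
        consumed.append(buffer.popleft())
        mutex.release()
        empty.release()          # signal(empty): one more free slot

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
```

The producer can never overrun the n-slot buffer and the consumer can never read from an empty one.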
The Readers-Writers Problem
There is a shared resource which should be accessed by multiple processes. There are
two types of processes: readers and writers. Any number of readers can read
from the shared resource simultaneously, but only one writer at a time can write to the
shared resource.
For example, if two readers access the shared resource at the same time, there is no
problem. However, if two writers, or a reader and a writer, access the object at the same
time, there may be problems. To solve this, a writer must get exclusive access to the
object: when a writer is accessing the object, no reader or writer may access it;
however, multiple readers can access the object at the same time. This can be
implemented using semaphores.
The semaphores mutex and wrt are initialized to 1; the read count rc is initialized to 0. The
semaphore wrt is common to both the reader and writer processes. The mutex
semaphore is used to ensure mutual exclusion when the variable rc is updated. The rc
variable keeps track of how many processes are currently reading the object. The
semaphore wrt functions as a mutual-exclusion semaphore for the writers. It is also
used by the first reader that enters and the last reader that exits the critical section.
semaphore mutex = 1;
semaphore wrt = 1;
int rc = 0;
Reader process
wait(mutex);
rc++;
if (rc == 1)
    wait(wrt);
signal(mutex);
/* read the object */
wait(mutex);
rc--;
if (rc == 0)
    signal(wrt);
signal(mutex);
Writer process
wait(wrt);
/* write the object */
signal(wrt);
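The solution above can be sketched as a runnable example (the thread counts and the `shared`/`reads` variables are illustrative additions):

```python
import threading

# First readers-writers scheme: `wrt` gives writers exclusive access, `mutex`
# protects the reader count `rc`, and only the first/last reader touches `wrt`.
mutex = threading.Semaphore(1)
wrt = threading.Semaphore(1)
rc = 0
shared = 0
reads = []

def reader():
    global rc
    mutex.acquire()
    rc += 1
    if rc == 1:
        wrt.acquire()        # first reader locks out writers
    mutex.release()
    reads.append(shared)     # read the object (concurrently with other readers)
    mutex.acquire()
    rc -= 1
    if rc == 0:
        wrt.release()        # last reader lets writers back in
    mutex.release()

def writer():
    global shared
    wrt.acquire()
    shared += 1              # write the object under exclusive access
    wrt.release()

threads = [threading.Thread(target=writer) for _ in range(10)]
threads += [threading.Thread(target=reader) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
```

Every writer update survives, and every reader observes some consistent intermediate value.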
Under the normal mode of operation, a process may utilize a resource in only the
following sequence:
1. Request: The process requests the resource. If the request cannot be granted
immediately (for example, if the resource is being used by another process), then the
requesting process must wait until it can acquire the resource.
2. Use: The process can operate on the resource (for example, if the resource is a
printer, the process can print on the printer).
3. Release: The process releases the resource.
The request and release of resources may be system calls. Examples are the request()
and release() of a device, open() and close() of a file, and the allocate() and free() memory
system calls. Similarly, the request and release of semaphores can be accomplished
through the wait() and signal() operations on semaphores or through acquire() and
release() of a mutex lock.
A deadlock situation can arise if the following four conditions hold simultaneously in a
system:
1. Mutual exclusion: At least one resource must be held in a non sharable mode; that
is, only one process at a time can use the resource. If another process requests that
resource, the requesting process must be delayed until the resource has been released.
2. Hold and wait: A process must be holding at least one resource and waiting to
acquire additional resources that are currently being held by other processes.
3. No preemption: Resources cannot be preempted; that is, a resource can be released
only voluntarily by the process holding it, after that process has completed its task.
4. Circular wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is
waiting for a resource held by P1, P1 is waiting for a resource held by P2,..., Pn−1 is
waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
We emphasize that all four conditions must hold for a deadlock to occur. The
circular-wait condition implies the hold-and-wait condition, so the four conditions are not
completely independent.
Resource-Allocation Graph
Resource instances:
◦ One instance of resource type R1
◦ Two instances of resource type R2
◦ One instance of resource type R3
◦ Three instances of resource type R4
Process states:
◦ Process P1 is holding an instance of resource type R2 and is waiting for an
instance of resource type R1.
◦ Process P2 is holding an instance of R1 and an instance of R2 and is waiting
for an instance of R3.
◦ Process P3 is holding an instance of R3.
Processes P1, P2, and P3 are deadlocked. Process P2 is waiting for the resource R3,
which is held by process P3. Process P3 is waiting for either process P1 or process P2
to release resource R2. In addition, process P1 is waiting for process P2 to release
resource R1.
Now consider the resource-allocation graph in the figure below. In this example, we also
have a cycle: P1 → R1 → P3 → R2 → P1. However, there is no deadlock here: a process
holding an instance of R2 may release it, and that instance can then be allocated to P3,
breaking the cycle.
To ensure that deadlocks never occur, the system can use either a deadlock-
prevention or a deadlock-avoidance scheme.
DEADLOCK PREVENTION
For a deadlock to occur, each of the four necessary conditions must hold. By ensuring
that at least one of these conditions cannot hold, we can prevent the occurrence of
a deadlock.
Mutual Exclusion
The mutual-exclusion condition must hold for non-sharable resources; it cannot, in
general, be denied, because some resources are intrinsically non-sharable.
Sharable resources, in contrast, do not require mutually exclusive access and
thus cannot be involved in a deadlock. Read-only files are a good example of a
sharable resource. If several processes attempt to open a read-only file at the same
time, they can be granted simultaneous access to the file. A process never needs to
wait for a sharable resource.
To ensure that the hold-and-wait condition never occurs in the system, we must
guarantee that, whenever a process requests a resource, it does not hold any other
resources. One protocol that we can use requires each process to request and be
allocated all its resources before it begins execution. An alternative protocol allows a
process to request resources only when it has none. Before it can request any additional
resources, however, it must release all the resources that it is currently allocated.
For example, consider a process that copies data from a DVD drive to a disk file, sorts
the disk file, and then prints the results to a printer. Under the first protocol, the process
must request the DVD drive, disk file, and printer at the beginning and hold the printer for
its entire execution, even though it needs the printer only at the end.
The second method allows the process to request initially only the DVD drive and disk
file. It copies from the DVD drive to the disk and then releases both the DVD drive and
the disk file. The process must then request the disk file and the printer. After copying
the disk file to the printer, it releases these two resources and terminates.
Both these protocols have two main disadvantages. First, resource utilization may
be low, since resources may be allocated but unused for a long period. Second,
starvation is possible. A process that needs several popular resources may have to wait
indefinitely, because at least one of the resources that it needs is always allocated to
some other process.
No Preemption
To ensure that this condition does not hold, we can use the following protocol. If a
process is holding some resources and requests another resource that cannot be
immediately allocated to it, then all resources it is currently holding are preempted and
added to the list of resources for which the process is waiting. The process will be
restarted only when it can regain its old resources as well as the new ones that it is
requesting.
Circular wait
One way to ensure that this condition never holds is to impose a total ordering of all
resource types and to require that each process requests resources in an increasing
order of enumeration.
Let R = {R1, R2, ..., Rm} be the set of resource types. We assign each resource type a
unique integer number via a function F, which allows us to compare two resources and to
determine whether one precedes another in the ordering.
A process can initially request any number of instances of a resource type, say Ri. After
that, the process can request instances of resource type Rj if and only if F(Rj) > F(Ri).
Alternatively, we can require that a process requesting an instance of resource type Rj
must have released any resources Ri such that F(Ri) ≥ F(Rj).
If these two protocols are used then the circular wait condition cannot hold.
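The ordering protocol can be sketched as follows (the lock indices play the role of F(Ri); the `use_resources` helper is an illustrative name). Two threads that would deadlock if they locked in opposite orders both complete when every request follows the total order:

```python
import threading

# Circular-wait prevention by total ordering: every process acquires locks in
# increasing index order, so no cycle of waits can form.
locks = [threading.Lock() for _ in range(4)]   # resources R0..R3

def use_resources(indices):
    ordered = sorted(indices)          # always request in increasing order of F
    for i in ordered:
        locks[i].acquire()
    # ... use the resources here ...
    for i in reversed(ordered):
        locks[i].release()

# Two processes that name the resources in opposite orders; sorting the
# requests removes the potential R1/R3 deadlock.
t1 = threading.Thread(target=use_resources, args=([1, 3],))
t2 = threading.Thread(target=use_resources, args=([3, 1],))
t1.start(); t2.start()
t1.join(); t2.join()
```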
DEADLOCK AVOIDANCE
The resource-allocation-graph algorithm for deadlock avoidance is similar to the
resource-allocation graph, but in addition to request edges and assignment edges we
introduce a new type of edge, called a claim edge. A claim edge Pi → Rj
indicates that process Pi may request resource Rj at some time in the future. This edge
resembles a request edge in direction but is represented in the graph by a dashed line.
When process Pi requests resource Rj, the claim edge Pi → Rj is converted to a
request edge. Similarly, when a resource Rj is released by Pi, the assignment edge
Rj → Pi is reconverted to a claim edge Pi → Rj.
If no cycle exists, then the allocation of the resource will leave the system in a safe
state. If a cycle is found, then the allocation will put the system in an unsafe state.
Before converting a claim edge to a request edge, the system checks whether granting
the request would create a cycle. If so, the resource is not assigned.
Banker’s Algorithm
Several data structures must be maintained to implement the banker’s algorithm. These
data structures encode the state of the resource-allocation system. We need the
following data structures, where n is the number of processes in the system and m is
the number of resource types:
• Available: A vector of length m indicates the number of available resources of each
type. If Available[j] equals k, then k instances of resource type Rj are available.
• Max: An n×m matrix defines the maximum demand of each process. If Max[i][j] equals
k, then process Pi may request at most k instances of resource type Rj.
• Allocation: An n×m matrix defines the number of resources of each type currently
allocated to each process.
• Need: An n×m matrix indicates the remaining resource need of each process. If
Need[i][j] equals k, then process Pi may need k more instances of resource type Rj to
complete its task. Note that Need[i][j] equals Max[i][j] − Allocation[i][j].
Safety Algorithm
This algorithm is used for finding out whether or not a system is in a safe state. This
algorithm can be described as follows:
Step 1: Let Work and Finish be vectors of length m and n, respectively. Initialize
Work = Available and Finish[i] = false for i = 0, 1, ..., n−1.
Step 2: Find an index i such that Finish[i] == false and Needi ≤ Work. If no such i
exists, go to step 4.
Step 3: Work = Work + Allocationi; Finish[i] = true; go to step 2.
Step 4: If Finish[i] == true for all i, then the system is in a safe state.
This algorithm may require an order of m × n² operations to determine whether a state is
safe, whereas the cycle-detection algorithm used with the resource-allocation-graph
scheme requires an order of n² operations.
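The safety algorithm can be sketched as a function (the `is_safe` name is illustrative; the example state is a classic 5-process, 3-resource-type snapshot commonly used to illustrate the banker's algorithm):

```python
# Safety algorithm: repeatedly find a process whose remaining need can be met
# from the currently free resources, pretend it runs to completion, and
# reclaim its allocation. The state is safe iff every process can finish.
def is_safe(available, allocation, need):
    n, m = len(allocation), len(available)
    work = list(available)      # Step 1: Work = Available
    finish = [False] * n        #         Finish[i] = false for all i
    while True:
        # Step 2: find i with Finish[i] == false and Need_i <= Work
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Step 3: Pi completes and releases its allocation
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                break
        else:
            break               # no such i: go to step 4
    return all(finish)          # Step 4: safe iff all processes could finish

allocation = [[0,1,0],[2,0,0],[3,0,2],[2,1,1],[0,0,2]]
maximum    = [[7,5,3],[3,2,2],[9,0,2],[2,2,2],[4,3,3]]
available  = [3,3,2]
need = [[maximum[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]
print(is_safe(available, allocation, need))   # this classic state is safe
```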
Resource-Request Algorithm
Let Requesti be the request vector for process Pi. If Requesti[j] == k, then process Pi
wants k instances of resource type Rj. When a request for resources is made by
process Pi, the following actions are taken:
Step 1: If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the
process has exceeded its maximum claim.
Step 2: If Requesti ≤ Available, go to step 3. Otherwise, Pi must wait, since the
resources are not available.
Step 3: The system pretends to have allocated the requested resources to process Pi by
modifying the state as follows:
Available = Available − Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi − Requesti;
If the resulting state is safe, the transaction is completed and Pi is allocated its
resources. Otherwise, Pi must wait and the old resource-allocation state is restored.
Deadlock Detection
• An algorithm that examines the state of the system to determine whether a deadlock
has occurred
To detect and recover from a deadlock, the system must use a deadlock detection
algorithm to check whether a deadlock has occurred and between which processes.
Then it should use a deadlock recovery mechanism to recover from the deadlock.
If all resources have only a single instance, then we can define a deadlock- detection
algorithm that uses a variant of the resource-allocation graph, called a wait-for graph.
We obtain this graph from the resource-allocation graph by removing the resource
nodes and collapsing the appropriate edges.
For several instances of each resource type, the detection algorithm employs data
structures similar to those of the banker's algorithm:
• Available: A vector of length m indicates the number of available resources of each
type.
• Allocation: An n×m matrix defines the number of resources of each type currently
allocated to each process.
• Request: An n×m matrix indicates the current request of each process. If Request[i][j]
equals k, then process Pi is requesting k more instances of resource type Rj.
Algorithm
Step 1: Let Work and Finish be vectors of length m and n, respectively. Initialize
Work = Available. For i = 0, 1, ..., n−1, if Allocationi ≠ 0, then Finish[i] = false;
otherwise, Finish[i] = true.
Step 2: Find an index i such that Finish[i] == false and Requesti ≤ Work. If no such i
exists, go to step 4.
Step 3: Work = Work + Allocationi; Finish[i] = true; go to step 2.
Step 4: If Finish[i] == false for some i, then the system is in a deadlocked state.
Moreover, if Finish[i] == false, then process Pi is deadlocked.
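The detection algorithm can be sketched much like the safety algorithm, substituting Request for Need and reporting which processes remain stuck (the `find_deadlocked` name is illustrative):

```python
# Deadlock detection for multiple resource instances: a process holding no
# resources cannot be part of a deadlock; any process whose outstanding
# request can be met is assumed to finish and release what it holds.
def find_deadlocked(available, allocation, request):
    n, m = len(allocation), len(available)
    work = list(available)
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # reclaim Pi's resources
                finish[i] = True
                changed = True
    return [i for i in range(n) if not finish[i]]  # the deadlocked processes

# P0 holds R0 and wants R1; P1 holds R1 and wants R0: a circular wait.
print(find_deadlocked([0, 0], [[1, 0], [0, 1]], [[0, 1], [1, 0]]))
```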
Deadlock Recovery
1. Inform the operators or users of the system that a deadlock has occurred and
let them deal with it manually.
2. Let the system recover from the deadlock automatically.
In both cases, we either abort one or more processes to break the circular wait or
preempt resources of the deadlocked processes.
a) Process Termination
To eliminate deadlocks by aborting processes, we can use one of these two methods:
I. Abort all deadlocked processes.
II. Abort one process at a time until the deadlock cycle is eliminated.
b) Resource preemption
To eliminate deadlocks, the system preempts resources from certain deadlocked
processes and allocates them to other processes so that the deadlock cycle can be broken.
If preemption is required to deal with deadlocks, three issues must be considered.
1. Select the victim
Determine the order of preemption to minimize cost. Cost factors include the
number of resources a deadlocked process is holding and its execution time so far.
2. Roll back
Roll back the process to a safe state and restart it from that state.
3. Starvation
In a system where victim selection is based primarily on cost factors, it may happen that
the same process is always picked as the victim. As a result, this process never
completes its designated task, resulting in starvation.
SECTION B
1. Define deadlock. (2016,2015)
2. Define IPC. (2019)
3. Explain mutual exclusion. (2019)
4. How to detect deadlock when there is single instance of each resource type?
(2019)
5. What are the two factors on which the decision of when to invoke the deadlock
detection algorithm depends? (2019)
SECTION C
SECTION D
1. Explain deadlock avoidance algorithms. (2016,2015)
2. Explain Banker’s algorithm to avoid deadlocks. (2019)
What is an I/O burst?
I/O burst: a time interval when a process uses I/O devices only.
4. What is throughput?
The number of processes that are completed per time unit.
What is a Gantt chart?
A rectangle marked off horizontally in time units, marked off at the end of each job or
job segment. It shows the distribution of time bursts in time. It is used to determine total
and average statistics on jobs processed, by working through various scheduling
algorithms on it.
What is shortest-job-first (SJF) scheduling?
Provably optimum in waiting time. But there is no way to know the length of the next
CPU burst.
What is indefinite blocking?
It is also called starvation. A process with low priority may never get a chance to
execute. It can occur if the CPU is continually busy with higher-priority jobs.
9. What is “aging”?
A technique of gradually increasing the priority of processes that wait in the system for
a long time, so that they eventually get to run.
What is preemptive shortest-job-first (shortest-remaining-time-first) scheduling?
A preemptive scheduling algorithm that gives high priority to a job with the least amount
of CPU burst left to complete.
What is round-robin scheduling?
Each job is given a time quantum slice to run; if it is not completely done within that time
interval, the job is suspended and another job is continued. After all other jobs have
been given a quantum, the first job gets its chance again.
What is the critical-section problem?
To design an algorithm that allows at most one process into the critical section at a time,
without deadlock.
13. What is the main advantage of the layered approach to system design?
As in all cases of modular design, designing an operating system in a modular way has several
advantages. The system is easier to debug and modify because changes affect only limited
sections of the system rather than touching all sections of the operating system. Information is
kept only where it is needed and is accessible only within a defined and restricted area, so any
bugs affecting that data must be limited to a specific module or layer.
14. Describe the differences among short-term, medium-term, and long-term scheduling.
Short-term (CPU scheduler) - selects from the jobs in memory those jobs which are ready
to execute, and allocates the CPU to one of them.
Medium-term - swaps processes out of memory temporarily to reduce the degree of
multiprogramming, and later swaps them back in to continue execution.
Long-term (job scheduler) - determines which jobs are brought into memory for processing.
What resources may processes compete for?
CPU cycles, memory space, files, I/O devices, tape drives, printers.
What is deadlock?
A situation where every process is waiting for an event that can be triggered only by
another process.
17. What are the four necessary conditions needed before deadlock can occur?
a. At least one resource must be held in a non-sharable (mutual-exclusion) mode.
b. A process holding at least one resource is waiting for more resources held by other
processes.
c. Resources cannot be preempted; they are released only voluntarily.
d. A circular chain of waiting processes must exist, each waiting for a resource held by
the next process in the chain.
How can the system recover from deadlock?
a. Preempt resources from some deadlocked processes.
b. Abort a process.
If the same process is always chosen as the victim, it may be “starved.”