Operating System Notes
Operating System
An operating system is a program that manages the computer's hardware.
It provides a basis for application programs and acts as an intermediary between the computer user
and the computer hardware.
The operating system controls and coordinates the use of the hardware among the various
application programs for the various users.
A slightly different view of an operating system emphasizes the need to control the various I/O
devices and user programs. An operating system is a control program. A control program manages
the execution of user programs to prevent errors and improper use of the computer. It is
especially concerned with the operation and control of I/O devices.
Computer System Architecture
Computer systems can be organized in a number of different ways, which we can categorize
roughly according to the number of general-purpose processors used.
1. Single-Processor System
On a single processor system, there is one main CPU capable of executing a general-purpose
instruction set, including instructions from user processes. Almost all single processor
systems have other special-purpose processors as well. They may come in the form of
device-specific processors, such as disk, keyboard, and graphics controllers; or, on
mainframes, they may come in the form of more general-purpose processors, such as I/O
processors that move data rapidly among the components of the system.
2. Multiprocessor System
A multiprocessor system is also known as a parallel system, multicore system, or tightly
coupled system. Such systems have two or more processors in close communication, sharing
the computer bus and sometimes the clock, memory, and peripheral devices.
Increased Throughput: By increasing the number of processors, we expect to get more work done
in less time. The speed-up ratio with N processors is not N, however; rather, it is less than N.
When multiple processors cooperate on a task, a certain amount of overhead is incurred in
keeping all the parts working correctly.
Economy of Scale: Multiprocessor systems can cost less than equivalent multiple single-
processor systems, because they can share peripherals, mass storage, and power supplies. If
several programs operate on the same set of data, it is cheaper to store those data on one
disk and to have all the processors share them than to have many computers with local disks
and many copies of the data.
Increased Reliability: If functions can be distributed properly among several processors, then
the failure of one processor will not halt the system, only slow it down. If we have ten
processors and one fails, then each of the remaining nine processors can pick up a share of
work of the failed processor. Thus, the entire system runs only 10% slower, rather than
failing altogether.
Multiprogramming
An OS provides the environment within which programs are executed. One of the important
aspects of an OS is the ability to multiprogram. A single program cannot, in general, keep either the
CPU or the I/O devices busy all the time. Multiprogramming increases CPU utilization by organizing
jobs (code and data) so that the CPU always has one to execute.
The idea behind multiprogramming is that the OS keeps several jobs in memory
simultaneously. The OS picks and begins to execute one of the jobs in memory. Eventually the job
may have to wait for some task, such as an I/O operation, to complete.
In a non-multiprogramming system, the CPU would sit idle. In a multiprogramming system, the OS
simply switches to and executes another job. When that job needs to wait, the CPU switches to
yet another job, and so on. Eventually the first job finishes waiting and gets the CPU back. As long as at
least one job needs to execute, the CPU is never idle.
Operating System Services
Following are the functions/services provided by the OS from the user's perspective:
1. User Interface
Almost all operating systems have a user interface (UI). This interface can take several forms. One is a
command-line interface (CLI), which uses text commands and a method for entering them.
Another is a batch interface, in which commands and directives to control those commands
are entered into files, and those files are executed. Most commonly, a Graphical User
Interface (GUI) is used. Here, the interface is a window system with a pointing device to
direct I/O, choose from menus, and make selections, and a keyboard to enter text.
2. Program Execution
The system must be able to load a program into memory and run that program. The program
must be able to end its execution, either normally or abnormally (indicating error).
3. I/O Operation
A running program may require I/O, which may involve a file or an I/O device. For efficiency
and protection, users usually cannot control I/O devices directly. Therefore, the OS must provide
a means to do I/O.
4. File System Manipulation
Processes need to read and write files and directories. They also need to create and delete
them by name, search for a given file, and list file information. Finally, some operating systems include
permission management to allow or deny access to files or directories based on file
ownership.
5. Communication
There are many circumstances in which one process needs to exchange information with
another process. Such communication may occur between processes that are executing on the
same computer or between processes that are executing on different computer systems tied
together by a computer network. Communication may be implemented by shared
memory, in which two or more processes read and write to a shared section of memory, or by
message passing, in which packets of information in predefined formats are moved between
processes by the OS.
6. Error Detecting
For each type of error, the OS should take the appropriate action to ensure correct and
consistent computing.
Computer System Operation
For a computer to start running—for instance, when it is powered up or rebooted—it needs to
have an initial program to run. This initial program, or bootstrap program, tends to be simple.
Typically, it is stored within the computer hardware in read-only memory (ROM) or electrically
erasable programmable read-only memory (EEPROM), known by the general term firmware. It
initializes all aspects of the system, from CPU registers to device controllers to memory contents.
The bootstrap program must know how to load the operating system and how to start executing
that system. To accomplish this goal, the bootstrap program must locate the operating-system
kernel and load it into memory. The kernel is the program at the core of a computer's operating
system; it acts as an intermediary between software and hardware and controls almost every
operation.
Process
A program which is in an execution state is known as a process. A process will need certain
resources, such as CPU time, memory, files, and I/O devices, to accomplish its task. These resources
are allocated to the process either when it is created or while it is executing. The OS lets us
create, schedule, and terminate the processes that use the CPU. A process created by the
main process is called a child process.
Process Architecture
Stack: The stack stores temporary data such as function parameters, return addresses,
and local variables.
Heap: The heap holds dynamically allocated memory, which the process may use during its run time.
Data: The data section contains global and static variables.
Text: The text section includes the current activity, which is represented by the value of
the Program Counter.
Process State
A process state is a condition of the process at a specific instant of time. It also defines the current
position of the process. A process may be in one of the
following states:
1. New: A new process is created when a program is loaded from
secondary memory (e.g., a hard disk) into primary memory
(RAM).
2. Ready: In the ready state, the process has been loaded into
primary memory and is ready for execution.
3. Waiting: The process is waiting for the allocation of CPU
time and other resources for execution.
4. Running: The process is in an execution state.
5. Terminated: The terminated state specifies the time when a
process is terminated or ended.
FIGURE 3: PROCESS STATE
*6. Blocked: This is a time interval during which a process is waiting for an event, such as an I/O operation, to
complete.
*7. Suspended: The suspended state defines the time when a process is ready for execution but has
not been placed in the ready queue by the OS.
Process Control Block (PCB)
PCB stands for Process Control Block, also called Task Control Block. It is a data structure that
is maintained by the operating system for every process. Each PCB is identified by an
integer Process ID (PID). It stores all the information required to keep track of the
running processes.
The PCB is also accountable for storing the contents of the processor registers. These are saved when the
process moves out of the running state and restored when it returns to it. The information is quickly
updated in the PCB by the OS as soon as the process makes a state transition.
A PCB keeps all the information needed to keep track of a process as listed below:
Process State: The current state of the process that may be new, ready, running, waiting, halted,
and so on.
Program Counter: The counter indicates the address of the next instruction to be
executed for this process.
CPU Registers: The contents of the various CPU registers, which must be saved when the
process leaves the running state so that it can later resume execution correctly.
CPU-Scheduling Information: This information includes the process priority, pointers
to scheduling queues, and any other scheduling parameters.
Memory-Management Information: This information may include such items as
the values of the base and limit registers and the page tables, or the segment
tables, depending on the memory system used by the operating system.
Accounting Information: This information includes the amount of CPU time
used for process execution, time limits, account numbers, job or process
numbers, etc.
I/O Status Information: This information includes the list of I/O devices allocated to the process, a
list of open files, and so on.
FIGURE 4: PROCESS CONTROL BLOCK (PCB)
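To make this concrete, a PCB can be sketched as a C structure. The sketch below is illustrative only; the field names and array sizes are assumptions, not the layout of any real kernel.

/* A minimal, illustrative PCB; field names and sizes are assumptions. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int            pid;              /* unique process ID                 */
    proc_state_t   state;            /* current process state             */
    unsigned long  program_counter;  /* address of the next instruction   */
    unsigned long  registers[16];    /* saved CPU registers               */
    int            priority;         /* CPU-scheduling information        */
    unsigned long  base, limit;      /* memory-management registers       */
    unsigned long  cpu_time_used;    /* accounting information            */
    int            open_files[16];   /* I/O status: open file descriptors */
    struct pcb    *next;             /* link to the next PCB in a queue   */
} pcb_t;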
Process Scheduling
Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy. Process scheduling is an essential part of multiprogramming operating systems. Such
operating systems allow more than one process to be loaded into executable memory at a time,
and the loaded processes share the CPU using time multiplexing.
Process Scheduling Queues
The OS maintains all PCBs in Process Scheduling Queues. The OS maintains a separate queue for
each of the process states and PCBs of all processes in the same execution state are placed in the
same queue. When the state of a process is changed, its PCB is unlinked from its current queue
and moved to its new state queue.
The Operating System maintains the following important process scheduling queues:
Job Queue: This queue keeps all the processes in the system.
Ready Queue: This queue keeps a set of all processes residing in main memory, ready and waiting
to execute. A new process is always put in this queue.
Device Queue: The processes which are blocked due to unavailability of an I/O device constitute
this queue.
In the queueing diagram of process scheduling, each rectangular box represents a queue: the job
queue, the ready queue, and a set of device queues. The circles represent the resources that serve the
queues, and the arrows indicate the flow of processes in the system.
A new process is initially put in the ready queue. It waits there until it is selected for execution, or
dispatched. Once the process is allocated the CPU and is executing, one of several events could
occur:
1. The process could issue an I/O request and then be placed in an I/O queue.
2. The process could create a new child process and wait for the child's termination.
3. The process could be removed forcibly from the CPU, as a result of an interrupt, and be put back
in the ready queue.
In the first two cases, the process eventually switches from the waiting state to the ready state
and is then put back in the ready queue. A process continues this cycle until it terminates, at which
time it is removed from all queues and has its PCB and resources deallocated.
Scheduler
Schedulers are special system software which handle process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to run.
Schedulers are of three types:
1. Long Term Scheduler: It is also called the job scheduler. A long-term scheduler determines which
programs are admitted to the system for processing. It selects processes from secondary
memory and loads them into main memory for execution, where they become available for CPU
scheduling.
2. Short Term Scheduler: It is also called the CPU scheduler. Its main objective is to increase
system performance in accordance with the chosen set of criteria. It changes a process from the
ready state to the running state. Short-term schedulers, also known as dispatchers, make
the decision of which process to execute next. Short-term schedulers are faster than long-term
schedulers.
3. Medium Term Scheduler: Medium-term scheduling is a part of swapping. It removes
processes from memory and thereby reduces the degree of multiprogramming. The medium-term
scheduler is in charge of handling the swapped-out processes. A running process may become
suspended if it makes an I/O request. A suspended process cannot make any progress towards
completion. In this condition, to remove the process from memory and make space for other
processes, the suspended process is moved to secondary storage. This process is
called swapping, and the process is said to be swapped out or rolled out.
Context Switch
A context switch is the mechanism for storing and restoring the state, or context, of the CPU in the
Process Control Block so that a process's execution can be resumed from the same point at a later time.
Using this technique, a context switcher enables multiple processes to share a single CPU. Context
switching is an essential feature of a multitasking operating system.
CPU Scheduling
CPU scheduling is the process of determining which process will own the CPU for execution while
another process is on hold. The main task of CPU scheduling is to make sure that whenever the
CPU becomes idle, the OS selects one of the processes available in the ready queue for
execution. The selection is carried out by the CPU scheduler, which selects one of the
processes in memory that are ready for execution.
Types of CPU Scheduling-
1. Preemptive Scheduling: In preemptive scheduling, the tasks are mostly assigned
priorities. Sometimes it is important to run a higher-priority task before a lower-priority
task, even if the lower-priority task is still running. The lower-priority task is held for some
time and resumes when the higher-priority task finishes its execution.
2. Non-Preemptive Scheduling: In this type of scheduling method, the CPU is allocated to a
specific process. The process that keeps the CPU busy will release the CPU either by switching
context or by terminating. It is the only method that can be used on various hardware platforms.
That's because it doesn't need special hardware (for example, a timer) like preemptive scheduling.
When Is Scheduling Preemptive or Non-Preemptive?
To determine if scheduling is preemptive or non-preemptive, consider these four parameters:
1. A process switches from the running state to the waiting state. ➔ Non-Preemptive
2. A process switches from the running state to the ready state. ➔ Preemptive
3. A process switches from the waiting state to the ready state. ➔ Preemptive
4. A process finishes its execution and terminates. ➔ Non-Preemptive
Dispatcher
The dispatcher is the module that gives control of the CPU to the process selected by the scheduler. The dispatcher should be fast so
that it can run on every context switch. Dispatch latency is the amount of time needed by the CPU
scheduler to stop one process and start another.
Functions performed by the dispatcher:
1. Context Switching
2. Switching to user mode
3. Moving to the correct location in the newly loaded program.
Types of CPU Scheduling Algorithm
1. First Come First Serve (FCFS)
2. Shortest-Job-First (SJF)
3. Priority Scheduling
4. Round Robin Scheduling
First Come First Serve (FCFS) Scheduling
In FCFS scheduling, the process that requests the CPU first is allocated the CPU first.
Example:
Consider the following set of processes that arrive in the ready queue at time 0, with the length of
the CPU burst given in milliseconds:
Process   Burst Time
P1        24
P2        3
P3        3
If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get the result shown
in the following Gantt chart, which is a bar chart that illustrates a particular schedule, including the
start and finish times of each of the participating processes.
P1 P2 P3
0 24 27 30
Gantt Chart
The waiting time is 0 milliseconds for P1, 24 milliseconds for P2, and 27 milliseconds for P3, so the
average waiting time is (0 + 24 + 27)/3 = 17 milliseconds. If the processes arrive in the order
P2, P3, P1, however, we get the following schedule, with an average waiting time of
(6 + 0 + 3)/3 = 3 milliseconds:
P2 P3 P1
0 3 6 30
Gantt Chart
Thus, the average waiting time under an FCFS policy is generally not minimal and may vary
substantially if the processes’ CPU burst times vary greatly.
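The waiting-time arithmetic above can be verified with a few lines of C. This is a sketch for this specific example only; the burst times are hard-coded from the table.

#include <stdio.h>

/* FCFS waiting times for the example above (P1=24, P2=3, P3=3): each
   process waits for the total burst time of the processes ahead of it. */
int main(void) {
    int burst[] = {24, 3, 3};
    int n = 3, wait = 0, total = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d ms\n", i + 1, wait);
        total += wait;
        wait += burst[i];   /* the next process also waits for this one */
    }
    printf("average waiting time = %.2f ms\n", (double)total / n);
    return 0;
}

Run with the order P1, P2, P3, this prints waiting times 0, 24, and 27 and an average of 17.00 ms; reordering the burst array to {3, 3, 24} reproduces the 3 ms average of the second chart.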
Shortest-Job-First (SJF) Scheduling
The SJF algorithm associates with each process the length of the process's next CPU burst. When the
CPU is available, it is assigned to the process that has the smallest next CPU burst. Consider the
following set of processes, with the length of the CPU burst given in milliseconds:
Process   Burst Time
P1        6
P2        8
P3        7
P4        3
Using SJF scheduling, we would schedule these processes according to the following Gantt chart:
P4 P1 P3 P2
0 3 9 16 24
Gantt Chart
The average waiting time under SJF is (3 + 16 + 9 + 0)/4 = 7 milliseconds. By comparison, if we were
using the FCFS scheduling scheme, the average waiting time would be 10.25 milliseconds.
If two processes have the same burst time, the FCFS scheduling algorithm is used to break the tie
between them. For example, suppose the burst time of P2 is now 3 milliseconds instead of 8
milliseconds.
Process   Burst Time
P1        6
P2        3
P3        7
P4        3
Using SJF scheduling, we would schedule these processes according to the following Gantt chart:
P2 P4 P1 P3
0 3 6 12 19
Gantt Chart
Example:
Consider the following four processes, with the length of the CPU burst given in milliseconds along
with their arrival times. In this case, processes will be preempted (preemptive SJF):
Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5
Using preemptive SJF scheduling, we would schedule these processes according to the following
Gantt chart:
P1 P2 P4 P1 P3
0 1 5 10 17 26
Gantt Chart
Here, at time t = 0 milliseconds P1 arrives in the ready queue and starts its execution. After 1
millisecond P2 arrives; P1 is preempted, and its remaining burst time is reduced to 7 milliseconds.
Since P2 has the lowest remaining burst time among all the processes (4 milliseconds), it finishes its
execution first, at t = 5 milliseconds. By that time, all the processes have arrived in the ready queue.
After P2, P4, with a burst time of 5 milliseconds, starts its execution and terminates at t = 10
milliseconds. Then P1 executes and finishes its remaining work at t = 17 milliseconds. Finally, P3
executes for its 9 milliseconds and finishes at t = 26 milliseconds.
The average waiting time for this preemptive schedule is ((10 − 1) + (1 − 1) + (17 − 2) + (5 − 3))/4 =
6.5 milliseconds. Non-preemptive SJF scheduling would result in an average waiting time of 7.75 milliseconds.
Priority Scheduling
Priority Scheduling is a method of scheduling processes that is based on priority. In this algorithm,
the scheduler selects the tasks to work as per the priority.
Processes with higher priority are carried out first, whereas jobs with equal priorities
are carried out on a round-robin or FCFS basis. Priority may depend upon memory requirements, time
requirements, etc.
The SJF algorithm is a special case of the general priority-scheduling algorithm, in which the priority (p)
is the inverse of the (predicted) next CPU burst: the larger the CPU burst, the lower the priority,
and vice versa.
Priority scheduling can either be preemptive or non-preemptive. A major problem with this
scheduling algorithm is indefinite blocking, or starvation. A process that is ready to run but
waiting for the CPU can be considered blocked. A priority scheduling algorithm can leave some low
priority processes waiting indefinitely. In a heavily loaded computer system, a steady stream of
higher-priority processes can prevent a low-priority process from ever getting the CPU. Generally,
one of two things will happen: either the process will eventually be run, or the computer
system will eventually crash and lose all unfinished low-priority processes.
A solution to the problem of indefinite blockage of low-priority processes is aging. Aging involves
gradually increasing the priority of processes that wait in the system for a long time.
Example:
Consider the following five processes, with the length of the CPU burst given in milliseconds along
with their priorities (a lower number means a higher priority).
Process   Priority   Burst Time
P1        3          10
P2        1          1
P3        4          2
P4        5          1
P5        2          5
Using priority scheduling, we would schedule these processes according to the following Gantt
chart:
P2 P5 P1 P3 P4
0 1 6 16 18 19
Gantt Chart
The average waiting time here is (6 + 0 + 16 + 18 + 1)/5 = 8.2 milliseconds. This scheduling could
have been preemptive if arrival times had also been given.
Round Robin Scheduling
The name of this algorithm comes from the round-robin principle, where each person gets an
equal share of something in turn. It is one of the oldest and simplest scheduling algorithms, and it is
mostly used for multitasking and time-sharing systems.
In round-robin scheduling, each ready task runs turn by turn in a cyclic queue for a limited
time slice, or time quantum, that is generally from 10 to 100 milliseconds in length. This algorithm
also offers starvation-free execution of processes.
It is similar to FCFS scheduling, but preemption is added to enable the system to switch between
processes.
To implement RR scheduling, we again treat the ready queue as a FIFO queue of processes. New
processes are added to the tail of the ready queue. The CPU scheduler picks the first process from
the ready queue, sets a timer to interrupt after 1 time-quantum, and dispatches the process. One
of two things will then happen:
The process may have a CPU burst of less than 1 time-quantum. In this case, the process itself will
release the CPU voluntarily. The scheduler will then proceed to the next process in the ready
queue.
If the CPU burst of the currently running process is longer than 1 time-quantum, the timer will go
off and will cause an interrupt to the operating system. A context switch will be executed, and the
process will be put at the tail of the ready queue. The CPU scheduler will then select the next
process in the ready queue.
Example:
Consider the following set of processes that arrive at time 0, with the length of the CPU burst given
in milliseconds; the time quantum is 4 milliseconds.
Process   Burst Time
P1        24
P2        3
P3        3
Using RR scheduling, we would schedule these processes according to the following Gantt chart:
P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30
Gantt Chart
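Here the waiting times are 6 ms for P1 (10 − 4), 4 ms for P2, and 7 ms for P3, so the average waiting time is 17/3 ≈ 5.66 milliseconds. The C sketch below replays this schedule; it scans the processes cyclically, which is equivalent to a FIFO ready queue in this example because all processes arrive at time 0.

#include <stdio.h>

/* Round-robin replay of the example above (P1=24, P2=3, P3=3, quantum=4). */
int main(void) {
    int remaining[] = {24, 3, 3};
    int n = 3, quantum = 4, t = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;           /* already finished */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            printf("t=%2d: P%d runs for %d ms\n", t, i + 1, slice);
            t += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) done++;
        }
    }
    printf("all processes finish at t=%d ms\n", t);
    return 0;
}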
System Call
A system call is a mechanism that provides an interface between a process and the operating
system. It is a programmatic method by which a computer program requests a service from the
kernel of the OS.
System calls offer the services of the operating system to user programs via an API (Application
Programming Interface). System calls are the only entry points into the kernel.
Example:
For example, suppose we need to write a program that reads data from one file and copies it into
another file. The first information that the program requires is the names of the two files: the input
file and the output file. In an interactive system, this type of program execution requires several system
calls to the OS:
The first call is to write a prompting message on the screen.
The second is to read from the keyboard the characters that define the two file names.
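A minimal C sketch of such a copy program is shown below. For brevity it takes the two file names from the command line rather than prompting for them; the 4096-byte buffer is an arbitrary choice, and error handling is kept short.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s <input> <output>\n", argv[0]);
        exit(1);
    }
    int in  = open(argv[1], O_RDONLY);                            /* open()  */
    int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0) { perror("open"); exit(1); }

    char buf[4096];
    ssize_t nread;
    while ((nread = read(in, buf, sizeof buf)) > 0)               /* read()  */
        write(out, buf, nread);                                   /* write() */

    close(in);                                                    /* close() */
    close(out);
    return 0;
}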
How System Call Works?
1. A process executes in user mode until a system call interrupts it.
2. Then the system call is executed in kernel mode on a priority basis.
3. Once the system call execution is over, control returns to user mode.
4. The execution of the user process then resumes in user mode.
1. Process Control
Process control system calls handle jobs such as creating and terminating processes,
loading and executing programs, and waiting for events.
Function:
End, abort
Load, execute
Create and terminate process
Wait for time or event; signal event
Allocate and free memory
Ex: For UNIX OS – fork(), exit(), wait()
For Windows OS – CreateProcess(), ExitProcess(), WaitForSingleObject()
2. File Management
File management system calls handle file manipulation jobs like creating a file, reading, and
writing, etc.
Function:
Create a file
Delete file
Open and Close file
Read, Write, and Reposition
Get and set file attributes
Ex: For UNIX OS – open(), read(), write(), close()
For Windows OS – CreateFile(), ReadFile(), WriteFile(), CloseHandle().
3. Device Management
Device management system calls handle device manipulation jobs like reading from device buffers,
writing into device buffers, etc.
Function:
Request and release device
Logically attach/detach devices
Get and Set device attributes
Read, Write, Reposition
Ex: For UNIX OS – ioctl(), read(), write()
For Windows OS – SetConsoleMode(), ReadConsole(), WriteConsole()
4. Information Maintenance
These system calls handle information and its transfer between the OS and the user program.
Function:
Get or set time and date
Get and set process attributes
Get or set system data
Ex: For UNIX OS – getpid(), alarm(), sleep()
For Windows – GetCurrentProcessID(), SetTimer(), Sleep()
5. Communication
These types of system calls are used especially for interprocess communication.
Function:
Create, delete communication connection
Send, receive message
Help OS to transfer status information
Attach or detach remote device
Ex: For UNIX OS – pipe(), shm_open(), mmap()
For Windows OS – CreatePipe(), CreateFileMapping(), MapViewOfFile()
6. Protection
Provides a mechanism for controlling access to the resources provided by a computer
system.
Function:
Manipulate the permission settings of resources such as files and disks
Allow users to grant or deny access to certain resources
Ex: For UNIX OS – chmod(), umask(), chown()
For Windows OS – SetFileSecurity(), InitializeSecurityDescriptor(),
SetSecurityDescriptorGroup()
Important System Calls in OS
fork()
Processes use this system call to create new processes that are copies of themselves. With the help of
this system call, the parent process creates a child process; after the fork, both parent and child execute
the same program, but as separate processes. It is the primary method of process creation in UNIX-like
operating systems.
exec()
After a fork() system call, one of the two processes typically uses the exec() system call to replace
the process’s memory space with a new program. The exec() system call loads a binary file into
memory (destroying the memory image of the program containing the exec() system call) and
starts its execution. In this manner, the two processes are able to communicate and then go their
separate ways. The parent can then create more children; or, if it has nothing else to do while the
child runs, it can issue a wait() system call to move itself off the ready queue until the termination
of the child.
wait()
In some systems, a process needs to wait for another process to complete its execution. This type
of situation occurs when a parent process creates a child process, and the execution of the parent
process remains suspended until the child process completes.
The suspension of the parent process automatically occurs with a wait() system call. When the
child process ends execution, the control moves back to the parent process.
exit()
The exit() system call is used to terminate program execution. In a multi-threaded
environment in particular, this call signals that the thread's execution is complete. The OS reclaims the
resources that were used by the process after the exit() system call.
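The C sketch below combines the four calls just described: the parent fork()s a child, the child exec()s a new program (ls is an arbitrary choice here), and the parent wait()s until the child exit()s.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                /* create a copy of this process */

    if (pid < 0) {                     /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {             /* child process */
        execlp("ls", "ls", "-l", (char *)NULL);  /* replace memory image */
        perror("execlp");              /* reached only if exec fails */
        exit(1);
    } else {                           /* parent process */
        wait(NULL);                    /* leave the ready queue until the child ends */
        printf("child %d finished\n", (int)pid);
    }
    return 0;
}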
Interprocess Communication
Processes executing concurrently in the operating system may be either independent processes or
cooperating processes.
Independent Process: The execution of one process does not affect the execution of other
processes, and it does not share any kind of data with them.
Cooperating Process: The execution of one process affects the execution of other processes because
they share common variables, memory, code, or resources in parallel.
For cooperating processes, we need Interprocess Communication.
There are several reasons for providing an environment that allows process cooperation:
1. Information sharing: Since several users may be interested in the same piece of information
(for instance, a shared file), we must provide an environment to allow concurrent access to
such information.
2. Computation speedup: If we want a particular task to run faster, we must break it into
subtasks, each of which will execute in parallel with the others. Notice that such a
speedup can be achieved only if the computer has multiple processing cores.
3. Modularity: We may want to construct the system in a modular fashion, dividing the system
function into separate processes or threads.
4. Convenience: Even an individual user may work on many tasks at the same time. For instance,
a user may be editing, listening to music, and compiling in parallel.
Cooperating processes require an interprocess communication (IPC) mechanism that will allow
them to exchange data and information. There are two fundamental models of interprocess
communication: shared memory and message passing.
Shared Memory
In the shared-memory model, a region of memory that is shared by cooperating processes is
established. Processes can then exchange information by reading and writing data to the shared
region. Shared memory can be faster than message passing, since message-passing systems are
typically implemented using system calls and thus require the more time-consuming task of kernel
intervention. In shared-memory systems, system calls are required only to establish shared
memory regions. Once shared memory is established, all accesses are treated as routine memory
accesses, and no assistance from the kernel is required.
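As a minimal sketch of this model, the C program below creates a POSIX shared-memory region with shm_open() and mmap(), the calls already listed in the system-call examples above. The region name "/os_notes_demo" and its size are arbitrary choices, and error checking is omitted for brevity; a second process that opens the same name would see the same bytes. On some systems the program must be linked with -lrt.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    const char *name = "/os_notes_demo";
    const size_t size = 4096;

    int fd = shm_open(name, O_CREAT | O_RDWR, 0644);  /* create the region */
    ftruncate(fd, size);                              /* set its size      */

    /* map the region into this process's address space */
    char *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    strcpy(ptr, "hello from the producer");  /* an ordinary memory write */
    printf("shared region contains: %s\n", ptr);

    munmap(ptr, size);
    shm_unlink(name);                        /* remove the region */
    return 0;
}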
Message Passing
In the message-passing model, communication takes place by means of messages exchanged
between the cooperating processes. Message passing is useful for exchanging smaller amounts of
data, because no conflicts need be avoided. Message passing is also easier to implement in a
distributed system than shared memory.
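The C sketch below shows message passing within one machine using the pipe() call listed earlier: the child writes a message into the pipe and the parent reads it back. The message text is arbitrary.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    pipe(fds);                 /* fds[0] is the read end, fds[1] the write end */

    if (fork() == 0) {         /* child: send a message */
        close(fds[0]);
        const char *msg = "hello via message passing";
        write(fds[1], msg, strlen(msg) + 1);
        close(fds[1]);
        _exit(0);
    }

    close(fds[1]);             /* parent: receive the message */
    char buf[64];
    read(fds[0], buf, sizeof buf);
    printf("parent received: %s\n", buf);
    close(fds[0]);
    wait(NULL);
    return 0;
}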
Process Synchronization
Process synchronization is the task of coordinating the execution of processes in such a way that no
two processes can access the same shared data and resources at the same time.
It is especially needed in multiprocessing systems, when multiple processes are running together and
more than one process tries to gain access to the same shared resource or data at the same time.
This can lead to inconsistency of shared data, because the changes made by one process are not
necessarily reflected when other processes access the same shared data. To avoid this kind of
inconsistency, the processes need to be synchronized.
The process synchronization problem arises in the case of cooperating processes, because resources
are shared among cooperating processes.
Race Condition
When more than one process executes the same code, or accesses the same memory or a
shared variable, in parallel, there is a possibility that the output or the value of the shared
variable will be wrong; the processes are, in effect, racing, and the result depends on which one
finishes first. This condition is known as a race condition. It mainly occurs inside the critical
section.
Example: Let P1 be a process that is trying to add 1 to a variable X (where X = 5), whereas P2 is
a process that is trying to subtract 1 from the same variable X.
Assume that, for some reason, before printing the new value of X (changed from 5 to 6), P1 is
preempted and the CPU is allotted to process P2. Now P2 starts its execution and subtracts 1 from
X = 5, making it 4, but it also gets preempted before printing. Due to context switching, the CPU is
allotted to P1 again; it continues its execution from where it was preempted and prints X = 6. After
the completion of P1, the CPU is allotted to P2 again; it resumes its remaining execution and prints
X = 4.
Due to the race condition, the value of X is printed incorrectly; the final value that should be
printed is 5.
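The C sketch below reproduces this situation with two threads instead of two processes; the loop counts are arbitrary and only serve to make the interleaving visible. Because x++ and x-- each compile to a load-modify-store sequence, the final value is unpredictable.

#include <pthread.h>
#include <stdio.h>

long x = 5;   /* the shared variable from the example */

void *adder(void *arg)      { for (int i = 0; i < 100000; i++) x++; return NULL; }
void *subtractor(void *arg) { for (int i = 0; i < 100000; i++) x--; return NULL; }

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, adder, NULL);
    pthread_create(&t2, NULL, subtractor, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("x = %ld\n", x);  /* should be 5, but often is not */
    return 0;
}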
So, to overcome this situation, we need to synchronize the processes.
Critical Section
Each process has a segment of code in which the process may be changing common variables,
updating a table, writing a file, and so on; this section, which accesses shared data and resources
that other processes may also access, is called the critical section. The important feature of the
system is that when one process is executing in its critical section, no other process is allowed to
execute in its critical section.
Entry Section: It is part of process which decides the entry of a particular process.
Exit Section: It allows the other processes that are waiting in the entry section to enter the critical
section. It also ensures that a process that has finished its execution is removed through this
section.
Remainder Section: All the other parts of the code, which are not in the critical, entry, or exit
sections, are known as the remainder section, as sketched below.
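Below is a minimal sketch of this structure using a pthread mutex: the lock plays the role of the entry section and the unlock the role of the exit section, which makes the earlier race-condition example print 5 reliably.

#include <pthread.h>
#include <stdio.h>

long x = 5;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *adder(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* entry section */
        x++;                          /* critical section */
        pthread_mutex_unlock(&lock);  /* exit section */
        /* remainder section: everything else */
    }
    return NULL;
}

void *subtractor(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        x--;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, adder, NULL);
    pthread_create(&t2, NULL, subtractor, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("x = %ld\n", x);  /* now reliably prints 5 */
    return 0;
}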
There are some conditions that must be satisfied by any synchronization method (i.e., any solution
to the critical-section problem) to achieve process synchronization.
1. Mutual exclusion: If a process is executing in its critical section, then no other process is allowed to
execute in the critical section; i.e., out of a group of cooperating processes, only one process
can be in its critical section at a given point of time.
2. Progress: If no process is executing in its critical section and some processes wish to enter
their critical section, then only those processes that are not executing in their remainder
section can participate in deciding which will enter its critical section next, and this selection
cannot be postponed indefinitely.
No process running outside its critical section should block other processes from entering
their critical sections.
3. Bounded waiting: There exists a bound, or limit, on the number of times that other
processes are allowed to enter their critical sections after a process has made a request to enter
its critical section and before that request is granted.
4. No assumptions may be made about the number of CPUs or the hardware speed.
Deadlock
In a multiprogramming environment, several processes may compete for a finite number of
resources. A process requests resources; if the resources are not available at that time, the process
enters a waiting state. Sometimes, a waiting process is never again able to change state, because
the resources it has requested are held by other waiting processes. This situation is called a
deadlock.
Necessary Conditions for Deadlock
A deadlock situation can arise if the following four conditions hold simultaneously in a system:
1. Mutual exclusion: At least one resource must be held in a non-sharable mode; that is, only
one process at a time can use the resource. If another process requests that resource, the
requesting process must be delayed until the resource has been released.
2. Hold and wait: A process must be holding at least one resource and waiting to acquire
additional resources that are currently being held by other processes.
3. No preemption: Resources cannot be preempted; that is, a resource can be released only
voluntarily by the process holding it, after that process has completed its task.
4. Circular wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a
resource held by P1, P1 is waiting for a resource held by P2, ..., Pn−1 is waiting for a resource
held by Pn, and Pn is waiting for a resource held by P0.
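As a minimal illustration, the C sketch below makes two threads acquire two locks in opposite orders, satisfying all four conditions at once; the sleep() calls only widen the timing window so the deadlock occurs reliably. The program is expected to hang.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;

void *proc0(void *arg) {
    pthread_mutex_lock(&r1);   /* P0 holds R1 ...      */
    sleep(1);
    pthread_mutex_lock(&r2);   /* ... and waits for R2 */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}

void *proc1(void *arg) {
    pthread_mutex_lock(&r2);   /* P1 holds R2 ...      */
    sleep(1);
    pthread_mutex_lock(&r1);   /* ... and waits for R1 */
    pthread_mutex_unlock(&r1);
    pthread_mutex_unlock(&r2);
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, proc0, NULL);
    pthread_create(&t1, NULL, proc1, NULL);
    pthread_join(t0, NULL);    /* never returns: the threads are deadlocked */
    pthread_join(t1, NULL);
    return 0;
}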
Resource Allocation Graph (RAG)
Deadlocks can be described in terms of a directed graph called a resource-allocation graph.
• This graph contains all the information related to all the instances of each resource,
i.e., which resources are available and which are being used by processes.
• In the graph, a circle is used to represent a process, and a rectangle is used to represent
a resource.
[Figure legend: process vertices (circles), single-instance and multiple-instance resource vertices (rectangles), and processes in deadlock.]
FIGURE 11: SINGLE INSTANCE RESOURCE ALLOCATION GRAPH IN DEADLOCK
If there is a cycle in single instance resource allocation graph, then there must be a deadlock.
Here, in the above RAG (Fig 11), resource R2 is assigned to process P1, but P1 is requesting
resource R1. Also, resource R1 is assigned to process P2, but P2 is requesting resource R2. So the
above RAG forms a cycle, and hence P1 and P2 are in deadlock.
P1 → R1 → P2 → R2 → P1
FIGURE 12: SINGLE INSTANCE RAG WITHOUT DEADLOCK
Here in the above example (Fig 12), processes P1 and P2 have acquired resources R1 and R2, while
process P3 is waiting to acquire both resources. In this example, there is no deadlock because
there is no circular dependency.
So a cycle in a single-instance resource-allocation graph is a sufficient condition for deadlock.
Here in the above multiple-instance RAG (Fig 13) there are two cycles: resource R2 is assigned to
processes P1 and P2, and P1 is waiting for R1; R1 is assigned to process P2, and P2 is waiting for
resource R3; and resource R3 is assigned to process P3, but P3 is waiting for R2. Hence all the
processes (P1, P2, and P3) are in deadlock.
P1 → R1 → P2 → R3 → P3 → R2 → P1
P2 → R3 → P3 → R2 → P2
Now consider the resource-allocation graph in Fig 14. In this example, we also have a cycle:
P1 → R1 → P3 → R2 → P1
However, there is no deadlock. Process P4 may release its instance of resource type R2. That
resource can then be allocated to P3, breaking the cycle.
In summary, if a resource-allocation graph does not have a cycle, then the system is not in a
deadlocked state. If there is a cycle, then the system may or may not be in a deadlocked state,
especially in the case of multiple-instance resources.