
OS Unit-2 Notes


R20 CMR Technical Campus

UNIT-II
Process and CPU Scheduling - Process concepts and scheduling, Operations on
processes, Cooperating Processes, Threads, and Interprocess Communication,
Scheduling Criteria, Scheduling Algorithms, Multiple-Processor Scheduling.
System call interface for process management-fork, exit, wait, waitpid, exec

1. Process Concepts:
A process is a program in execution. A program by itself is not a process; a
program is a passive entity, such as a file containing a list of instructions stored on
disk (often called an executable file), whereas a process is an active entity, with a
program counter specifying the next instruction to execute and a set of associated
resources. A program becomes a process when an executable file is loaded into
memory. Two common techniques for loading executable files are double-clicking
an icon representing the executable file and entering the name of the executable
file on the command line (as in prog.exe or a.out).
Program:
• A program is a set of instructions.
• A program is a passive/static entity.
• A program has a longer life span; it is stored on the hard disk in the computer.

Process:
• When a program is executed, it is known as a process.
• A process is an active/dynamic entity.
• A process has a limited lifespan: it is created when execution starts and terminates when execution is finished.

Syeda Sumaiya Afreen pg. 1

Process memory is divided into four sections for efficient working:

Stack

Heap
Data
Text

Fig: Process in Memory

• The Text section contains the compiled program code.

• The Data section contains the global and static variables required to run the
process.

• The Heap is used for dynamic memory allocation, and is managed via
calls to new, delete, malloc, free, etc.
• The Stack is used for local variables. Space on the stack is reserved for local
variables when they are declared. It also deals with function calls.

PROCESS STATE DIAGRAM

Processes in the operating system can be in any of the following five states:

•NEW: A program present in secondary memory that is waiting to be picked up
by the OS and loaded into main memory is called a new process.
•READY: The process has been loaded into main memory by the OS. A process
present in the main memory is said to be in the ready state. There can be many


processes in the ready state.


•RUNNING: One of the processes from the ready state will be chosen by the OS,
depending upon the scheduling algorithm, and given to the processor. The
instructions within this process are executed by the processor, and that
process is said to be in the running state.

•WAITING: Whenever the running process requests access to I/O or needs other
events to occur, it enters the waiting state.

•TERMINATED: Whenever the running process completes its execution or is
aborted in the middle, the process enters the terminated state.

Fig: Diagram of process state

PROCESS CONTROL BLOCK

Process Control Block (PCB) is a data structure that contains information related
to a process. The PCB stores many data items that are needed for


efficient process management. These data items are explained with the help of the
given diagram:

Process ID
Process State
Program Counter
CPU Registers
List of Open Files
...
CPU Scheduling Information
Memory Management Information
I/O Status Information
Accounting Information
Fig: Process control block

1. Process ID: A unique identification number associated with each process.

2. Process State: This specifies the process state i.e. new, ready, running, waiting
or terminated.
3. Program Counter: This contains the address of the next instruction that needs
to be executed in the process.
4. CPU Registers: This specifies the registers that are used by the process. They


may include accumulators, index registers, stack pointers, general purpose


registers etc.
5. List of Open Files: These are the different files that are associated with the
process.

6. CPU Scheduling Information: The process priority, pointers to scheduling
queues, etc. are present in the CPU scheduling information. This may also include
other scheduling parameters.
7. Memory Management Information: The memory management information
includes the page tables or the segment tables depending on the memory system
used. It also contains the values of the base registers, limit registers, etc.
8. I/O Status Information: This information includes the list of I/O devices used
by the process, open file tables etc.
9. Accounting information: The time limits, account numbers, amount of CPU
used, process numbers etc. are all a part of the PCB accounting information.
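The PCB fields listed above can be sketched as a simple record. This is only an illustrative sketch in Python: the field names are hypothetical and do not reflect any real kernel's layout.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative PCB record; field names are hypothetical."""
    pid: int                                         # Process ID
    state: str = "new"                               # new/ready/running/waiting/terminated
    program_counter: int = 0                         # address of the next instruction
    registers: dict = field(default_factory=dict)    # saved CPU registers
    open_files: list = field(default_factory=list)   # list of open files
    priority: int = 0                                # CPU scheduling information
    base_register: int = 0                           # memory management information
    limit_register: int = 0
    cpu_time_used: int = 0                           # accounting information

pcb = PCB(pid=42)
pcb.state = "ready"   # updated by the OS as the process changes state
```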

PROCESS SCHEDULING
The act of determining which process in the ready state should be moved to the
running state is known as Process Scheduling.
The prime aim of the process scheduling system is to keep the CPU busy all the
time and to deliver minimum response time for all programs. To achieve this, the
scheduler must apply appropriate rules for swapping processes in and out of the
CPU. Scheduling falls into one of two general categories:

• Non-Preemptive Scheduling: In non-preemptive scheduling, once the CPU is
allocated to a process, it will not be taken back until the process completes its
execution.
• Pre-emptive Scheduling: In preemptive scheduling, the CPU can be taken


back from the process at any time during the execution of the process.

Scheduling Queues

• All processes, upon entering into the system, are stored in the Job Queue.

• Processes in the Ready state are placed in the Ready Queue.

• Processes waiting for a device to become available are placed in Device
Queues. There is a separate device queue for each I/O device.
A new process is initially put in the ready queue. It waits in the ready
queue until it is selected for execution (or dispatched) as per the scheduling
algorithm. Once the process is assigned to the processor and is executing, one
of the following events can occur:

• The process could issue an I/O request, and then be placed in the I/O
queue. (The process is placed in the waiting state.)
• The process could create a sub-process and wait for its termination. (The
process is placed in the waiting state.)
• The process could be removed forcibly from the CPU, as a result of an



interrupt, and be put back in the ready queue. (The process is placed in
ready state).

Fig: Queueing-diagram representation of process scheduling

Types of Schedulers

There are three types of schedulers available. They are:


1. Long Term Scheduler

2. Short Term Scheduler

3. Medium Term Scheduler

Long Term Scheduler: It is also called a job scheduler. It selects processes from
the job queue and loads them into memory for execution. The primary objective of
the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and
processor-bound. It also controls the degree of multiprogramming.

Short Term Scheduler: It is also called the CPU scheduler. It selects one of the
processes in the ready queue as per the scheduling algorithm and allocates the
processor to execute it. It shifts the process from the ready state to the running
state. Its main objective is to increase system performance. Short-term schedulers
are faster than long-term schedulers.


Medium Term Scheduler: Medium-term scheduling is a part of swapping. When
a process is in the waiting state, it cannot make any progress towards completion.
In this condition, the waiting process is moved from main memory into secondary
memory to free up main memory space to load other processes. This is called
swapping, and the process is said to be swapped out or rolled out.

Long-Term Scheduler vs Short-Term Scheduler vs Medium-Term Scheduler:

• The long-term scheduler is a job scheduler; the short-term scheduler is a CPU
scheduler; the medium-term scheduler is a process-swapping scheduler.
• The long-term scheduler is slower than the short-term scheduler; the short-term
scheduler is the fastest of the three; the medium-term scheduler's speed lies in
between.
• The long-term scheduler controls the degree of multiprogramming; the short-term
scheduler provides lesser control over it; the medium-term scheduler reduces it.
• The long-term scheduler is almost absent or minimal in time-sharing systems;
the short-term scheduler is also minimal there; the medium-term scheduler is a
part of time-sharing systems.


• The long-term scheduler selects processes from the pool and loads them into
memory for execution; the short-term scheduler selects those processes which are
ready to execute; the medium-term scheduler can reintroduce a process into
memory so that its execution can be continued.

CONTEXT SWITCH

A context switch is the mechanism to store and restore the state, or context, of a
CPU in the PCB (Process Control Block) so that process execution can be resumed
from the same point at a later time. Using this technique, a context switcher enables
multiple processes to share a single CPU. Context switching is an essential part of a
multitasking operating system. When the scheduler switches the CPU from one
process to another, the state of the currently running process is stored into its PCB.
After this, the state of the process to run next is loaded from its own PCB and used
to set the PC, registers, etc. At that point, the second process can start executing.


Fig: Diagram showing CPU switch from process to process

2. OPERATIONS ON THE PROCESS

• Process Creation: Through appropriate system calls, such as fork() or spawn(),
processes may create other processes. The process which creates other processes is
termed the parent process, and the created sub-process is termed its child
process. Each process is given an integer identifier, termed the process identifier,
or PID. Once the process is created, it becomes ready and enters the ready
queue (main memory) for execution.
• Process Scheduling: Out of the many processes present in the ready queue, the
operating system chooses one process and starts executing it. Selecting the
process which is to be executed next is known as scheduling.
• Process Execution: The CPU starts executing the scheduled process. When a

process enters the blocked or waiting state during execution, the processor
starts executing the other processes.
• Process Termination: Once the purpose of the process is served, the OS
will kill the process. The context of the process (PCB) will be deleted and the
process gets terminated by the operating system. The running process can enter
the terminated state by using the system call exit(). The parent process normally
terminates only after terminating all its child processes. If the parent process
terminates before its child processes terminate, the child processes are called
orphan processes (a terminated child whose exit status has not yet been collected
by its parent is called a zombie process). Orphan processes are eventually adopted
by the system or killed off.
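The creation/termination cycle above can be demonstrated with the fork, exit, and waitpid system calls. The sketch below is Python on a Unix system, using the os-module wrappers around these calls; the exit status 7 is an arbitrary example value.

```python
import os

def spawn_child():
    """fork() a child that exit()s with status 7; the parent reaps it with waitpid()."""
    pid = os.fork()                  # returns 0 in the child, the child's PID in the parent
    if pid == 0:
        os._exit(7)                  # child terminates via the exit system call
    _, status = os.waitpid(pid, 0)   # parent blocks until the child terminates (no zombie left)
    return os.WEXITSTATUS(status)    # extract the child's exit status

print(spawn_child())  # on Linux, prints 7
```

If the parent skipped the waitpid() call, the terminated child would remain a zombie until the parent exits.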

3. INTERPROCESS COMMUNICATION(IPC)

Independent process: A process is said to be independent when it cannot affect or
be affected by any other processes running in the system. In other words, any
process that does not share data with other processes is called an independent
process.
Cooperating process: Cooperating processes are those that can affect or are
affected by other processes running on the system. Cooperating processes may
share data with each other.

Reasons for cooperating processes

There may be many reasons for the requirement of cooperating processes. Some of
these aregiven as follows:
• Modularity: Modularity involves dividing complicated tasks into smaller
subtasks. These subtasks can be completed by different cooperating processes.
This leads to faster and more efficient completion of the required tasks.
• Information Sharing: Sharing of information between multiple processes


can be accomplished using cooperating processes. This may include access to


the same files. A mechanism is required so that the processes can access the
files in parallel to each other.
• Convenience: There are many tasks that a user needs to do such as compiling,
printing, editing etc. It is convenient if the tasks can be managed by
cooperating processes.
• Computation Speedup: Subtasks of a single task can be performed parallel
using cooperating processes. This increases the computation speedup as the
task can be executed faster. However, this is only possible if the system has
multiple processing elements.

Methods of Cooperation (IPC)

Cooperating processes can coordinate and communicate with each other using
shared data or messages. Details about these are given as follows:

• Cooperation by Sharing: The cooperating processes can cooperate with each
other using shared data such as memory, variables, files, and databases. A critical
section is used to provide data integrity, and writing is made mutually exclusive
to prevent inconsistent data. A diagram that demonstrates cooperation by sharing
is given as follows:


Process P1

Shared Data

Process P2

Kernel
Fig: Shared memory

In the above diagram, processes P1 and P2 can cooperate with each other using
shared data such as memory, variables, files, databases, etc.
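A minimal sketch of cooperation by sharing, assuming Python's multiprocessing module on a Unix system: two processes increment one counter placed in shared memory, with a lock making the writes mutually exclusive (the function names here are illustrative, not part of any standard API).

```python
from multiprocessing import Process, Value

def add_many(counter, n):
    for _ in range(n):
        with counter.get_lock():     # critical section: writes are mutually exclusive
            counter.value += 1

def shared_count(n_per_proc):
    counter = Value("i", 0)          # one integer living in shared memory
    workers = [Process(target=add_many, args=(counter, n_per_proc)) for _ in range(2)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return counter.value             # always 2 * n_per_proc, thanks to the lock

print(shared_count(1000))  # prints 2000
```

Without the lock, the two read-modify-write sequences could interleave and produce an inconsistent (smaller) total.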

• Cooperation by Communication: The cooperating processes can cooperate
with each other using messages. This may lead to deadlock if each process is
waiting for a message from the other to perform an operation. Starvation is also
possible if a process never receives a message. A diagram that demonstrates
cooperation by communication is given as follows:

Process P1   M

Process P2   M

Kernel   M


Fig: Message passing

In the above diagram, processes P1 and P2 cooperate with each other using
messages to communicate. Process P1 sends message 'M' to the kernel. The
kernel delivers the message 'M' to process P2.
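Cooperation by communication can be sketched with a kernel-managed message queue. The hypothetical Python example below uses multiprocessing.Queue: one process puts 'M' on the queue and the other blocks until it arrives, mirroring the diagram above.

```python
from multiprocessing import Process, Queue

def sender(q):
    q.put("M")                    # P1 hands message 'M' to the kernel-managed queue

def receive_one():
    q = Queue()
    p1 = Process(target=sender, args=(q,))
    p1.start()
    msg = q.get()                 # P2 (here, the parent process) blocks until 'M' arrives
    p1.join()
    return msg

print(receive_one())  # prints M
```

If both sides called get() first and neither ever put a message, each would wait forever, which is exactly the deadlock risk described above.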

4. THREADS

A thread is an execution unit which consists of its own program counter, a stack,
and a set of registers. Threads are also known as lightweight processes. Threads
are a popular way to improve application performance through parallelism. The
CPU switches rapidly back and forth among the threads, giving the illusion that
the threads are running in parallel.

As each thread has its own independent set of resources for execution, multiple
tasks can be performed in parallel by increasing the number of threads.


Types of Thread

There are two types of threads:


1. User Threads

2. Kernel Threads


User threads are implemented above the kernel, without kernel support. These are
the threads that application programmers use in their programs.

Kernel threads are supported within the kernel of the OS itself. All modern OSs
support kernel level threads, allowing the kernel to perform multiple simultaneous
tasks and/or to service multiple kernel system calls simultaneously.

Multithreading Models: The user threads must be mapped to kernel threads, by one
of the following strategies:

• Many to One Model


• One to One Model
• Many to Many Model

Many to One Model

In the many to one model, many user-level threads are all mapped onto a single
kernel thread.

Thread management is handled by the thread library in user space, which is

efficient in nature.

One to One Model

The one to one model creates a separate kernel thread to handle each and every
user thread.

Ravindar.M,Asso.Prof,CSEDept,JITS-KNR

Most implementations of this model place a limit on how many threads can be
created.
Linux and Windows (from 95 to XP) implement the one-to-one model for threads.

Many to Many Model

• The many to many model multiplexes any number of user threads onto an
equal or smaller number of kernel threads, combining the best features of the
one-to-one and many-to-one models.
• Users can create any number of threads.
• A blocking kernel system call does not block the entire process.
• Processes can be split across multiple processors.

Thread Libraries: Thread libraries provide programmers with an API for the
creation and management of threads. Thread libraries may be implemented either
in user space or in kernel space. The user-space approach involves API functions
implemented solely within

the user space, with no kernel support. The kernel-space approach involves system
calls, and requires a kernel with thread library support.

Three types of thread libraries

1. POSIX Pthreads: may be provided as either a user or kernel library, as an
extension to the POSIX standard.
2. Win32 threads: provided as a kernel-level library on Windows systems.
3. Java threads: Since Java generally runs on a Java Virtual Machine, the
implementation of threads is based upon whatever OS and hardware the JVM
is running on, i.e. either Pthreads or Win32 threads depending on the system.

Benefits of Multithreading
1. Responsiveness

2. Resource sharing, hence allowing better utilization of resources.

3. Economy. Creating and managing threads becomes easier.

4. Scalability. One thread runs on one CPU. In Multithreaded processes,

threads can be distributed over a series of processors to scale.


5. Context Switching is smooth. Context switching refers to the procedure

followed by CPU to change from one task to another.
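The resource-sharing benefit can be illustrated with a small Python sketch using the threading module (the names here are illustrative): each thread computes in its own stack frame, yet all threads share the process's data, here the totals list.

```python
import threading

def thread_demo(n_threads):
    totals = [0] * n_threads                 # shared data: one slot per thread
    def work(i):
        # Each thread gets its own stack frame for i and the local computation,
        # but all threads share the process's totals list (code, data, heap).
        totals[i] = sum(range(1000))
    threads = [threading.Thread(target=work, args=(i,)) for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()                             # wait for every thread to finish
    return totals

print(thread_demo(4))  # prints [499500, 499500, 499500, 499500]
```

No inter-process communication mechanism is needed: the threads see each other's writes directly because they live in one address space.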

5. SCHEDULING CRITERIA

• The "best" scheduling algorithm can be decided based on many criteria. They
Ravindar.M,Asso.Prof,CSEDept,JITS-KNR
R20 CMR Technical Campus

are:

• CPU Utilization: CPU utilization is the fraction of time that the processor is busy
over a period of time. It is also used to estimate system performance. To make the
best use of the CPU and not waste any CPU cycle, the CPU should be kept working
most of the time (ideally 100% of the time). In a real system, CPU usage should
range from 40% (lightly loaded) to 90% (heavily loaded).
• Throughput: It is the total number of processes completed per unit time, i.e.,
the total amount of work done in a unit of time.
• Turnaround Time: It is the amount of time taken to execute a particular process,
i.e. the interval from the time of submission of the process (arrival time) to the
time of completion of the process (finish time).
Turnaround Time = Finish Time - Arrival Time
• Waiting Time: The sum of the periods of time spent waiting in the ready queue
for the CPU.
Waiting Time = Turnaround Time - Burst Time
• Load Average: It is the average number of processes residing in the ready queue
waiting for their turn to get into the CPU.
• Response Time: Amount of time it takes from when a request was submitted until
the first response is produced. Remember, it is the time till the first response and
not the completion of process execution (final response).

Note: In general, CPU utilization and throughput are maximized and the other
factors are minimized for proper optimization.

6. CPU SCHEDULING ALGORITHMS

The CPU scheduling algorithm decides the order of execution of the processes to

achieve maximum CPU utilization.

Important Terminologies

• Burst Time/Execution Time: It is the time required by the process to
complete execution. It is also called running time.
• Arrival Time: It is the time at which the process arrives in the ready queue.

• Finish Time: It is the time at which the process completes the execution.

• Turnaround Time: It is the time interval from the arrival time of the process to
the finish time of the process.

Turnaround Time = Finish Time - Arrival Time


• Waiting Time: The sum of the periods of time spent waiting in the ready
queue.

Waiting Time = Turnaround Time - Burst Time

• Average waiting time: It is the average of the waiting times of all the
processes executed over a period of time.
• Average turnaround time: It is the average of the turnaround times of all
the processes executed over a period of time.
• Response time: Amount of time it takes from when a request was
submitted until the first response is produced. Remember, it is the time till
the first response and not the completion of process execution (final
response).
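The two formulas above can be checked mechanically; here is a small illustrative Python helper (the function and argument names are hypothetical):

```python
def metrics(arrival, burst, finish):
    """Turnaround Time = Finish - Arrival; Waiting Time = Turnaround - Burst."""
    tat = [f - a for f, a in zip(finish, arrival)]
    wait = [t - b for t, b in zip(tat, burst)]
    return tat, wait

# Three processes with arrivals (0, 0, 2), bursts (8, 6, 3), finish times (8, 14, 17):
print(metrics([0, 0, 2], [8, 6, 3], [8, 14, 17]))  # → ([8, 14, 15], [0, 8, 12])
```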


The popular CPU scheduling algorithms are:


• First Come First Serve(FCFS) Scheduling
• Shortest-Job-First(SJF) Scheduling
• Priority Scheduling
• Round Robin(RR) Scheduling
• Multilevel Queue Scheduling
• Multilevel Feedback Queue Scheduling

i. First Come First Serve Scheduling: The First Come First Served (FCFS)
scheduling algorithm schedules the execution of the processes based on their arrival
time. The process which arrives first gets executed first, the process which arrives
second gets executed next, and so on. It is implemented by using a FIFO (First In
First Out) queue. In this, a new process enters through the tail of the queue, and the
scheduler selects a process from the head of the queue for execution. A perfect
real-life example of FCFS scheduling is buying tickets at a ticket counter.
• It is the easiest and most simple CPU scheduling algorithm.
• Average waiting time is very high as compared to other algorithms.
• It is a non-preemptive algorithm.
Advantages of FCFS

1. Simple
2. Easy

Disadvantages of FCFS

1. Since the scheduling method is non-preemptive, a process runs to completion.
2. Due to the non-preemptive nature of the algorithm, the problem of starvation may
occur.
3. Although it is easy to implement, it is poor in performance, since the average
waiting time is higher compared to other scheduling algorithms.
4. It suffers from the Convoy Effect.


Convoy Effect in FCFS

FCFS may suffer from the convoy effect if the burst time of the first job is the
highest among all. As in the real life, if a convoy is passing through the road then
the other persons may get blocked until it passes completely. This can be simulated
in the Operating System also.

If the CPU gets the processes of the higher burst time at the front end of the ready
queue then the processes of lower burst time may get blocked which means they
may never get the CPU if the job in the execution has a very high burst time. This is
called convoy effect or starvation.

PROBLEMS:
Question 1: Consider the following five processes =(P1,P2,P3,P4,P5) with Arrival
times=(0,0, 2, 5, 8 ) and Burst Time = ( 8, 6, 3, 5, 2 ) respectively. Find average
waiting time and average turnaround time for the above processes using FCFS CPU
scheduling algorithm.

Solution:

The GANTT chart is:


P1 P2 P3 P4 P5
0 8 14 17 22 24

Average turnaround time =(8+14+15+17+16)/5


=14.0
Average waiting time = (0+8+12+12+14)/5
=9.2
PROCESS  ARRIVAL TIME  BURST TIME  START TIME  FINISH TIME  TURNAROUND TIME  WAITING TIME
P1       0             8           0           8            8                0
P2       0             6           8           14           14               8
P3       2             3           14          17           15               12
P4       5             5           17          22           17               12
P5       8             2           22          24           16               14

Question 2 :Consider the following five processes =(P1,P2,P3,P4,P5) with Arrival


times =(0,1,2,3,4) and Burst Time = ( 2,6,4,9,2 ) respectively. Find average waiting
time and average turnaround time for the above processes using FCFS CPU
scheduling algorithm.

Solution:

The GANTT chart is:


P1 P2 P3 P4 P5
0 2 8 12 21 23

PROCESS  ARRIVAL TIME  BURST TIME  START TIME  FINISH TIME  TURNAROUND TIME  WAITING TIME
P1       0             2           0           2            2                0
P2       1             6           2           8            7                1
P3       2             4           8           12           10               6
P4       3             9           12          21           18               9
P5       4             2           21          23           19               17

Average turnaround time = (2+7+10+18+19)/5
= 11.2
Average waiting time = (0+1+6+9+17)/5
= 6.6
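The FCFS schedule can also be computed programmatically. The following illustrative Python sketch (hypothetical helper name) reproduces the averages of Question 1:

```python
def fcfs(arrival, burst):
    """Non-preemptive FCFS: run jobs in arrival order; returns (avg TAT, avg WT)."""
    n = len(arrival)
    order = sorted(range(n), key=lambda i: arrival[i])   # FIFO by arrival time
    t = 0
    tat, wait = [0] * n, [0] * n
    for i in order:
        t = max(t, arrival[i])        # CPU may sit idle until the job arrives
        t += burst[i]                 # the job runs to completion (no preemption)
        tat[i] = t - arrival[i]       # Turnaround Time = Finish - Arrival
        wait[i] = tat[i] - burst[i]   # Waiting Time = Turnaround - Burst
    return sum(tat) / n, sum(wait) / n

print(fcfs([0, 0, 2, 5, 8], [8, 6, 3, 5, 2]))  # → (14.0, 9.2)
```

Running it on Question 2's data gives (11.2, 6.6).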

ii. Shortest Job First (SJF) Scheduling: The Shortest Job First (SJF) scheduling
algorithm schedules the execution of the processes based on their burst time. The
process with the lowest burst time among the list of available processes is
executed first, the process with the second shortest burst time is executed next,
and so on. If two processes have the same burst time, then FCFS is used to break
the tie. This scheduling algorithm can be preemptive or non-preemptive.

• This is the best approach to minimize average waiting time.


• To successfully implement it, the burst time of the processes should be
known to the processor in advance, which is practically not possible all the
time.
• May suffer with the problem of starvation.
Advantages of SJF

1. Maximum throughput
2. Minimum average waiting and turnaround time


Disadvantages of SJF

1. May suffer with the problem of starvation


2. It is not implementable because the exact Burst time for a process can't be known
in advance.

Question: Consider the following five processes=(P1,P2,P3,P4,P5)with Arrival


times=(0,0, 2, 5, 8 ) and Burst Time = ( 8, 6, 3, 5, 2 ) respectively. Find average
waiting time and average turnaround time for the above processes using SJF CPU
scheduling algorithm.
Solution: The GANTT chart is:
P2 P3 P5 P4 P1
0 6 9 11 16 24
PROCESS  ARRIVAL TIME  BURST TIME  START TIME  FINISH TIME  TURNAROUND TIME  WAITING TIME
P1       0             8           16          24           24               16
P2       0             6           0           6            6                0
P3       2             3           6           9            7                4
P4       5             5           11          16           11               6
P5       8             2           9           11           3                1

Average turnaround time = (24+6+7+11+3)/5
= 10.2
Average waiting time = (16+0+4+6+1)/5
= 5.4
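A sketch of non-preemptive SJF, with FCFS used to break burst-time ties (an illustrative helper, not the only way to implement it); it reproduces the per-process turnaround times in the table above:

```python
def sjf(arrival, burst):
    """Non-preemptive SJF: of the arrived jobs, always run the shortest burst next."""
    n = len(arrival)
    done = [False] * n
    t = 0
    tat, wait = [0] * n, [0] * n
    for _ in range(n):
        ready = [i for i in range(n) if not done[i] and arrival[i] <= t]
        if not ready:                               # CPU idle: jump to the next arrival
            t = min(arrival[i] for i in range(n) if not done[i])
            ready = [i for i in range(n) if not done[i] and arrival[i] <= t]
        i = min(ready, key=lambda j: (burst[j], arrival[j]))   # FCFS breaks ties
        t += burst[i]                               # selected job runs to completion
        tat[i] = t - arrival[i]
        wait[i] = tat[i] - burst[i]
        done[i] = True
    return sum(tat) / n, sum(wait) / n

print(sjf([0, 0, 2, 5, 8], [8, 6, 3, 5, 2]))  # → (10.2, 5.4)
```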

iii. Shortest Remaining Time First (SRTF): SRTF, also called Shortest Remaining
Time Next (SRN), is the preemptive version of the Shortest Job First (SJF)
algorithm. Initially, it proceeds like SJF, but when a new process arrives with a
burst time shorter than the remaining burst time of the currently executing process,
the currently executing process is stopped and the newly arrived process executes.


Advantages of SRTF

1. Maximum throughput
2. Minimum average waiting and turnaround time

Disadvantages of SRTF

1. May suffer with the problem of starvation


2. It is not implementable because the exact Burst time for a process can't be known
in advance.

Question: Consider the following five processes = (P1, P2, P3, P4, P5) with Arrival
times = (0, 0, 2, 3, 5) and Burst Time = (9, 8, 4, 2, 4) respectively. Find the average
waiting time and average turnaround time for the above processes using the
preemptive version of SJF, i.e. Shortest Remaining Time First (SRTF) CPU
scheduling.

Solution: The GANTT chart is:


P2   P3   P4   P3   P5   P2   P1
0    2    3    5    8    12   18   27

PROCESS  ARRIVAL TIME  BURST TIME  START TIME  FINISH TIME  TURNAROUND TIME  WAITING TIME
P1       0             9           18          27           27               18
P2       0             8           0           18           18               10
P3       2             4           2           8            6                2
P4       3             2           3           5            2                0
P5       5             4           8           12           7                3


Average turnaround time = (27+18+6+2+7)/5
= 12.0
Average waiting time = (18+10+2+0+3)/5
= 6.6
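SRTF can be simulated one time unit at a time. The illustrative Python sketch below (hypothetical names) is consistent with the per-process values in the table above:

```python
def srtf(arrival, burst):
    """Preemptive SJF (SRTF): each time unit, run the least-remaining-time job."""
    n = len(arrival)
    rem = list(burst)                # remaining burst time of every job
    finish = [0] * n
    t, finished = 0, 0
    while finished < n:
        ready = [i for i in range(n) if rem[i] > 0 and arrival[i] <= t]
        if not ready:
            t += 1                   # CPU idle this time unit
            continue
        i = min(ready, key=lambda j: (rem[j], arrival[j]))  # FCFS breaks ties
        rem[i] -= 1                  # run the chosen job for one time unit
        t += 1
        if rem[i] == 0:
            finish[i] = t
            finished += 1
    tat = [finish[i] - arrival[i] for i in range(n)]
    wait = [tat[i] - burst[i] for i in range(n)]
    return sum(tat) / n, sum(wait) / n

print(srtf([0, 0, 2, 3, 5], [9, 8, 4, 2, 4]))  # → (12.0, 6.6)
```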

iv. Priority Scheduling: Priority scheduling algorithm schedules the execution of the
processes based on their priorities. A priority value is associated with each process.
The process with the highest priority among the list of available processes is
executed first, second highest priority process executed next and so on. If two
processes have the same priority then FCFS is used to break the tie. This scheduling
algorithm can be preemptive or non-preemptive.

• Non-Preemptive Priority Scheduling: In the non-preemptive priority
scheduling algorithm, if a new process arrives with a higher priority than the
currently running process, the incoming process is put at the head of the ready
queue, which means it will be processed after the execution of the current
process.
• Preemptive Priority Scheduling: If a new process arrives with a higher
priority than the currently running process, then the execution of the current
process is stopped and the new process with the higher priority gets executed.
Advantages:
Good when the resources are limited and priorities for each process are defined
beforehand
Disadvantages:
1. Difficult to objectively decide which processes are given higher priority.
2. Low priority processes will be lost if the computer crashes.

Question: Consider the following five processes = ( P1, P2, P3, P4, P5 ) with
Arrival times = ( 0,0, 2, 3, 5 ), Burst Time = ( 9, 8, 4, 2, 4 ) and their Priority
values = (4,3,2,5,1) respectively. Find average waiting time and average

turnaround time for the above processes using

i. Priority CPU scheduling algorithm


ii. Preemptive version of Priority CPU scheduling algorithm.

Solution:

i. Priority CPU scheduling algorithm:


The GANTT chart is
P2   P5   P3   P1   P4
0    8    12   16   25   27

PROCESS  ARRIVAL TIME  BURST TIME  PRIORITY  START TIME  FINISH TIME  TURNAROUND TIME  WAITING TIME
P1       0             9           4         16          25           25               16
P2       0             8           3         0           8            8                0
P3       2             4           2         12          16           14               10
P4       3             2           5         25          27           24               22
P5       5             4           1         8           12           7                3

Average turnaround time = (25+8+14+24+7)/5
= 15.6
Average waiting time = (16+0+10+22+3)/5
= 10.2

ii. Preemptive version of Priority CPU scheduling algorithm:


The GANTT chart is


P2 P3 P5 P3 P2 P1 P4
0 2 5 9 10 16 25 27

PROCESS  ARRIVAL TIME  BURST TIME  PRIORITY  START TIME  FINISH TIME  TURNAROUND TIME  WAITING TIME
P1       0             9           4         16          25           25               16
P2       0             8           3         0           16           16               8
P3       2             4           2         2           10           8                4
P4       3             2           5         25          27           24               22
P5       5             4           1         5           9            4                0

Average turnaround time = (25+16+8+24+4)/5
= 15.4
Average waiting time = (16+8+4+22+0)/5
= 10.0
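The preemptive variant can be simulated the same way as SRTF, picking the highest-priority (here, smallest priority number, as in the question above) ready process each time unit. An illustrative sketch with hypothetical names:

```python
def priority_preemptive(arrival, burst, prio):
    """Preemptive priority scheduling; smaller prio value = higher priority."""
    n = len(arrival)
    rem = list(burst)
    finish = [0] * n
    t, finished = 0, 0
    while finished < n:
        ready = [i for i in range(n) if rem[i] > 0 and arrival[i] <= t]
        if not ready:
            t += 1                                            # CPU idle this time unit
            continue
        i = min(ready, key=lambda j: (prio[j], arrival[j]))   # FCFS breaks ties
        rem[i] -= 1                                           # run one time unit
        t += 1
        if rem[i] == 0:
            finish[i] = t
            finished += 1
    tat = [finish[i] - arrival[i] for i in range(n)]
    wait = [tat[i] - burst[i] for i in range(n)]
    return sum(tat) / n, sum(wait) / n

print(priority_preemptive([0, 0, 2, 3, 5], [9, 8, 4, 2, 4], [4, 3, 2, 5, 1]))  # → (15.4, 10.0)
```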

v. Round Robin Scheduling: A certain fixed time is defined in the system, called
the time quantum or time slice. Each process present in the ready queue is
executed for that time quantum; if the execution of the process is completed
during that time, the process terminates, else the process goes back to
the ready queue and waits for the next round to complete the remaining part of
its execution. Context switching is used to save the execution state of the
preempted process. This algorithm avoids the starvation problem.

Fig: Round Robin scheduling (a new process enters the ready queue; the CPU
executes it for one time quantum; if execution is completed it exits, otherwise it
re-enters the ready queue)

• Round Robin is a preemptive process scheduling algorithm.

• The time slice should be small; if it is very large, Round Robin behaves like FCFS.

• Round Robin is a clock-driven CPU scheduling algorithm.

Advantages

1. It is actually implementable in the system because it does not depend on the
burst time.
2. It doesn't suffer from the problem of starvation or convoy effect.
3. All the jobs get a fair allocation of CPU.

Disadvantages

1. The higher the time quantum, the higher the response time in the system.
2. The lower the time quantum, the higher the context switching overhead in the
system.
3. Deciding a perfect time quantum is really a very difficult task in the system.

Question: Consider the following five processes =( P1, P2,P3,P4,P5) with Arrival
times =( 0,2,3,4,7) and Burst Time=(9,8,4,6, 8 ) respectively. Find average
waiting time and average turnaround time for the above processes using Round
Robin CPU Scheduling algorithm. Use time quantum / time slice=3.

Solution: The GANTT chart is


P1   P2   P3   P1   P4   P2   P5   P3   P1   P4   P2   P5
0    3    6    9    12   15   18   21   22   25   28   30   35


PROCESS  ARRIVAL TIME  BURST TIME  START TIME  FINISH TIME  TURNAROUND TIME  WAITING TIME
P1       0             9           0           25           25               16
P2       2             8           3           30           28               20
P3       3             4           6           22           19               15
P4       4             6           12          28           24               18
P5       7             8           18          35           28               20

Average turnaround time = (25+28+19+24+28)/5
= 24.8
Average waiting time = (16+20+15+18+20)/5
= 17.8
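Round Robin can be simulated with a FIFO ready queue. The illustrative Python sketch below (hypothetical names) reproduces the averages of the worked example:

```python
from collections import deque

def round_robin(arrival, burst, quantum):
    """Round Robin with a FIFO ready queue; preempted jobs rejoin at the tail."""
    n = len(arrival)
    order = sorted(range(n), key=lambda i: arrival[i])   # admission order
    rem = list(burst)
    finish = [0] * n
    q = deque()
    t, nxt = 0, 0
    while nxt < n or q:
        if not q:                                        # idle until the next arrival
            t = max(t, arrival[order[nxt]])
        while nxt < n and arrival[order[nxt]] <= t:      # admit all arrived jobs
            q.append(order[nxt])
            nxt += 1
        i = q.popleft()
        run = min(quantum, rem[i])                       # run for at most one quantum
        t += run
        rem[i] -= run
        while nxt < n and arrival[order[nxt]] <= t:      # arrivals during this slice
            q.append(order[nxt])
            nxt += 1
        if rem[i] > 0:
            q.append(i)                                  # preempted: back to the tail
        else:
            finish[i] = t                                # completed: record finish time
    tat = [finish[i] - arrival[i] for i in range(n)]
    wait = [tat[i] - burst[i] for i in range(n)]
    return sum(tat) / n, sum(wait) / n

print(round_robin([0, 2, 3, 4, 7], [9, 8, 4, 6, 8], 3))  # → (24.8, 17.8)
```

Note the ordering choice: jobs that arrive during a time slice join the queue before the preempted job rejoins, which matches the Gantt chart above.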

vi. Multilevel Queue Scheduling: A multilevel queue scheduling algorithm
partitions the ready queue into several separate queues. The processes are
classified into different groups. Each group of processes is assigned to one
queue, and each queue has its own scheduling algorithm. Let us consider an
example of a multilevel queue scheduling algorithm with five queues:

System Processes
Interactive Processes
Interactive Editing Processes
Batch Processes
Student Processes

Each queue has absolute priority over lower-priority queues. For example, no process
in the batch queue could run unless the queues for system processes, interactive
processes, and interactive editing processes were all empty. If an interactive editing
process entered the ready queue while a batch process was running, the batch process
would be preempted.

Advantages:

• The processes are permanently assigned to their queues, so it has the advantage
of low scheduling overhead.

Disadvantages:

• Some processes may starve for CPU if the higher-priority queues never become
empty.
• It is inflexible in nature.

viii. Multilevel Feedback Queue Scheduling: In a multilevel feedback queue-


scheduling algorithm, initially all the processes enter a single queue. Later,
a process is allowed to move between queues. If a process uses too much CPU time,
it will be moved to a lower-priority queue. Similarly, a process that waits too long
in a lower-priority queue may be moved to a higher-priority queue. This
form of aging prevents starvation.


• Advantages – It is the most flexible scheduling algorithm. Allows aging, thus no starvation.


• Disadvantages – It is also the most complex algorithm, and moving processes between queues adds scheduling overhead.

7. MULTIPLE-PROCESSOR SCHEDULING

In multiple-processor scheduling, multiple CPUs are available. The OS can use any
available processor to run any process in the queue. Hence multiple processes can
be executed in parallel. However, multiple-processor scheduling is more complex
than single-processor scheduling.

Approaches to Multiple-Processor Scheduling:

i. Asymmetric Multiprocessing vs Symmetric Multiprocessing: In Asymmetric

Multiprocessing, all the scheduling decisions and I/O processing are handled by a
single processor called the Master Server, and the other processors execute only
user code. This is simple and reduces the need for data sharing.
In Symmetric Multiprocessing, each processor is self-scheduling. All
processes may be in a common ready queue, or each processor may maintain its
own private queue of ready processes. The scheduling proceeds further by having


the scheduler for each processor examine the ready queue and select a process to
execute.
ii. Processor Affinity: When a process runs on a specific processor, there are certain

effects on the cache memory. The data most recently accessed by the process
populate the cache for that processor, and as a result successive memory accesses by
the process are often satisfied in the cache memory. Now if the process migrates to
another processor, the contents of the cache memory must be invalidated for the
first processor and the cache for the second processor must be repopulated.
Because of the high cost of invalidating and repopulating caches, most
SMP (symmetric multiprocessing) systems try to avoid migration of processes
from one processor to another and try to keep a process running on the same
processor. This is known as PROCESSOR AFFINITY. There are two types of
processor affinity:
a. Soft Affinity: When an operating system has a policy of attempting to keep

a process running on the same processor but not guaranteeing that it will do so,
this situation is called soft affinity.
b. Hard Affinity: Some operating systems such as Linux provide system calls

(e.g., sched_setaffinity()) to support hard affinity, which allows a process to
specify the set of processors on which it may run and guarantees it will not
migrate to other processors.


iii. Load Balancing: Load balancing is the technique of keeping the workload

evenly distributed across all processors in an SMP system. On SMP

(symmetric multiprocessing) systems, it is important to keep the workload balanced
among all processors to fully utilize the benefits of having more than one
processor; otherwise one or more processors will sit idle while other processors
have high workloads, along with lists of processes awaiting the CPU. There are two
general approaches to load balancing:
a. Push Migration – In push migration, a specific task routinely checks the load on
each processor and, if it finds an imbalance, evenly distributes the load by
moving processes from overloaded to idle or less busy processors.

b. Pull Migration – Pull Migration occurs when an idle processor pulls a


waiting task from a busy processor for its execution.
iv. Multicore Processors: In multicore processors, multiple processor cores are

placed on the same physical chip. Each core has a register set to maintain its
architectural state, and thus appears to the operating system as a separate physical
processor. SMP systems that use multicore processors are faster and consume less
power than systems in which each processor has its own physical chip.
However, multicore processors may complicate the scheduling problems.
When a processor accesses memory, it spends a significant amount of time
waiting for the data to become available. This situation is called a MEMORY
STALL. It occurs for various reasons, such as a cache miss, which is accessing
data that is not in the cache memory. In such cases the processor can spend up to
fifty percent of its time waiting for data to become available from memory. To
solve this problem, recent hardware designs have implemented multithreaded
processor cores in which two or more hardware threads are assigned to each core.
Therefore, if one thread stalls while waiting for memory, the core can switch to
another thread. There are two ways to multithread a processor:
a. Coarse-Grained Multithreading: In coarse-grained multithreading, a

thread executes on a processor until a long-latency event such as a memory


stall occurs. Because of the delay caused by the long-latency event, the
processor must switch to another thread to begin execution. The cost of
switching between threads is large.
b. Fine-Grained Multithreading: This multithreading switches between

threads at a much finer level, mainly at the boundary of an instruction cycle.


The architectural design of fine-grained systems includes logic for thread
switching, and as a result the cost of switching between threads is small.
v. Virtualization and Threading: In this type of multiple-processor scheduling,

even a single-CPU system can act like a multiple-processor system. In a system with
virtualization, the virtualization layer presents one or more virtual CPUs to each of the

virtual machines running on the system and then schedules the use of physical
CPU among the virtual machines. Most virtualized environments have one host
operating system and many guest operating systems. The host operating system
creates and manages the virtual machines. Each virtual machine has a guest
operating system installed and applications run within that guest. Each guest
operating system may be assigned for specific use cases, applications or users
including time sharing or even real-time operation. Any guest operating-system
scheduling algorithm that assumes a certain amount of progress in a given amount
of time will be negatively impacted by the virtualization. The net effect of such
scheduling layering is that individual virtualized operating systems receive only a
portion of the available CPU cycles, even though they believe they are receiving
all cycles and that they are scheduling all of those cycles. Commonly, the time-
of-day clocks in virtual machines are incorrect because timers take no longer to
trigger than they would on dedicated CPU’s. Virtualizations can thus undo the
good scheduling-algorithm efforts of the operating systems within virtual
machines.

8. PROCESS MANAGEMENT SYSTEM CALLS - fork, exit,


wait, waitpid, exec

Process management is done with a number of system calls, each with a single
(simple) purpose. These system calls can then be combined to implement more
complex behaviors. The basic process management system calls are: fork( ), exec( ),
wait( ), waitpid( ), exit( ), etc.

System Call  Purpose

fork()       It is used to create a new process.
exec()       It runs a new program.
wait()       It is used by the parent process to know the exit status of the child.
waitpid()    It is used by the parent process to know the exit status of a
             particular child.
exit()       It is used to terminate the running process.

• fork( ) : This system call is used to create a new process. A parent process uses
fork to create a new child process. The child process is an almost-exact duplicate of
the parent process. After fork, both parent and child execute the same program
but in separate processes.
pid_t pid = fork();

On successful execution of fork( ), the process ID (PID) of the child is returned

to the parent, and 0 is returned to the child. On failure, -1 is returned to the parent,
no child process is created, and errno is set appropriately.
• exec( ): When a child process doesn't want to execute the same program as the
parent, it will use this system call. It loads a new process image into the current
process space and runs it from the entry point. This is known as an overlay. In this
case, a new process is not created, but the data, heap, stack, etc. of the process are
replaced by those of the new program. exec is actually a family of calls, including
execl( ), execv( ), execlp( ), and others.
exec();


• wait( ): This system call is used by the parent process to obtain the exit status of
a child process. When this system call is used, the execution of the parent process
is suspended until its child terminates. The signature of wait( ) is:
pid_t wait(int *status);

On success, it returns the PID of the terminated child. On failure (no child), it

returns -1.

• waitpid( ) : When a parent process has more than one child process, the
waitpid( ) system call is used by the parent process to know the termination state
of a particular child. The signature of waitpid( ) is:
pid_t waitpid(pid_t pid, int *status, int options);

• exit( ): The exit( ) system call is used by a program to terminate its execution.
The operating system reclaims the resources that were used by the process after the
exit( ) system call.
void exit(int status);

Orphan process: An orphan process is a running child process whose parent

process has completed execution or terminated.

Zombie process: A zombie process or defunct process is a process that has

completed execution but still has an entry in the process table. This occurs for
child processes, where the entry is still needed to allow the parent process to read its
child's exit status: once the exit status is read by the parent process via the wait( ) system
call, the zombie's entry is removed from the process table and it is said to be
"reaped". A child process always first becomes a zombie before being removed from
the process table.
**************************** ALL THE BEST *****************************

