
Process Scheduling in OS


UNIT-2

Process Scheduling

What is Process Scheduling?

The act of determining which process in the ready state should be moved to the
running state is known as Process Scheduling.

The prime aim of the process scheduling system is to keep the CPU busy at all times and to
deliver minimum response time for all programs. To achieve this, the scheduler must apply
appropriate rules for swapping processes in and out of the CPU.

Scheduling falls into one of two general categories:

 Non Pre-emptive Scheduling: When the currently executing process gives up the CPU
voluntarily.

 Pre-emptive Scheduling: When the operating system decides to favor another process,
pre-empting the currently executing process.

Scheduling Queues:

 All processes, upon entering into the system, are stored in the Job Queue.

 Processes in the Ready state are placed in the Ready Queue.

 Processes waiting for a device to become available are placed in Device Queues. There
are unique device queues available for each I/O device.

A new process is initially put in the Ready queue. It waits in the ready queue until it is selected
for execution (or dispatched). Once the process is assigned to the CPU and is executing, one of
several events can occur:

 The process could issue an I/O request, and then be placed in the I/O queue.

 The process could create a new sub process and wait for its termination.

 The process could be removed forcibly from the CPU, as a result of an interrupt, and be
put back in the ready queue.
In the first two cases, the process eventually switches from the waiting state to the ready state,
and is then put back in the ready queue. A process continues this cycle until it terminates, at
which time it is removed from all queues and has its PCB and resources deallocated.

Types of Schedulers

There are three types of schedulers available:

1. Long Term Scheduler

2. Short Term Scheduler

3. Medium Term Scheduler

Let's discuss each of these types of schedulers in detail:

Long Term Scheduler

The long term scheduler runs less frequently. Long term schedulers decide which programs are
admitted into the job queue. From the job queue, the job scheduler selects processes and loads them
into memory for execution. The primary aim of the job scheduler is to maintain a good degree of
multiprogramming. An optimal degree of multiprogramming means the average rate of process
creation is equal to the average departure rate of processes from the execution memory.

Short Term Scheduler

This is also known as CPU Scheduler and runs very frequently. The primary aim of this
scheduler is to enhance CPU performance and increase process execution rate.

Medium Term Scheduler

This scheduler removes processes from memory (and from active contention for the CPU),
and thus reduces the degree of multiprogramming. At some later time, the process can be
reintroduced into memory and its execution can be continued where it left off. This scheme is
called swapping. The process is swapped out, and is later swapped in, by the medium term
scheduler.

Swapping may be necessary to improve the process mix, or because a change in memory
requirements has overcommitted available memory, requiring memory to be freed up. This
complete process is depicted in the diagram below:

What is Context Switch?

1. Switching the CPU to another process requires saving the state of the old process
and loading the saved state for the new process. This task is known as a Context Switch.

2. The context of a process is represented in the Process Control Block (PCB) of the process;
it includes the value of the CPU registers, the process state, and memory-management
information. When a context switch occurs, the kernel saves the context of the old
process in its PCB and loads the saved context of the new process scheduled to run.

3. Context switch time is pure overhead, because the system does no useful work while
switching. Its speed varies from machine to machine, depending on the memory speed,
the number of registers that must be copied, and the existence of special instructions (such
as a single instruction to load or store all registers). Typical speeds range from 1 to 1000
microseconds.

4. Context switching has become such a performance bottleneck that programmers are
using new structures (threads) to avoid it whenever and wherever possible.

Different Operations on Processes


There are many operations that can be performed on processes. Some of these are process
creation, process preemption, process blocking, and process termination. These are given in
detail as follows −
Process Creation
Processes need to be created in the system for different operations. This can be done by the
following events −

 User request for process creation


 System initialization
 Execution of a process creation system call by a running process
 Batch job initialization
A process may be created by another process using fork(). The creating process is called the
parent process and the created process is the child process. A child process can have only one
parent but a parent process may have many children. Both the parent and child processes have
the same memory image, open files, and environment strings. However, they have distinct
address spaces.
A diagram that demonstrates process creation using fork() is as follows −
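As a complement to the diagram, here is a minimal sketch of process creation with fork() in C
(POSIX). The parent and child both resume execution after the fork() call and are distinguished
by its return value:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();           /* create a child process */

    if (pid < 0) {                /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {        /* child: fork() returned 0 */
        printf("child: pid=%d, parent=%d\n", getpid(), getppid());
        exit(0);
    } else {                      /* parent: fork() returned the child's pid */
        wait(NULL);               /* wait for the child to terminate */
        printf("parent: child %d has terminated\n", pid);
    }
    return 0;
}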

Process Preemption
Preemption uses an interrupt mechanism to suspend the currently executing process; the
short-term scheduler then determines the next process to execute. Preemption makes sure
that all processes get some CPU time for execution.
A diagram that demonstrates process preemption is as follows −
Process Blocking
A process is blocked if it is waiting for some event to occur. This event is often I/O, since I/O
operations are carried out by devices and do not require the processor. After the event is
complete, the process goes back to the ready state.
A diagram that demonstrates process blocking is as follows −
Process Termination
After the process has completed the execution of its last instruction, it is terminated. The
resources held by a process are released after it is terminated.
A child process can be terminated by its parent process if its task is no longer needed. The child
process sends its status information to the parent process before it terminates. Also, when a
parent process is terminated, its child processes are terminated as well, since child processes
cannot run once their parent has terminated.
Inter Process Communication (IPC)
A process can be of two types:
 Independent process.
 Co-operating process.
An independent process is not affected by the execution of other processes, while a co-
operating process can be affected by other executing processes. Although one might think that
processes running independently will execute very efficiently, in reality there are many
situations where the co-operative nature of processes can be utilized to increase
computational speed, convenience, and modularity. Inter-process communication (IPC) is a
mechanism that allows processes to communicate with each other and synchronize their
actions. The communication between these processes can be seen as a method of co-operation
between them.

Basics of Inter Process Communication:

 Information sharing: Since some users may be interested in the same piece of information
(for example, a shared file), you must provide an environment that allows concurrent access to that
information.
 Computation speedup: If you want a particular task to run faster, you must break it into sub-
tasks, each of which executes in parallel with the others. Note that such a speed-up can be
attained only when the computer has multiple processing elements such as CPUs or I/O
channels.
 Modularity: You may want to build the system in a modular way by dividing the system
functions into separate processes or threads.
 Convenience: Even a single user may work on many tasks at a time. For example, a user may
be editing, formatting, printing, and compiling in parallel.

Processes working together require an inter-process communication (IPC) method that allows
them to exchange data and information. There are two primary models of inter-process
communication:

1. Shared memory and


2. Message passing.
In the shared-memory model, a region of memory shared by the cooperating processes is
established. Processes can then exchange information by reading and writing data in the
shared region. In the message-passing model, communication takes place by means of
messages exchanged between the cooperating processes.

The two communications models are contrasted in the figure below:
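As a small illustration of the message-passing model, the following C sketch sends one message
from a parent process to its child over a POSIX pipe; the pipe stands in for the message channel
(a real message-passing facility, such as a message queue, would be used similarly):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                       /* fd[0]: read end, fd[1]: write end */
    char buf[64];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {               /* child: the receiver */
        close(fd[1]);                /* close the unused write end */
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n < 0) n = 0;
        buf[n] = '\0';
        printf("child received: %s\n", buf);
        close(fd[0]);
        return 0;
    }
    /* parent: the sender */
    close(fd[0]);                    /* close the unused read end */
    const char *msg = "hello via message passing";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}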

Threads in Operating System

 A thread is a single sequential flow of execution of tasks of a process, so it is also known
as a thread of execution or thread of control.
 Threads execute inside a process, and there can be more than one thread inside a process.
 Each thread of the same process makes use of a separate program counter, a stack of
activation records, and control blocks.
 A thread is often referred to as a lightweight process.
A process can be split into many threads. For example, in a browser, many tabs can
be viewed as threads. MS Word uses many threads: formatting text from one thread, processing
input from another thread, etc.

Need of Thread:
o It takes far less time to create a new thread in an existing process than to create a new
process.
o Threads share common data, so they do not need to use inter-process
communication.
o Context switching is faster when working with threads.
o It takes less time to terminate a thread than a process.
The major differences between a process and a thread are given as follows −

Comparison of Process and Thread:

Definition
  Process: A program under execution, i.e., an active program.
  Thread: A lightweight process that can be managed independently by a scheduler.

Context switching time
  Process: Processes require more time for context switching, as they are heavier.
  Thread: Threads require less time for context switching, as they are lighter than processes.

Memory sharing
  Process: Processes are totally independent and don't share memory.
  Thread: A thread may share some memory with its peer threads.

Communication
  Process: Communication between processes requires more time than between threads.
  Thread: Communication between threads requires less time than between processes.

Blocked
  Process: If a process gets blocked, the remaining processes can continue execution.
  Thread: If a user-level thread gets blocked, all of its peer threads also get blocked.

Resource consumption
  Process: Processes require more resources than threads.
  Thread: Threads generally need fewer resources than processes.

Dependency
  Process: Individual processes are independent of each other.
  Thread: Threads are parts of a process and so are dependent.

Data and code sharing
  Process: Processes have independent data and code segments.
  Thread: A thread shares the data segment, code segment, files, etc. with its peer threads.

Treatment by OS
  Process: All processes are treated separately by the operating system.
  Thread: All user-level peer threads are treated as a single task by the operating system.

Time for creation
  Process: Processes require more time for creation.
  Thread: Threads require less time for creation.

Time for termination
  Process: Processes require more time for termination.
  Thread: Threads require less time for termination.

Types of Threads

In the operating system, there are two types of threads.

1. Kernel level thread.


2. User-level thread.

User-level thread

 The operating system does not recognize user-level threads.
 User-level threads can be easily implemented, and they are implemented by the user.
 If a user performs a blocking operation on a user-level thread, the whole process is blocked.
 The kernel knows nothing about user-level threads.
 The kernel manages user-level threads as if they belonged to single-threaded processes.

Examples: Java threads, POSIX threads, etc.

Advantages of User-level threads

1. User-level threads are easier to implement than kernel-level threads.
2. User-level threads can be used on operating systems that do not support threads at the
kernel level.
3. They are faster and more efficient.
4. Context switch time is shorter than for kernel-level threads.
5. They do not require modifications to the operating system.
6. The user-level thread representation is very simple: the registers, PC, stack, and mini thread
control blocks are stored in the address space of the user-level process.
7. It is simple to create, switch, and synchronize threads without the intervention of the
kernel.

Disadvantages of User-level threads

1. User-level threads lack coordination between the thread and the kernel.
2. If a thread causes a page fault, the entire process is blocked.

Kernel level thread

 The operating system recognizes kernel-level threads.
 There is a thread control block and a process control block in the system for each thread
and each process.
 Kernel-level threads are implemented by the operating system.
 The kernel knows about all the threads and manages them.
 Kernel-level threads offer system calls to create and manage threads from user
space.
 The implementation of kernel threads is more difficult than that of user threads.
 Context switch time is longer for kernel threads.
 If a kernel thread performs a blocking operation, the execution of other threads can
continue. Examples: Windows, Solaris.
Advantages of Kernel-level threads

1. The kernel is fully aware of all threads.

2. The scheduler may decide to give more CPU time to a process that has a large number
of threads.
3. Kernel-level threads are good for applications that block frequently.

Disadvantages of Kernel-level threads

1. The kernel must manage and schedule all threads, which adds overhead.

2. The implementation of kernel threads is more difficult than that of user threads.
3. Kernel-level threads are slower than user-level threads.

Components of Threads
Any thread has the following components.

1. Program counter
2. Register set
3. Stack space

Benefits of Threads
o Enhanced throughput of the system: When the process is split into many threads, and
each thread is treated as a job, the number of jobs done in the unit time increases. That is
why the throughput of the system also increases.
o Effective Utilization of Multiprocessor system: When you have more than one thread
in one process, you can schedule more than one thread in more than one processor.
o Faster context switch: The context switching period between threads is less than the
process context switching. The process context switch means more overhead for the
CPU.
o Responsiveness: When a process is split into several threads, and one thread
completes its execution, its output can be returned immediately, keeping the process
responsive.
o Communication: Communication among multiple threads is simple because the threads share the
same address space, whereas communication between two processes must rely on a few
special communication strategies.
o Resource sharing: Resources such as code, data, and files can be shared among all threads
within a process. Note: the stack and registers cannot be shared between threads;
each thread has its own stack and register set.
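To make the shared-address-space point concrete, here is a minimal POSIX threads sketch in C:
two threads increment a counter that lives in the process's shared data segment, while each
thread runs on its own stack; the mutex is needed precisely because the data is shared.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                       /* shared by all threads of the process */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);             /* protect the shared data */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);   /* each thread gets its own stack */
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);        /* 200000: both threads saw the same data */
    return 0;
}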

Multithreading Model:

Multithreading allows an application to divide its task into individual threads. In multithreading,
the same process or task is carried out by a number of threads; in other words, more than one
thread performs the task. With the use of multithreading, multitasking can be achieved.
The main drawback of single-threaded systems is that only one task can be performed at a time.
Multithreading overcomes this drawback by allowing multiple tasks to be performed.

There are three established multithreading models classifying these relationships:

o Many to one multithreading model


o One to one multithreading model
o Many to Many multithreading models

Many to one multithreading model:

 The many-to-one model maps many user-level threads to one kernel thread.
 This type of relationship facilitates an effective context-switching environment, easily
implemented even on a simple kernel with no thread support.

The disadvantage of this model is that, since only one kernel-level thread is scheduled at any
given time, it cannot take advantage of the hardware acceleration offered by multithreaded
processors or multi-processor systems. In this model, all thread management is done in
user space. If one thread makes a blocking call, the whole process blocks.
In the above figure, the many-to-one model associates all user-level threads with a single
kernel-level thread.

One to one multithreading model

 The one-to-one model maps a single user-level thread to a single kernel-level thread.
 This type of relationship facilitates the running of multiple threads in parallel.
 However, this benefit comes with a drawback.
 The creation of every new user thread requires creating a corresponding kernel
thread, causing overhead which can hinder the performance of the parent process.
 Windows and Linux operating systems try to tackle this problem by limiting the
growth of the thread count.
In the above figure, the one-to-one model associates each user-level thread with a single
kernel-level thread.

Many to Many Model multithreading model

 In this type of model, there are several user-level threads and several kernel-level threads.
 The number of kernel threads created depends upon the particular application.
 The developer can create as many threads as needed at both levels, and the numbers need
not be the same.
 The many-to-many model is a compromise between the other two models.
 In this model, if any thread makes a blocking system call, the kernel can schedule another
thread for execution.
 Also, the complexity introduced by multiple threads in the previous models is not
present here.
 Though this model allows the creation of multiple kernel threads, true concurrency
cannot be achieved by this model, because the kernel can schedule only one process at a time.
In the above figure, the many-to-many model associates several user-level threads with the
same or a smaller number of kernel-level threads.

Threading Issues in OS

There are several threading issues when we are in a multithreading environment. In this section,
we will discuss the threading issues with system calls, cancellation of thread, signal handling,
thread pool and thread-specific data.

1. System Calls
2. Thread Cancellation
3. Signal Handling
4. Thread Pool
5. Thread Specific Data

The fork() and exec() System Calls

 fork() and exec() are system calls.

 The fork() call creates a duplicate of the process that invokes it. The new
duplicate process is called the child process, and the process invoking fork() is called the
parent process.

 Both the parent process and the child process continue their execution from the
instruction that is just after the fork().

fork() can either duplicate all the threads of the parent process in the child process, or
duplicate only the thread of the parent process that invoked it.

Which version of fork() is used depends entirely upon the application.

 The exec() system call, when invoked, replaces the program (along with all
its threads) with the program specified in the parameter to exec().

 Typically, the exec() system call is invoked after a fork() system call.

The issue here is that if exec() is invoked immediately after fork(), then duplicating all the
threads of the parent process in the child via fork() is useless, since exec() will replace the
entire process with the program supplied as its parameter.

In such a case, the version of fork() that duplicates only the thread that invoked fork() is
appropriate.
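A minimal C sketch of the typical fork()-then-exec() pattern discussed above (the program
/bin/ls is only an illustrative choice): the child replaces itself with a new program, which is
why any extra threads duplicated into it would have been wasted effort.

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        /* child: replace this process image with a new program */
        execl("/bin/ls", "ls", "-l", (char *)NULL);
        perror("execl");          /* reached only if exec fails */
        return 1;
    }
    wait(NULL);                   /* parent waits for the new program to finish */
    printf("child %d finished\n", pid);
    return 0;
}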

Thread cancellation

 Terminating a thread in the middle of its execution is termed 'thread
cancellation'.

 Let us understand this with the help of an example. Consider a multithreaded
program that has multiple threads searching through a database for some
information.

 If one of the threads returns with the desired result, the remaining threads are
cancelled.

The thread we want to cancel is termed the target thread. Thread cancellation can be
performed in two ways:

Asynchronous Cancellation: In asynchronous cancellation, one thread terminates
the target thread instantly.

Deferred Cancellation: In deferred cancellation, the target thread checks itself at
regular intervals to determine whether it should terminate.
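With POSIX threads, for example, deferred cancellation is the default mode: pthread_cancel()
only marks the target thread, which actually terminates when it next reaches a cancellation
point, such as an explicit pthread_testcancel() call. A minimal sketch:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *searcher(void *arg) {
    for (;;) {
        /* ... search a chunk of the database here ... */
        pthread_testcancel();     /* cancellation point: terminate here if cancelled */
    }
    return NULL;
}

int main(void) {
    pthread_t target;
    pthread_create(&target, NULL, searcher, NULL);
    sleep(1);                     /* let the search run for a while */
    pthread_cancel(target);       /* request deferred cancellation of the target thread */
    pthread_join(target, NULL);   /* wait until it has actually terminated */
    puts("target thread cancelled");
    return 0;
}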
Signal Handling

 Signal handling is more straightforward in a single-threaded program, as the signal is
delivered directly to the process. In a multithreaded program, however, the issue
arises of which thread of the program the signal should be delivered to.

 How the signal is delivered to a thread depends on the type of signal generated.
Generated signals are classified into two types: synchronous signals and
asynchronous signals.

 Synchronous signals are delivered to the same process whose execution caused the
signal.
 Asynchronous signals are generated by events external to the running process; the
running process thus receives them asynchronously.

Thread Pool

 When a user requests for a webpage to the server, the server creates a separate thread to
service the request.

 However, this approach has some potential issues.

 If we place no bound on the number of active threads in the system and
create a new thread for every request, we will eventually exhaust
system resources.

 We are also concerned about the time it takes to create a new thread.

 It must not be the case that the time required to create a new thread exceeds the time
the thread spends servicing the request before being discarded, as that would waste
CPU time.

 The solution to this issue is the thread pool. The idea is to create a finite number of
threads when the process starts. This collection of threads is referred to as the thread
pool. The threads stay in the pool and wait until they are assigned a request to be
serviced.

 Whenever a request arrives at the server, a thread from the pool is assigned to
service it.

 The thread completes its service and returns to the pool to wait for the next request.

 If the server receives a request and no thread in the pool is free, it waits
for some thread to become free and return to the pool.
 This is much better than creating a new thread each time a request arrives, and it is
convenient for systems that cannot handle a large number of concurrent threads.

Thread Specific data

We are all aware that threads belonging to the same process share the data of that process.
The issue here is: what if each thread of the process needs its own copy of some data?
The data associated with a specific thread is referred to as thread-specific data.
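POSIX threads, for instance, expose thread-specific data through keys: every thread stores and
retrieves its own value under the same key. A minimal sketch (the worker logic is illustrative):

#include <pthread.h>
#include <stdio.h>

static pthread_key_t key;          /* one key, but a private value per thread */

static void *worker(void *arg) {
    int id = *(int *)arg;
    pthread_setspecific(key, arg); /* store this thread's own copy of the data */
    /* ... later, possibly deep inside other functions ... */
    int *mine = pthread_getspecific(key);
    printf("thread %d sees its own value: %d\n", id, *mine);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int a = 1, b = 2;
    pthread_key_create(&key, NULL);        /* no destructor needed here */
    pthread_create(&t1, NULL, worker, &a);
    pthread_create(&t2, NULL, worker, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_key_delete(key);
    return 0;
}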

CPU Scheduling in Operating System:

CPU-I/O Burst Cycle

Process execution consists of a cycle of CPU execution and I/O wait. Processes alternate
between these two states. Process execution begins with a CPU burst. That is followed by an I/O
burst, then another CPU burst, then another I/O burst, and so on. Eventually, the last CPU burst
ends with a system request to terminate execution, rather than with another I/O burst.
(Figure: an alternating sequence of CPU bursts, e.g., load/store/add/increment-index
instructions, and I/O bursts, e.g., reads from and writes to a file.)
 CPU scheduling is a process that allows one process to use the CPU while the execution
of another process is on hold (in the waiting state) due to the unavailability of some
resource (such as I/O), thereby making full use of the CPU.
 The aim of CPU scheduling is to make the system efficient, fast, and fair.
 Whenever the CPU becomes idle, the operating system must select one of the processes
in the ready queue to be executed. The selection process is carried out by the short-term
scheduler (or CPU scheduler).
 The scheduler selects from among the processes in memory that are ready to execute and
allocates the CPU to one of them.

CPU Scheduling: Dispatcher

 Another component involved in the CPU scheduling function is the Dispatcher.


 The dispatcher is the module that gives control of the CPU to the process selected by
the short-term scheduler.

This function involves:

 Switching context
 Switching to user mode
 Jumping to the proper location in the user program to restart that program from where it
left off last time.

Types of CPU Scheduling

CPU scheduling decisions may take place under the following four circumstances:

1. When a process switches from the running state to the waiting state (for an I/O request, or
an invocation of wait for the termination of one of the child processes).
2. When a process switches from the running state to the ready state (for example, when an
interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example, on
completion of I/O).
4. When a process terminates.

CPU Scheduling: Scheduling Criteria

There are many different criteria to check when considering the "best" scheduling algorithm;
they are:

CPU Utilization

To make the best use of the CPU and not waste any CPU cycles, the CPU should be
working most of the time (ideally 100% of the time). In a real system, CPU usage
should range from 40% (lightly loaded) to 90% (heavily loaded).

Throughput
It is the total number of processes completed per unit of time or, equivalently, the total amount of
work done in a unit of time. This may range from 10 per second to 1 per hour, depending on the
specific processes.

Turnaround Time

It is the amount of time taken to execute a particular process, i.e., the interval from the time of
submission of the process to the time of its completion (wall clock time).

Waiting Time

The sum of the periods a process spends waiting in the ready queue to acquire control of
the CPU.

Load Average

It is the average number of processes residing in the ready queue waiting for their turn to get into
the CPU.

Response Time

The amount of time it takes from when a request is submitted until the first response is produced.
Remember, it is the time until the first response, not the completion of process execution (the final
response).

In general, CPU utilization and throughput are maximized, while the other factors are minimized,
for proper optimization.

Scheduling Algorithms:

There are six basic scheduling algorithms:

1. FCFS(First Come First Serve) Scheduling

2. SJF(Shortest Job First) Scheduling

3. Round Robin Scheduling

4. Priority Scheduling

5. Multilevel Queue Scheduling


6. Multilevel Feedback Queue Scheduling

First Come First Serve Scheduling

 In the "First come first serve" scheduling algorithm, as the name suggests,
the process which arrives first, gets executed first, or we can say that the process which
requests the CPU first, gets the CPU allocated first. First Come First Serve, is just
like FIFO(First in First out) Queue data structure, where the data element which is added
to the queue first, is the one who leaves the queue first.

 This is used in Batch Systems.

 It's easy to understand and implement programmatically, using a Queue data structure,
where a new process enters through the tail of the queue, and the scheduler selects
process from the head of the queue.

 A perfect real life example of FCFS scheduling is buying tickets at ticket counter.

Consider four processes P1, P2, P3, and P4, all arriving at time 0, with burst times of 21 ms,
3 ms, 6 ms, and 2 ms respectively. For these processes, P1 will be given the CPU first:

 Hence, the waiting time for P1 will be 0.

 P1 requires 21 ms to complete, hence the waiting time for P2 will be 21 ms.

 Similarly, the waiting time for P3 will be the execution time of P1 plus the execution time
of P2, which is (21 + 3) ms = 24 ms.

 For process P4 it will be the sum of the execution times of P1, P2, and P3, i.e.,
(21 + 3 + 6) ms = 30 ms.

The average waiting time is therefore (0 + 21 + 24 + 30) / 4 = 18.75 ms. The GANTT chart
above represents the waiting time for each process.
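The FCFS waiting-time calculation is easy to express in code. A small C sketch, using the burst
times from the example (all arrivals at time 0):

#include <stdio.h>

int main(void) {
    int burst[] = {21, 3, 6, 2};                  /* P1..P4, all arriving at t=0 */
    int n = 4, wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d: waiting time = %d ms\n", i + 1, wait);
        total_wait += wait;
        wait += burst[i];       /* the next process waits for everyone before it */
    }
    printf("average waiting time = %.2f ms\n", (double)total_wait / n);
    return 0;
}

Running this prints waiting times 0, 21, 24, and 30 ms, and an average of 18.75 ms, matching
the example.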

Problems with FCFS Scheduling

1. It is a non-pre-emptive algorithm, which means process priority doesn't matter.

2. The average waiting time is not optimal.

3. Resources cannot be utilized in parallel, which leads to the Convoy Effect and hence
poor resource (CPU, I/O, etc.) utilization.

Shortest Job First(SJF) Scheduling

Shortest Job First scheduling executes the process with the shortest burst time or duration first.

 This is the best approach to minimize waiting time

 It is of two types:

o Non Pre-emptive

o Pre-emptive

Non Pre-emptive Shortest Job First

Consider the processes below, available in the ready queue for execution, all with arrival
time 0 and the given burst times.
As you can see in the GANTT chart above, the process P4 will be picked up first as it has the
shortest burst time, then P2, followed by P3 and at last P1.

Problem with Non Pre-emptive SJF

If the arrival times of the processes are different, meaning not all processes are available in
the ready queue at time 0 and some jobs arrive later, then a process with a short burst time may
have to wait for the current process's execution to finish. In non-pre-emptive SJF, the arrival
of a process with a short duration does not halt the currently executing job.

This leads to the problem of Starvation, where a shorter process has to wait for a long time
until the current longer process finishes. This happens if shorter jobs keep coming,
but it can be solved using the concept of aging.

Pre-emptive Shortest Job First

In Preemptive Shortest Job First scheduling, jobs are put into the ready queue as they arrive, but
when a process with a shorter burst time arrives, the existing process is preempted (removed from
execution) and the shorter job is executed first.
Example of Preemptive SJF Scheduling: In the following example, we have 4 processes with
process ID P1, P2, P3, and P4. The arrival time and burst time of the processes are given in the
following table.

Process Burst time Arrival time Completion time Turnaround time Waiting time

P1 18 0 31 31 13

P2 4 1 5 4 0

P3 7 2 14 12 5

P4 2 3 7 4 2

The waiting time and turnaround time are calculated with the help of the following formula.
Waiting Time = Turnaround time – Burst Time
Turnaround Time = Completion time – Arrival time
Process waiting time:
P1=31-18=13
P2=4-4=0
P3=12-7=5
P4=4-2=2
Average waiting time = (13 + 0 + 5 + 2) / 4 = 20 / 4 = 5
Process Turnaround Time:
P1=31-0=31
P2=5-1=4
P3=14-2=12
P4=7-3=4
Average turnaround time = (31 + 4 + 12 + 4) / 4 = 51 / 4 = 12.75
The GANTT chart of preemptive shortest job first scheduling is:
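The schedule above can be reproduced with a simple tick-by-tick simulation of pre-emptive SJF
(shortest remaining time first). A C sketch using the arrival and burst values from the table
(ties are broken in favor of the lower process index, which matches the table):

#include <stdio.h>

int main(void) {
    int burst[]   = {18, 4, 7, 2};     /* P1..P4 */
    int arrival[] = {0, 1, 2, 3};
    int remain[]  = {18, 4, 7, 2};
    int n = 4, done = 0, t = 0;
    int completion[4];

    while (done < n) {
        int cur = -1;
        for (int i = 0; i < n; i++)    /* pick the arrived process with least remaining time */
            if (arrival[i] <= t && remain[i] > 0 &&
                (cur == -1 || remain[i] < remain[cur]))
                cur = i;
        remain[cur]--;                 /* run it for one time unit */
        t++;
        if (remain[cur] == 0) { completion[cur] = t; done++; }
    }
    for (int i = 0; i < n; i++) {
        int tat = completion[i] - arrival[i];
        printf("P%d: completion=%d turnaround=%d waiting=%d\n",
               i + 1, completion[i], tat, tat - burst[i]);
    }
    return 0;
}

This prints completion times 31, 5, 14, and 7, and the same turnaround and waiting times as
the table above.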
Round Robin Scheduling

Round Robin(RR) scheduling algorithm is mainly designed for time-sharing systems. This
algorithm is similar to FCFS scheduling, but in Round Robin(RR) scheduling, preemption is
added which enables the system to switch between processes.

 A fixed time is allotted to each process, called a quantum, for execution.


 Once a process is executed for the given time period that process is preempted and
another process executes for the given time period.
 Context switching is used to save states of preempted processes.
 This algorithm is simple and easy to implement and the most important is thing is this
algorithm is starvation-free as all processes get a fair share of CPU.
 It is important to note here that the length of time quantum is generally from 10 to 100
milliseconds in length.

Some important characteristics of the Round Robin(RR) Algorithm are as follows:

1. Round Robin Scheduling algorithm resides under the category of Preemptive Algorithms.
2. This algorithm is one of the oldest, easiest, and fairest algorithms.
3. This algorithm is a real-time algorithm because it responds to events within a specific
time limit.
4. In this algorithm, the time slice should be the minimum required for a specific task
to be processed, though it may vary across operating systems.
5. This is a hybrid model and is clock-driven in nature.
6. This is a widely used scheduling method in the traditional operating system.


Let us now cover an example for the same:

In the above diagram, arrival time is not mentioned, so it is taken as 0 for all
processes.

Note: If the arrival time is not given in a problem statement, it is taken as 0 for all
processes; if it is given, the problem is solved accordingly.

Explanation

The value of the time quantum in the above example is 5. Let us now calculate the turnaround
time and waiting time for the above example:

Turnaround Time = Completion Time – Arrival Time
Waiting Time = Turnaround Time – Burst Time

Process  Burst time  Turnaround time  Waiting time

P1  21  32 – 0 = 32  32 – 21 = 11

P2  3  8 – 0 = 8  8 – 3 = 5

P3  6  21 – 0 = 21  21 – 6 = 15

P4  2  15 – 0 = 15  15 – 2 = 13

The average waiting time is calculated by adding the waiting times of all processes and then
dividing by the number of processes.

average waiting time = (sum of waiting times of all processes) / (number of processes)

average waiting time = (11 + 5 + 15 + 13) / 4 = 44 / 4 = 11 ms
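The round robin schedule above (quantum 5, all arrivals at time 0) can likewise be checked with
a short C simulation; because every process arrives at time 0, a simple fixed rotation over the
processes reproduces the ready-queue order:

#include <stdio.h>

int main(void) {
    int burst[]  = {21, 3, 6, 2};      /* P1..P4, all arriving at t=0 */
    int remain[] = {21, 3, 6, 2};
    int n = 4, quantum = 5, t = 0, done = 0;
    int completion[4];

    while (done < n) {
        for (int i = 0; i < n; i++) {  /* cycle through the ready queue */
            if (remain[i] == 0) continue;
            int slice = remain[i] < quantum ? remain[i] : quantum;
            t += slice;                /* run process i for one time slice */
            remain[i] -= slice;
            if (remain[i] == 0) { completion[i] = t; done++; }
        }
    }
    for (int i = 0; i < n; i++)        /* turnaround = completion, since arrival is 0 */
        printf("P%d: completion=%d waiting=%d\n",
               i + 1, completion[i], completion[i] - burst[i]);
    return 0;
}

This prints completion times 32, 8, 21, and 15, and waiting times 11, 5, 15, and 13, matching
the table above.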

Priority CPU Scheduling

In priority scheduling, the priority is not always set as the inverse of the CPU burst time;
it can be set internally or externally. Scheduling is done on the basis of the priority of the
process: the most urgent process is processed first, followed by the ones with lower priority
in order.

Processes with same priority are executed in FCFS manner.

The priority of a process, when internally defined, can be decided based on memory
requirements, time limits, number of open files, ratio of I/O burst to CPU burst, etc.

Types of Priority Scheduling Algorithm

Priority scheduling can be of two types:


1. Preemptive Priority Scheduling: If a new process arriving at the ready queue has a
higher priority than the currently running process, the CPU is preempted, which means
the processing of the current process is stopped and the incoming process with higher
priority gets the CPU for its execution.

2. Non-Preemptive Priority Scheduling: In the non-preemptive priority scheduling
algorithm, if a new process arrives with a higher priority than the currently running process,
the incoming process is put at the head of the ready queue, which means it will be
processed after the execution of the current process.
Example of Priority Scheduling Algorithm

Consider the below table of processes with their respective CPU burst times and priorities.

As you can see in the GANTT chart, the processes are given CPU time purely on the basis of
their priorities.

Preemptive Priority Scheduling:

Consider the set of 5 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time Priority

P1 0 4 2

P2 1 3 3
P3 2 1 4

P4 3 5 5

P5 4 2 5

If the CPU scheduling policy is priority preemptive, calculate the average waiting time and
average turn around time. (Higher number represents higher priority)
Solution-

Gantt Chart-

Now, we know-
 Turn Around time = Exit time – Arrival time
 Waiting time = Turn Around time – Burst time

Process Id Exit time Turn Around time Waiting time

P1 15 15 – 0 = 15 15 – 4 = 11

P2 12 12 – 1 = 11 11 – 3 = 8

P3 3 3 – 2 = 1 1 – 1 = 0

P4 8 8 – 3 = 5 5 – 5 = 0

P5 10 10 – 4 = 6 6 – 2 = 4

 Average Turn Around time = (15 + 11 + 1 + 5 + 6) / 5 = 38 / 5 = 7.6 unit


 Average waiting time = (11 + 8 + 0 + 0 + 4) / 5 = 23 / 5 = 4.6 unit

Problem with Priority Scheduling Algorithm

In the priority scheduling algorithm, there is a chance of indefinite blocking, or starvation.

A process is considered blocked when it is ready to run but has to wait for the CPU as some
other process is running currently.

But in priority scheduling, if new higher-priority processes keep coming into the ready
queue, then the lower-priority processes waiting in the ready queue may have to wait for
long durations before getting the CPU for execution.

Using Aging Technique with Priority Scheduling

To prevent starvation of any process, we can use the concept of aging, where we keep
increasing the priority of a low-priority process based on its waiting time.
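A small C sketch of the aging idea (the boost interval, step, and cap here are illustrative
choices, not fixed values): each scheduling interval a waiting process sits through bumps its
effective priority, so it eventually outranks newly arriving high-priority work.

/* Aging sketch: periodically raise the priority of processes waiting in the
   ready queue. The interval (100 ticks), step (+1), and cap are illustrative. */
#define MAX_PRIORITY 10

struct pcb {
    int base_priority;       /* priority assigned at creation */
    int effective_priority;  /* what the scheduler actually compares */
    int waiting_ticks;       /* time spent waiting in the ready queue */
};

void age_ready_queue(struct pcb *ready[], int n) {
    for (int i = 0; i < n; i++) {
        ready[i]->waiting_ticks++;
        if (ready[i]->waiting_ticks % 100 == 0 &&      /* every 100 ticks... */
            ready[i]->effective_priority < MAX_PRIORITY)
            ready[i]->effective_priority++;            /* ...raise priority by one */
    }
}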

Multilevel Queue Scheduling Algorithm

A multi-level queue scheduling algorithm partitions the ready queue into several separate
queues. The processes are permanently assigned to one queue, generally based on some property
of the process, such as memory size, process priority, or process type. Each queue has its own
scheduling algorithm.
The Description of the processes in the above diagram is as follows:

 System Process: The operating system itself has its own processes to run, termed
system processes.
 Interactive Process: An interactive process is one that requires user interaction
(for example, an online game).
 Batch Process: Batch processing is a technique in which the operating system
collects programs and data together in a batch before
processing starts.
 Student Process: The system process always gets the highest priority, while the student
processes always get the lowest priority.

Multilevel Feedback Queue Scheduling

Multilevel feedback queue scheduling, however, allows a process to move between queues.
The idea is to separate processes with different CPU-burst characteristics. If a process uses
too much CPU time, it will be moved to a lower-priority queue. Similarly, a process that
waits too long in a lower-priority queue may be moved to a higher-priority queue. This form
of aging prevents starvation.

In general, a multilevel feedback queue scheduler is defined by the following parameters:

 The number of queues.


 The scheduling algorithm for each queue.
 The method used to determine when to upgrade a process to a higher-priority queue.
 The method used to determine when to demote a process to a lower-priority queue.
 The method used to determine which queue a process will enter when that process needs
service.
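These defining parameters can be summarized in a small C configuration sketch (the names and
fields are illustrative, not taken from any particular kernel):

/* Illustrative MLFQ configuration: one field per parameter listed above. */
enum algo { RR, FCFS };                 /* per-queue scheduling algorithm */

struct mlfq_config {
    int        num_queues;              /* the number of queues */
    enum algo  queue_algo[8];           /* scheduling algorithm for each queue */
    int        quantum[8];              /* RR time quantum per queue, if applicable */
    int        demote_after_full_slice; /* when to move a process to a lower-priority queue */
    int        promote_after_wait;      /* aging: when to move it to a higher-priority queue */
    int        entry_queue;             /* which queue a new process enters */
};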

Advantages of MFQS

 This is a flexible scheduling algorithm.
 It allows different processes to move between different queues.
 A process that waits too long in a lower-priority queue may be moved to a
higher-priority queue, which helps prevent starvation.

Disadvantages of MFQS

 This algorithm is too complex.
 Moving processes around different queues produces more CPU overhead.
 Selecting the best values for all the parameters requires some other means, so defining
the best scheduler is difficult.
