OS-Module2.2-Process Scheduling
Basic Concepts
Scheduling Criteria
Scheduling Algorithms
Thread Scheduling
Multiple-Processor Scheduling
To introduce CPU scheduling, which is the basis for
multiprogrammed operating systems.
Objective of multiprogramming:
To have some process running at all times, in order to
maximize CPU utilization.
Basic Concepts
Scheduling of processes/work is done to finish the work on time.
Whenever the CPU becomes idle, the operating system must select one of
the processes in the ready queue to be executed.
CPU scheduling allows one process to use the CPU while another process is
delayed (kept waiting) because some resource, such as I/O, is unavailable,
thus making full use of the CPU.
Why do we need to schedule processes?
•This task is handled by the Operating System (OS) of the computer, and there
are many different ways in which process scheduling can be configured.
•Process Scheduling allows the OS to allocate CPU time for each process.
During resource allocation, a process switches from the running state to the
ready state, or from the waiting state to the ready state.
This switching occurs because the CPU may give priority to other processes and
replace the currently running process with a higher-priority one.
Scheduling Criteria
CPU utilization – keep the CPU as busy as possible.
Theoretically, CPU utilization can range from 0 to 100 percent, but in a real
system it varies from 40 to 90 percent depending on the system load.
Waiting time – amount of time a process has been waiting in the ready
queue.
• There should be a minimum waiting time and the process should not
starve in the ready queue.
• Arrival Time: Time at which the process arrives in the ready queue.
• Turn Around Time: Time Difference between completion time and arrival time.
• Waiting Time(W.T): Time Difference between turn around time and burst time.
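As a quick illustration of these formulas, here is a minimal Python sketch that computes turnaround and waiting time from completion, arrival, and burst time. The process values below are made up for the example, not taken from the slides.

# Scheduling-criteria formulas:
#   Turnaround Time = Completion Time - Arrival Time
#   Waiting Time    = Turnaround Time - Burst Time
processes = [
    # (name, arrival_time, burst_time, completion_time) -- illustrative values
    ("P1", 0, 5, 5),
    ("P2", 1, 3, 8),
]

for name, arrival, burst, completion in processes:
    turnaround = completion - arrival
    waiting = turnaround - burst
    print(f"{name}: turnaround = {turnaround}, waiting = {waiting}")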
The first-come, first-served scheduling algorithm states that the process that
requests the CPU first is allocated the CPU first; it is implemented using a FIFO queue.
Characteristics of FCFS:
• This algorithm is not very efficient in performance, and the waiting time is quite
high.
First-Come, First-Served (FCFS) Scheduling
Advantages of FCFS:
• Easy to implement.
Disadvantages of FCFS:
• The average waiting time is much higher than with the other algorithms.
• Because FCFS is so simple, it is not very efficient.
First-Come, First-Served (FCFS) Scheduling
If the processes arrive in the order P1, P2, P3 (with burst times 24, 3, and 3), the Gantt chart is:

| P1 | P2 | P3 |
0         24   27   30

If the processes arrive in the order P2, P3, P1, the Gantt chart is:

| P2 | P3 | P1 |
0    3    6         30
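A minimal FCFS sketch in Python, assuming all three processes are available at time 0 with burst times 24, 3, and 3 (as implied by the Gantt charts above). Changing the arrival order reproduces both charts.

# FCFS: processes are served strictly in the order they join the queue.
def fcfs(order):
    """order: list of (name, burst_time) in the order the processes arrive."""
    time, schedule = 0, []
    for name, burst in order:
        schedule.append((name, time, time + burst))  # (process, start, end)
        time += burst
    return schedule

# Order P1, P2, P3 -> boundaries 0, 24, 27, 30
print(fcfs([("P1", 24), ("P2", 3), ("P3", 3)]))
# Order P2, P3, P1 -> boundaries 0, 3, 6, 30
print(fcfs([("P2", 3), ("P3", 3), ("P1", 24)]))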
Example: processes with the following arrival and burst times.

Process   Arrival time   Burst time
P1        0              4
P2        1              3
P3        2              1
P4        3              2
P5        4              5
Shortest Job First (SJF) Scheduling
Shortest Job First (SJF) is an algorithm in which the process having the
smallest execution time is chosen for the next execution.
It significantly reduces the average waiting time for other processes awaiting
execution.
Characteristics of SJF Scheduling
•This algorithm method is helpful for batch-type processing, where waiting for
jobs to complete is not critical.
•It can improve process throughput by making sure that shorter jobs are
executed first, hence possibly having a short turnaround time.
• Non-Preemptive SJF
• Preemptive SJF
Non-Preemptive SJF
Consider the following five processes each having its own unique burst time and
arrival time.
Process   Burst time   Arrival time
P1        6            2
P2        2            5
P3        8            1
P4        3            0
P5        4            4
Step 0) At time=0, P4 arrives and starts execution.
Step 3) At time = 3, process P4 will finish its execution. The burst time of P3 and P1
is compared. Process P1 is executed because its burst time is less compared to P3.
Step 4) At time = 4, process P5 arrives and is added to the waiting queue. P1 will
continue execution.
Step 11) Let’s calculate the average waiting time for the above example.

Wait time:
P4 = 0 - 0 = 0
P1 = 3 - 2 = 1
P2 = 9 - 5 = 4
P5 = 11 - 4 = 7
P3 = 15 - 1 = 14

Average Waiting Time = (0 + 1 + 4 + 7 + 14) / 5 = 26 / 5 = 5.2
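A minimal non-preemptive SJF sketch in Python using the five processes above; it reproduces the average waiting time of 5.2.

# Non-preemptive SJF: whenever the CPU is free, pick the arrived process
# with the smallest burst time and run it to completion.
def sjf_nonpreemptive(procs):
    """procs: list of (name, burst_time, arrival_time)."""
    remaining = sorted(procs, key=lambda p: p[2])   # order by arrival
    time, waits = 0, {}
    while remaining:
        ready = [p for p in remaining if p[2] <= time]
        if not ready:                               # CPU idle until next arrival
            time = min(p[2] for p in remaining)
            continue
        name, burst, arrival = min(ready, key=lambda p: p[1])
        waits[name] = time - arrival                # waiting time = start - arrival
        time += burst
        remaining.remove((name, burst, arrival))
    return waits

waits = sjf_nonpreemptive([("P1", 6, 2), ("P2", 2, 5), ("P3", 8, 1),
                           ("P4", 3, 0), ("P5", 4, 4)])
print(waits, "average =", sum(waits.values()) / len(waits))   # average = 5.2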
Preemptive SJF
In Preemptive SJF Scheduling, jobs are put into the ready queue as they come.
If a process with even a shorter burst time arrives, the current process is
removed or preempted from execution, and the shorter job is allocated CPU
cycle.
Step 3) At time = 3, process P4 will finish its execution. The burst time of P3 and P1
is compared. Process P1 is executed because its burst time is less compared to P3.
Step 4) At time = 4, process P5 will arrive. The burst time of P3, P5, and P1 is
compared. Process P5 is executed because its burst time is lowest. Process P1
is preempted.
Step 7) At time =7, P2 finishes its execution. The burst time of P1, P3, and P5 is
compared. Process P5 is executed because its burst time is lesser.
Process Queue   Burst time                  Arrival time
P1              6 (5 out of 6 remaining)    2
P2              2                           5
P3              8                           1
P4              3                           0
P5              4 (3 out of 4 remaining)    4
Step 8) At time =10, P5 will finish its execution. The burst time of P1 and P3 is
compared. Process P1 is executed because its burst time is less.
Step 9) At time =15, P1 finishes its execution. P3 is the only process left. It will
start execution.
• It reduces the average waiting time over FIFO (First in First Out) algorithm.
• SJF method gives the lowest average waiting time for a specific set of
processes.
• It is appropriate for the jobs running in batch, where run times are known in
advance.
•For the batch system of long-term scheduling, a burst time estimate can be
obtained from the job description.
•For Short-Term Scheduling, we need to predict the value of the next burst time.
• SJF cannot be implemented exactly for short-term CPU scheduling, because there
is no precise way to know the length of the upcoming CPU burst; in practice the
next burst must be predicted (a small sketch of one common prediction approach
follows below).
• It can lead to starvation of longer processes, which hurts the average turnaround time.
• Elapsed time must be recorded, which results in more overhead on the processor.
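The slides do not say how the next CPU burst is predicted. One common approach (an assumption here, not taken from the slides) is exponential averaging of the previous bursts; a minimal sketch:

# Exponential averaging:
#   tau_next = alpha * last_actual_burst + (1 - alpha) * previous_prediction
# alpha (0 < alpha <= 1) controls how much weight recent history gets.
def predict_next_burst(prev_prediction, last_burst, alpha=0.5):
    return alpha * last_burst + (1 - alpha) * prev_prediction

# Example: initial guess 10, observed bursts 6, 4, 6, 4
tau = 10
for burst in [6, 4, 6, 4]:
    tau = predict_next_burst(tau, burst)
    print("next predicted burst:", tau)   # 8.0, 6.0, 6.0, 5.0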
We have 4 processes with process IDs P0, P1, P2, and P3. The arrival time and
burst time of the processes are given in the following table.
Process   Burst time   Arrival time
P0        8            5
P1        5            0
P2        9            4
P3        2            1
Non-Preemptive SJF Scheduling:
Process   Burst time   Arrival time   Completion time   Waiting time   Turnaround time
P0        8            5              21                8              16
P1        5            0              5                 0              5
P2        9            4              16                3              12
P3        2            1              7                 4              6
The waiting time and turnaround time are calculated with the help of the
following formulas:
Turnaround Time = Completion Time - Arrival Time
Waiting Time = Turnaround Time - Burst Time
We have 4 processes with process IDs P1, P2, P3, and P4. The arrival time and
burst time of the processes are given in the following table.
Process   Burst time   Arrival time
P1        18           0
P2        4            1
P3        7            2
P4        2            3
Preemptive SJF Scheduling
The completion time, turnaround time, and waiting time of each process under
preemptive SJF are shown below.
Process   Burst time   Arrival time   Completion time   Turnaround time   Waiting time
P1        18           0              31                31                13
P2        4            1              5                 4                 0
P3        7            2              14                12                5
P4        2            3              7                 4                 2
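A minimal preemptive SJF (shortest-remaining-time-first) sketch in Python, stepping one time unit at a time over the four processes in the table above; it reproduces the completion, turnaround, and waiting times shown.

# Preemptive SJF: at every time unit the arrived process with the smallest
# remaining burst gets the CPU.
def srtf(procs):
    """procs: dict of name -> (burst_time, arrival_time)."""
    remaining = {n: b for n, (b, a) in procs.items()}
    completion, time = {}, 0
    while remaining:
        ready = {n: r for n, r in remaining.items() if procs[n][1] <= time}
        if not ready:                         # CPU idle until next arrival
            time += 1
            continue
        current = min(ready, key=ready.get)   # smallest remaining burst
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            completion[current] = time
            del remaining[current]
    return completion

procs = {"P1": (18, 0), "P2": (4, 1), "P3": (7, 2), "P4": (2, 3)}
for name, ct in srtf(procs).items():
    burst, arrival = procs[name]
    tat = ct - arrival
    print(name, "completion =", ct, "turnaround =", tat, "waiting =", tat - burst)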
Priority Scheduling
In this algorithm, the scheduler selects tasks to execute according to their priority.
Processes with higher priority are carried out first, whereas jobs with equal
priority are carried out on a round-robin or FCFS basis.
Preemptive Scheduling
In Preemptive Scheduling, tasks are usually assigned priorities.
Sometimes it is important to run a task with a higher priority before another lower
priority task, even if the lower priority task is still running. The lower priority task
holds for some time and resumes when the higher priority task finishes its
execution.
Non-Preemptive Scheduling
In this type of scheduling method, the CPU has been allocated to a specific
process. The process that keeps the CPU busy, will release the CPU either by
switching context or terminating. It is the only method that can be used for
various hardware platforms. That’s because it doesn’t need special hardware (for
example, a timer) like preemptive scheduling.
Characteristics of Priority Scheduling
•If two jobs having the same priority are READY, it works on a FIRST COME,
FIRST SERVED basis.
•In priority scheduling, a number is assigned to each process that indicates its
priority level.
•In this type of scheduling algorithm, if a newer process arrives, that is having a
higher priority than the currently running process, then the currently running
process is preempted.
Example of Priority Scheduling
Consider the following five processes P1 to P5. Each process has its own
priority, burst time, and arrival time.

Process   Priority   Burst time   Arrival time
P1        1          4            0
P2        2          3            0
P3        1          7            6
P4        3          4            11
P5        2          2            12
Step 0) At time=0, Process P1 and P2 arrive. P1 has higher priority than P2.
The execution begins with process P1, which has burst time 4.
Step 3) At time 3, no new process arrives, so execution continues with P1. P2 is
still waiting in the queue.
Step 4) At time 4, P1 has finished its execution. P2 starts execution.
Step 10) At time interval 10, no new process comes, so we continue with
P3
Step 11) At time=11, P4 arrives with priority 3. P3 has higher priority, so it
continues its execution.
Process   Priority   Burst time                Arrival time
P1        1          4                         0
P2        2          3 (1 out of 3 pending)    0
P3        1          7 (2 out of 7 pending)    6
P4        3          4                         11
P5        2          2                         12
Step 12) At time=12, P5 arrives. P3 has higher priority, so it continues
execution.
Step 17) At time =20, P4 has completed execution and no process is left.
Step 18) Let’s calculate the average waiting time for the above example.
Waiting Time = start time - arrival time + wait time for next burst
P1 = 0 - 0 = 0
P2 = 4 - 0 + 7 = 11
P3 = 6 - 6 = 0
P4 = 16 - 11 = 5
P5 = 14 - 12 = 2
Average Waiting Time = (0 + 11 + 0 + 5 + 2) / 5 = 18 / 5 = 3.6
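A minimal preemptive priority scheduling sketch in Python for the five processes above, assuming (as in the walkthrough) that a lower priority number means higher priority and that ties are broken in FCFS order. It reproduces the completion times of the example (the last process finishes at time 20).

# Preemptive priority scheduling: every time unit the CPU goes to the arrived
# process with the highest priority (lowest number); ties are broken FCFS.
def priority_preemptive(procs):
    """procs: list of (name, priority, burst_time, arrival_time)."""
    remaining = {name: burst for name, _, burst, _ in procs}
    completion, time = {}, 0
    while remaining:
        ready = [(prio, arrival, name)
                 for name, prio, burst, arrival in procs
                 if name in remaining and arrival <= time]
        if not ready:                         # CPU idle until next arrival
            time += 1
            continue
        _, _, current = min(ready)            # lowest priority number, then FCFS
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            completion[current] = time
            del remaining[current]
    return completion

procs = [("P1", 1, 4, 0), ("P2", 2, 3, 0), ("P3", 1, 7, 6),
         ("P4", 3, 4, 11), ("P5", 2, 2, 12)]
print(priority_preemptive(procs))
# -> {'P1': 4, 'P3': 13, 'P2': 14, 'P5': 16, 'P4': 20}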
• Processes are executed on the basis of priority, so a high-priority process does
not have to wait long, which saves time.
• This method provides a good mechanism in which the relative importance of each
process can be precisely defined.
• If high-priority processes take a lot of CPU time, then lower-priority processes
may starve and be postponed indefinitely.
• This scheduling algorithm may leave some low-priority processes waiting indefinitely.
• If higher-priority processes keep arriving in the ready queue, then a process in
the waiting state may need to wait for a long time.
Round Robin Scheduling
• The name of this algorithm comes from the round-robin principle, where each
person gets an equal share of something in turns.
• In Round-robin scheduling, each ready task runs turn by turn only in a cyclic
queue for a limited time slice.
• The CPU is shifted to the next process after fixed interval time, which is called
time quantum/time slice.
• The time slice assigned to each task should be kept small; however, it may
differ from OS to OS.
• It is a real-time algorithm in the sense that it responds to an event within a
specific time limit.
According to the algorithm, we have to maintain the ready queue and the Gantt
chart.
The structure of both the data structures will be changed after every scheduling.
Ready Queue:
Initially, at time 0, process P1 arrives which will be scheduled for the time slice 4
units. Hence in the ready queue, there will be only one process P1 at starting with
CPU burst time 5 units.
P1(5)    (the number in parentheses is the process's remaining burst time)
GANTT chart
The P1 will be executed for 4 units first.
Ready Queue
While P1 is executing, four more processes P2, P3, P4, and P5 arrive in the
ready queue.
P1 has not completed yet; it needs another 1 unit of time, so it is also added
back to the ready queue.
P2(6)  P3(3)  P4(1)  P5(5)  P1(1)
GANTT chart
After P1, P2 will be executed for 4 units of time which is shown in the Gantt
chart.
Ready Queue
During the execution of P2, one more process, P6, arrives in the ready queue.
Since P2 has not completed yet, it is also added back to the ready queue with a
remaining burst time of 2 units.
P3(3)  P4(1)  P5(5)  P1(1)  P6(4)  P2(2)
GANTT chart
After P1 and P2, P3 is executed for 3 units of time, since its CPU burst time
is only 3 units.
Ready Queue
Since P3 has completed, it terminates and is not added back to the ready queue.
The next process to be executed is P4.
P4(1)  P5(5)  P1(1)  P6(4)  P2(2)
GANTT chart
After P1, P2, and P3, P4 is executed. Its burst time is only 1 unit, which is
less than the time quantum, so it completes.
Ready Queue
The next process in the ready queue is P5, with 5 units of burst time. Since P4
has completed, it is not added back to the queue.
P5(5)  P1(1)  P6(4)  P2(2)
GANTT chart
P5 is executed for the whole time slice because it requires 5 units of burst
time, which is more than the time slice.
Ready Queue
P5 has not completed yet; it is added back to the queue with a remaining burst
time of 1 unit.
P1(1)  P6(4)  P2(2)  P5(1)
GANTT Chart
Process P1 is given the next turn to complete its execution. Since it requires
only 1 unit of burst time, it completes.
Ready Queue
P1 has completed and is not added back to the ready queue. The next process, P6,
requires only 4 units of burst time and is executed next.
P6(4)  P2(2)  P5(1)
GANTT chart
P6 will be executed for 4 units of time till completion.
Ready Queue
Since P6 has completed, it is not added back to the queue. Only two processes
remain in the ready queue. The next process, P2, requires only 2 units of time.
P2(2)  P5(1)
GANTT Chart
P2 is executed again; since it requires only 2 units of time, it completes.
Ready Queue
Now the only process left in the queue is P5, which requires 1 unit of burst
time. Since the time slice is 4 units, it completes in its next turn.
P5(1)
GANTT chart
Results for the above example (time quantum = 4):

Process   Arrival time   Burst time   Completion time   Turnaround time   Waiting time
P1        0              5            17                17                12
P2        1              6            23                22                16
P3        2              3            11                9                 6
P4        3              1            12                9                 8
P5        4              5            24                20                15
P6        6              4            21                15                11
We have 4 processes with process IDs P1, P2, P3, and P4. The arrival time and
burst time of the processes are given in the following table. (The time quantum
is 6.)

Process   Arrival time   Burst time
P1        0              8
P2        1              5
P3        2              10
P4        3              11
Process   Arrival time   Burst time   Completion time   Turnaround time   Waiting time
P1        0              8            25                25                17
P2        1              5            11                10                5
P3        2              10           29                27                17
P4        3              11           34                31                20
Process Turnaround Time
P1 = 25 - 0 = 25
P2 = 11 - 1 = 10
P3 = 29 - 2 = 27
P4 = 34 - 3 = 31
Average Turnaround Time = (25 + 10 + 27 + 31) / 4 = 93 / 4 = 23.25
Process Waiting Time
P1 = 25 - 8 = 17
P2 = 10 - 5 = 5
P3 = 27 - 10 = 17
P4 = 31 - 11 = 20
Average Waiting Time = (17 + 5 + 17 + 20) / 4 = 59 / 4 = 14.75
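A minimal Round Robin sketch in Python for the second example above (time quantum 6); it reproduces the completion times 25, 11, 29, and 34.

from collections import deque

# Round Robin: each process runs for at most one time quantum, then goes to
# the back of the ready queue if it still has work left.
def round_robin(procs, quantum):
    """procs: list of (name, arrival_time, burst_time), sorted by arrival."""
    remaining = {name: burst for name, _, burst in procs}
    arrivals = deque(procs)
    queue, completion, time = deque(), {}, 0
    while arrivals or queue:
        # admit every process that has arrived by the current time
        while arrivals and arrivals[0][1] <= time:
            queue.append(arrivals.popleft()[0])
        if not queue:                      # CPU idle until the next arrival
            time = arrivals[0][1]
            continue
        name = queue.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        # processes that arrived during this slice join before the preempted one
        while arrivals and arrivals[0][1] <= time:
            queue.append(arrivals.popleft()[0])
        if remaining[name] == 0:
            completion[name] = time
        else:
            queue.append(name)
    return completion

procs = [("P1", 0, 8), ("P2", 1, 5), ("P3", 2, 10), ("P4", 3, 11)]
print(round_robin(procs, quantum=6))
# -> {'P2': 11, 'P1': 25, 'P3': 29, 'P4': 34}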
Multilevel Queue Scheduling
Each algorithm supports a different process, but in a general system, some
processes require scheduling using a priority algorithm.
• There are two sorts of processes that require different scheduling algorithms,
because they have different response-time and resource requirements.
• Processes are assigned to a queue based on properties of the process, such as
memory size, process priority, or process type.
• Some queues are utilized for the foreground process, while others are used
for the background process.
Advantages
• You can use multilevel queue scheduling to apply different scheduling methods
to distinct processes.
Disadvantages
• It is rigid in nature.
Example
Processes are commonly divided into the following queues, from highest to lowest priority:
1.System processes:
Processes belonging to the operating system itself; these are given the highest
priority.
2.Interactive processes:
Processes that interact directly with the user and therefore need fast responses.
3.Interactive editing processes:
Interactive processes, such as editors, that spend most of their time waiting for
user input.
4.Batch processes:
Batch processing is an operating system feature that collects programs and
data into a batch before processing starts.
5.Student processes:
The system processes are always given the highest priority, whereas the student
processes are always given the lowest.
Each queue has absolute priority over the lower-priority queues. No process in a
lower-priority queue may execute until all higher-priority queues are empty.
In the above instance, no batch or student process may execute unless the
queues for system, interactive, and interactive editing processes are all empty.
Consider the four processes listed in the table below under multilevel queue
scheduling. The queue number denotes the process's queue.
Process   Arrival time   Burst time   Queue number
P1        0              4            1
P2        0              3            1
P3        0              8            2
P4        10             5            1
1.Both queues have processes at the start. Therefore queue 1 (P1, P2) runs first
(because of its higher priority), in a round-robin way, and finishes after 7 units.
2.The process in queue 2 (process P3) then starts running (since there is no process
in queue 1), but while it is executing, P4 enters queue 1 and preempts P3. After
P4 completes, P3 takes the CPU again and finishes its execution.
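A simplified Python sketch of the multilevel queue idea above. It gives queue 1 absolute priority over queue 2 and lets a queue-1 arrival preempt a running queue-2 process; for brevity each queue is served FCFS here (the example uses round robin inside queue 1), so the key queue-level behaviour matches (queue 1 finishes at time 7, P4 preempts P3 at time 10, P3 completes at 20).

# Simplified multilevel queue: lowest queue number always wins the CPU,
# re-evaluated every time unit so an arrival in queue 1 preempts queue 2.
def multilevel_queue(procs):
    """procs: list of (name, arrival_time, burst_time, queue_number)."""
    remaining = {name: burst for name, _, burst, _ in procs}
    completion, time = {}, 0
    while remaining:
        ready = [(q, arrival, name) for name, arrival, burst, q in procs
                 if name in remaining and arrival <= time]
        if not ready:
            time += 1
            continue
        _, _, current = min(ready)        # lowest queue number, then FCFS
        remaining[current] -= 1           # run one time unit, then re-evaluate
        time += 1
        if remaining[current] == 0:
            completion[current] = time
            del remaining[current]
    return completion

procs = [("P1", 0, 4, 1), ("P2", 0, 3, 1), ("P3", 0, 8, 2), ("P4", 10, 5, 1)]
print(multilevel_queue(procs))
# -> {'P1': 4, 'P2': 7, 'P4': 15, 'P3': 20}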
Multilevel Feedback Scheduling
Each algorithm supports a different process, but some processes require
scheduling using a priority algorithm in a general system.
This strategy prioritizes operations that require I/O and are interactive.
If a process consumes too much processor time, it will be switched to the lowest
priority queue.
A process waiting in a lower priority queue for too long may be shifted to a higher
priority queue. This type of aging prevents starvation.
Multilevel Feedback Scheduling
A multilevel feedback queue scheduler is also defined by the method used to
determine which queue a process will enter when that process needs service.
Multilevel Feedback Scheduling
Example of MFQS:
Take three queues: Q1, Q2, and Q3, where Q1 has the highest priority.
If a process arrives for Q2, it preempts a process running in Q3. Similarly, a
process that arrives for Q1 preempts a process running in Q2.
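A minimal Python sketch of the demotion behaviour of a multilevel feedback queue, using the three queues named above. The quanta (4 for Q1, 8 for Q2, FCFS for Q3), the assumption that all jobs arrive at time 0, and the absence of aging are assumptions made for brevity, not values from the slides.

from collections import deque

# Multilevel feedback sketch: a job that uses up its whole quantum at one
# level is demoted to the next lower queue; the highest non-empty queue runs.
def mlfq(jobs, quanta=(4, 8, None)):        # None = run to completion (FCFS)
    """jobs: list of (name, burst_time), all assumed to arrive at time 0."""
    queues = [deque(jobs), deque(), deque()]     # Q1, Q2, Q3
    time, completion = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty
        name, burst = queues[level].popleft()
        quantum = quanta[level]
        run = burst if quantum is None else min(quantum, burst)
        time += run
        if run == burst:
            completion[name] = time
        else:                                    # quantum exhausted: demote
            queues[level + 1].append((name, burst - run))
    return completion

print(mlfq([("A", 3), ("B", 20), ("C", 6)]))
# -> {'A': 3, 'C': 21, 'B': 29}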
Advantages
• It prevents starvation by moving a process that has waited too long in a lower
priority queue to a higher priority queue.
Disadvantages
• It is the most complex scheme: selecting the best scheduler requires some
means of choosing values for all of its parameters.
Multiple-Processor Scheduling
Asymmetric Multiprocessing
• All scheduling decisions, I/O processing, and other system activities are
handled by a single processor; the other processors execute only user code.
• Advantage: This is simple because only one processor accesses the system
data structures, reducing the need for data sharing.
Symmetric Multiprocessing
• Each processor is self-scheduling.
Processor Affinity
• In SMP systems,
1. Migration of processes from one processor to another are avoided and
2. Instead processes are kept running on same processor. This is known as
processor affinity.
• Two forms:
Soft Affinity
When the OS tries to keep a process running on the same processor as a
matter of policy, but cannot guarantee that it will.
Hard Affinity
When the OS allows a process to specify that it must not
migrate to other processors. Eg: Solaris OS
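As a small illustration of hard affinity, here is a Linux-specific Python sketch (the slides cite Solaris; Linux exposes a similar facility through the sched_setaffinity system call, available as os.sched_setaffinity). It only runs on Linux.

import os

# Show which CPUs the calling process (pid 0 = self) is allowed to run on.
print("allowed CPUs before:", os.sched_getaffinity(0))

# Pin the calling process to CPU 0 only, so the scheduler will not migrate it.
os.sched_setaffinity(0, {0})
print("allowed CPUs after: ", os.sched_getaffinity(0))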
Multiple Processor Scheduling
Load Balancing
This attempts to keep the workload evenly distributed across all
processors in an SMP system.
•Two approaches:
1. Push Migration
A specific task periodically checks the load on each processor and, if it finds an
imbalance, distributes the load by pushing processes from overloaded processors
to idle or less-busy processors.
2. Pull Migration
An idle processor pulls a waiting task from a busy processor.
Symmetric Multithreading
1. Create multiple logical processors on the same physical processor.