Term Paper
CPU SCHEDULING
Submitted By:
M.Hussnain Shabbir
173328
Submitted To:
Ma’am Huma
Introduction:
CPU scheduling is the basis of multiprogrammed operating systems. By switching the CPU among
processes, the operating system can make the computer more productive. In this paper we discuss
CPU-scheduling concepts, present several CPU-scheduling algorithms, and consider the problem of
selecting an algorithm for a particular system. We have also extended the process model to include
threads. On operating systems that support them, it is kernel-level threads, not processes, that are in
fact scheduled by the operating system. However, the terms "process scheduling" and "thread
scheduling" are often used interchangeably; we use process scheduling when discussing general
scheduling concepts and thread scheduling to refer to thread-specific ideas.
CPU Scheduling
CPU scheduling is the mechanism that allows one process to use the CPU while another process is on
hold (in the waiting state), for example because it is waiting for an I/O resource, thereby making full
use of the CPU. The aim of CPU scheduling is to make the system efficient, fast, and fair.
Whenever the CPU becomes idle, the operating system must select one of the processes in the ready
queue to be executed. The selection is carried out by the short-term scheduler (or CPU scheduler): the
scheduler selects from among the processes in memory that are ready to execute and allocates the CPU
to one of them.
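As a minimal sketch of this selection step (the process names and the queue contents here are
illustrative assumptions, not taken from any particular operating system), the ready queue can be
modelled as an ordinary FIFO queue from which the scheduler repeatedly picks the process at its head:
#include<iostream>
#include<queue>
#include<string>
using namespace std;
int main()
{
    queue<string> readyQueue;              // processes that are ready to execute
    readyQueue.push("P1");
    readyQueue.push("P2");
    readyQueue.push("P3");
    while(!readyQueue.empty())             // CPU is idle: select the next process
    {
        string next=readyQueue.front();
        readyQueue.pop();
        cout<<"Dispatching "<<next<<" to the CPU\n";   // the dispatcher hands over the CPU
    }
    return 0;
}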
Preemptive Scheduling
In this type of scheduling, tasks are usually assigned priorities. At times it is necessary to run a
higher-priority task before a lower-priority task that is already running. The running task is therefore
interrupted for some time and resumed later, once the higher-priority task has finished its execution.
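The following sketch shows only the preemption decision itself, assuming that a smaller priority
number means a higher priority; the structure Process, the function preemptIfNeeded, and the example
priorities are assumptions made for illustration:
#include<iostream>
#include<queue>
#include<vector>
using namespace std;
struct Process
{
    int id;
    int priority;                          // smaller value = higher priority
};
struct ByPriority
{
    bool operator()(const Process &a,const Process &b) const
    {
        return a.priority>b.priority;      // makes the priority_queue a min-heap on the priority value
    }
};
priority_queue<Process,vector<Process>,ByPriority> readyQueue;
// Called when a new process arrives while 'running' holds the CPU.
Process preemptIfNeeded(Process running,Process arriving)
{
    if(arriving.priority<running.priority)
    {
        readyQueue.push(running);          // the running task goes back to the ready queue
        cout<<"P"<<running.id<<" preempted by P"<<arriving.id<<"\n";
        return arriving;                   // the arriving task gets the CPU
    }
    readyQueue.push(arriving);             // otherwise the new task simply waits
    return running;
}
int main()
{
    Process running={1,5};
    running=preemptIfNeeded(running,{2,2});    // higher priority: P1 is preempted
    running=preemptIfNeeded(running,{3,9});    // lower priority: P3 just waits
    cout<<"Now running: P"<<running.id<<"\n";
    return 0;
}
In a real kernel this check happens whenever a process enters the ready queue or a timer interrupt
arrives; the sketch only captures the comparison and requeueing step.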
Scheduling Criteria:
There are many different criteria to check when considering the "best" scheduling algorithm, they are:
CPU Utilization:
To make the best use of the CPU and not waste any CPU cycles, the CPU should be kept busy as much
of the time as possible (ideally 100% of the time). In a real system, CPU utilization should range from
about 40% (lightly loaded) to 90% (heavily loaded).
Throughput:
It is the total number of processes completed per unit time, or in other words the total amount of work
done in a unit of time. This may range from ten processes per second to one process per hour, depending
on the specific processes.
Turnaround Time:
It is the amount of time taken to execute a particular process, i.e. the interval from the time of
submission of the process to the time of its completion (wall-clock time).
Waiting Time:
It is the sum of the periods a process spends waiting in the ready queue before it gets control of the
CPU.
Load Average:
It is the average number of processes residing in the ready queue, waiting for their turn to get the CPU.
Response Time:
It is the amount of time from when a request is submitted until the first response is produced. Note that
this is the time until the first response, not until the completion of process execution (the final response).
In general, CPU utilization and throughput are maximized while the other measures are minimized for
proper optimization.
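As a quick worked example of the criteria above (the arrival, burst, and completion times are made-up
values for three hypothetical processes that run one after another), turnaround time is the completion
time minus the arrival time, and waiting time is the turnaround time minus the burst time:
#include<iostream>
using namespace std;
int main()
{
    int arrival[]={0,1,2};             // assumed arrival times
    int burst[]={5,3,8};               // assumed CPU burst times
    int completion[]={5,8,16};         // completion times if the processes run in order
    int n=3,totalTat=0,totalWt=0;
    for(int i=0;i<n;i++)
    {
        int tat=completion[i]-arrival[i];   // turnaround time
        int wt=tat-burst[i];                // waiting time
        totalTat+=tat;
        totalWt+=wt;
        cout<<"P"<<i+1<<": turnaround="<<tat<<", waiting="<<wt<<"\n";
    }
    cout<<"Average turnaround="<<(float)totalTat/n
        <<", Average waiting="<<(float)totalWt/n<<"\n";
    return 0;
}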
Scheduling Algorithms:
To decide which process to execute first and which to execute last, so as to achieve maximum CPU
utilization, computer scientists have defined several algorithms. A few of them are described below.
First Come First Serve (FCFS) Scheduling
• First Come First Serve works just like a FIFO (First In First Out) queue data structure: the data
element that is added to the queue first is the one that leaves the queue first.
• This is used in batch systems.
• It is easy to understand and implement programmatically using a queue data structure, where a new
process enters through the tail of the queue and the scheduler selects the process at the head of the
queue.
• A perfect real-life example of FCFS scheduling is buying tickets at a ticket counter.
Program:
#include<iostream>
using namespace std;
int main()
{
    int n,bt[20],wt[20],tat[20],avwt=0,avtat=0,i,j;
    cout<<"Enter total number of processes(maximum 20):";
    cin>>n;
    cout<<"\nEnter Process Burst Time\n";
    for(i=0;i<n;i++)
    {
        cout<<"P["<<i+1<<"]:";
        cin>>bt[i];
    }
    wt[0]=0;                               // the first process does not wait
    for(i=1;i<n;i++)
    {
        wt[i]=0;
        for(j=0;j<i;j++)
            wt[i]+=bt[j];                  // waiting time = burst times of all earlier processes
    }
    cout<<"\nProcess\tBurst Time\tWaiting Time\tTurnaround Time";
    for(i=0;i<n;i++)
    {
        tat[i]=bt[i]+wt[i];                // turnaround time = waiting time + burst time
        avwt+=wt[i];
        avtat+=tat[i];
        cout<<"\nP["<<i+1<<"]\t\t"<<bt[i]<<"\t\t"<<wt[i]<<"\t\t"<<tat[i];
    }
    avwt/=n;
    avtat/=n;
    cout<<"\n\nAverage Waiting Time:"<<avwt;
    cout<<"\nAverage Turnaround Time:"<<avtat;
    return 0;
}
Result:
The program prints a table with the burst time, waiting time, and turnaround time of each process,
followed by the average waiting time and the average turnaround time.
Priority Scheduling
• A priority is assigned to each process.
• The process with the highest priority is executed first, and so on.
• Processes with the same priority are executed in FCFS order.
• The priority can be decided based on memory requirements, time requirements, or any other resource
requirement.
Program:
#include<iostream>
using namespace std;
int main()
{
    int bt[20],p[20],wt[20],tat[20],pr[20],i,j,n,total=0,pos,temp,avg_wt,avg_tat;
    cout<<"Enter Total Number of Process:";
    cin>>n;
    cout<<"\nEnter Burst Time and Priority\n";
    for(i=0;i<n;i++)
    {
        cout<<"\nP["<<i+1<<"]\nBurst Time:";
        cin>>bt[i];
        cout<<"Priority:";
        cin>>pr[i];
        p[i]=i+1;                          // remember the process number
    }
    //sorting burst time, priority and process number in ascending order of priority using selection sort
    for(i=0;i<n;i++)
    {
        pos=i;
        for(j=i+1;j<n;j++)
        {
            if(pr[j]<pr[pos])              // a smaller value means a higher priority
                pos=j;
        }
        temp=pr[i]; pr[i]=pr[pos]; pr[pos]=temp;
        temp=bt[i]; bt[i]=bt[pos]; bt[pos]=temp;
        temp=p[i];  p[i]=p[pos];  p[pos]=temp;
    }
    wt[0]=0;                               // the highest-priority process waits for nothing
    for(i=1;i<n;i++)
    {
        wt[i]=0;
        for(j=0;j<i;j++)
            wt[i]+=bt[j];                  // waiting time = burst times of all higher-priority processes
        total+=wt[i];
    }
    avg_wt=total/n;                        //average waiting time
    total=0;
    cout<<"\nProcess\tBurst Time\tWaiting Time\tTurnaround Time";
    for(i=0;i<n;i++)
    {
        tat[i]=bt[i]+wt[i];                // turnaround time = waiting time + burst time
        total+=tat[i];
        cout<<"\nP["<<p[i]<<"]\t\t"<<bt[i]<<"\t\t"<<wt[i]<<"\t\t"<<tat[i];
    }
    avg_tat=total/n;                       //average turnaround time
    cout<<"\n\nAverage Waiting Time="<<avg_wt;
    cout<<"\nAverage Turnaround Time="<<avg_tat;
    return 0;
}
Result:
The program prints the processes in order of priority with their burst, waiting, and turnaround times,
followed by the average waiting time and the average turnaround time.
Summary:
CPU scheduling is the task of selecting a waiting process from the ready queue and allocating the CPU to
it. The CPU is allocated to the selected process by the dispatcher. First-come, first-served (FCFS)
scheduling is the simplest scheduling algorithm, but it can cause short processes to wait for very long
processes. Shortest-job-first (SJF) scheduling is provably optimal, providing the shortest average waiting
time. Implementing SJF scheduling is difficult, however, because predicting the length of the next CPU
burst is difficult. The SJF algorithm is a special case of the general priority scheduling algorithm, which
simply allocates the CPU to the highest-priority process. Both priority and SJF scheduling may suffer from
starvation. Aging is a technique to prevent starvation. Round-robin (RR) scheduling is more appropriate
for a time-shared (interactive) system. RR scheduling allocates the CPU to the first process in the ready
queue for q time units, where q is the time quantum. After q time units, if the process has not relinquished
the CPU, it is preempted, and the process is put at the tail of the ready queue. The major problem is the
selection of the time quantum. If the quantum is too large, RR scheduling degenerates to FCFS scheduling.
If the quantum is too small, scheduling overhead in the form of context-switch time becomes excessive.
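To make the round-robin mechanism described above concrete, here is a minimal sketch that simulates
RR scheduling for three hypothetical processes, assuming they all arrive at time 0, with made-up burst
times and an assumed time quantum of 2; it reports each process's waiting time and the average:
#include<iostream>
using namespace std;
int main()
{
    const int n=3;
    int burst[n]={10,5,8};                 // assumed CPU burst times
    int remaining[n]={10,5,8};             // time each process still needs
    int waiting[n]={0,0,0};
    int quantum=2;                         // assumed time quantum
    int time=0;                            // current clock time
    bool done=false;
    while(!done)
    {
        done=true;
        for(int i=0;i<n;i++)               // visit the processes in round-robin order
        {
            if(remaining[i]>0)
            {
                done=false;
                int slice=(remaining[i]>quantum)?quantum:remaining[i];
                time+=slice;               // the process runs for one time slice
                remaining[i]-=slice;
                if(remaining[i]==0)
                    waiting[i]=time-burst[i];   // finished: waiting = finish time - burst time
            }
        }
    }
    int total=0;
    for(int i=0;i<n;i++)
    {
        cout<<"P"<<i+1<<" waiting time="<<waiting[i]<<"\n";
        total+=waiting[i];
    }
    cout<<"Average waiting time="<<(float)total/n<<"\n";
    return 0;
}
With these example values the average waiting time works out to 12 time units; a much larger quantum
would push the behaviour toward FCFS, while a very small quantum would add context-switch
overhead, as noted above.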