Topic:

CPU SCHEDULING

Submitted By:
M.Hussnain Shabbir
173328
Submitted To:
Ma’am Huma
Introduction:
CPU scheduling is the basis of multiprogrammed operating systems. By switching the CPU among
processes, the operating system can make the computer more productive. In this paper we discuss basic
CPU-scheduling concepts, present several CPU-scheduling algorithms, and consider the problem of
selecting an algorithm for a particular system. Threads extend the process model, and on operating
systems that support them it is kernel-level threads, not processes, that are in fact being scheduled by
the operating system. However, the terms "process scheduling" and "thread scheduling" are often used
interchangeably. Here, we use process scheduling when discussing general scheduling concepts and thread
scheduling to refer to thread-specific ideas.

CPU Scheduling
CPU scheduling allows one process to use the CPU while another process is on hold (in the waiting state)
because a resource such as I/O is not yet available, thereby making full use of the CPU. The aim of CPU
scheduling is to make the system efficient, fast and fair.
Whenever the CPU becomes idle, the operating system must select one of the processes in the ready
queue to be executed. The selection is carried out by the short-term scheduler (or CPU scheduler), which
chooses from among the processes in memory that are ready to execute and allocates the CPU to one of them.

Why do we need scheduling?


A typical process involves both I/O time and CPU time. In a uni-programming system (a single task/job in
main memory at a time), such as MS-DOS, the time spent waiting for I/O is wasted and the CPU sits idle
during this period. In a multiprogramming system, one process can use the CPU while another is waiting
for I/O. This is possible only with process scheduling.

Types of CPU Scheduling


CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for example, as the result of an
I/O request or an invocation of wait for the termination of a child process).
2. When a process switches from the running state to the ready state (for example, when an interrupt
occurs).
3. When a process switches from the waiting state to the ready state (for example, at the completion of I/O).
4. When a process terminates.
In circumstances 1 and 4, there is no choice in terms of scheduling: a new process (if one exists in the
ready queue) must be selected for execution. There is a choice, however, in circumstances 2 and 3.
When scheduling takes place only under circumstances 1 and 4, we say the scheduling scheme is
non-preemptive; otherwise, the scheduling scheme is preemptive.
Non-Preemptive Scheduling
Under non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the
CPU until it releases the CPU either by terminating or by switching to the waiting state.
This scheduling method was used by Microsoft Windows 3.1 and by early Apple Macintosh operating
systems.
It is the only method that can be used on certain hardware platforms, because it does not require the special
hardware (for example, a timer) needed for preemptive scheduling.

Preemptive Scheduling
In this type of scheduling, tasks are usually assigned priorities. At times it is necessary to run a task with
a higher priority before another task, even though that other task is still running. The running task is
therefore interrupted for some time and resumed later, when the higher-priority task has finished its execution.
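For example (assuming smaller numbers denote higher priority), under a preemptive priority scheme, if a
priority-1 task becomes ready while a priority-3 task is running, the priority-3 task is moved back to the
ready queue and the CPU is given to the new arrival immediately; the preempted task continues from where
it left off once the higher-priority task completes.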

Scheduling Criteria:
There are many different criteria to consider when deciding which scheduling algorithm is "best"; they include:

CPU Utilization:
To make the best use of the CPU and not waste any CPU cycle, the CPU should be kept working most of the
time (ideally 100% of the time). In a real system, CPU utilization should range from about 40% (lightly
loaded) to 90% (heavily loaded).

Throughput:
It is the total number of processes completed per unit time, or in other words, the total amount of work
done in a unit of time. This may range from 10 processes per second to 1 process per hour, depending on
the specific processes.

Turnaround Time:
It is the amount of time taken to execute a particular process, i.e. the interval from the time of submission
of the process to the time of its completion (wall-clock time).

Waiting Time:
It is the sum of the periods a process spends waiting in the ready queue to acquire control of the CPU.

Load Average:
It is the average number of processes residing in the ready queue, waiting for their turn to get the CPU.

Response Time:
It is the amount of time from when a request was submitted until the first response is produced. Note that
this is the time until the first response, not until the completion of process execution (the final response).
In general, CPU utilization and throughput are maximized and the other factors are minimized for proper
optimization.
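As a simple worked illustration (with assumed figures): suppose a process is submitted at time 0, first
receives the CPU at time 2, needs a total of 5 ms of CPU time, and completes at time 12. Its response time
is 2 ms, its turnaround time is 12 ms, and its waiting time is 12 - 5 = 7 ms (turnaround time minus total
CPU burst time).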

Scheduling Algorithms:
To decide which process to execute first and which to execute last so as to achieve maximum CPU
utilization, computer scientists have defined several algorithms, including:

• First Come First Serve (FCFS) Scheduling
• Shortest-Job-First (SJF) Scheduling
• Priority Scheduling
• Round Robin (RR) Scheduling
• Multilevel Queue Scheduling
• Multilevel Feedback Queue Scheduling

First Come First Serve Scheduling


In the "First come first serve" scheduling algorithm, as the name suggests, the process which arrives first,
gets executed first, or we can say that the process which requests the CPU first, gets the CPU allocated first.

• First Come First Serve, is just like FIFO (First in First out) Queue data structure, where the data
element which is added to the queue first, is the one who leaves the queue first.
• This is used in Batch Systems.
• It's easy to understand and implement programmatically, using a Queue data structure, where a new
process enters through the tail of the queue, and the scheduler selects process from the head of the
queue.
• A perfect real life example of FCFS scheduling is buying tickets at ticket counter
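As a quick illustration (with assumed burst times), suppose three processes P1, P2 and P3 arrive in that
order with CPU bursts of 24 ms, 3 ms and 3 ms. Under FCFS their waiting times are 0, 24 and 27 ms, an
average of 17 ms; if the two short processes had arrived first, the average waiting time would drop to
(0 + 3 + 6) / 3 = 3 ms. This is the well-known convoy effect, where short processes wait behind a long one.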

Program:
#include<iostream>
using namespace std;

int main()
{
    int n, bt[20], wt[20], tat[20], i, j;
    float avwt = 0, avtat = 0;
    cout << "Enter total number of processes (maximum 20): ";
    cin >> n;

    cout << "\nEnter Process Burst Time\n";
    for(i = 0; i < n; i++)
    {
        cout << "P[" << i + 1 << "]: ";
        cin >> bt[i];
    }

    wt[0] = 0;   // waiting time for the first process is 0

    // calculating waiting time: each process waits for the total
    // burst time of all processes that arrived before it
    for(i = 1; i < n; i++)
    {
        wt[i] = 0;
        for(j = 0; j < i; j++)
            wt[i] += bt[j];
    }

    cout << "\nProcess\t\tBurst Time\tWaiting Time\tTurnaround Time";

    // calculating turnaround time (waiting time + burst time)
    // and accumulating the totals for the averages
    for(i = 0; i < n; i++)
    {
        tat[i] = bt[i] + wt[i];
        avwt += wt[i];
        avtat += tat[i];
        cout << "\nP[" << i + 1 << "]\t\t" << bt[i] << "\t\t" << wt[i] << "\t\t" << tat[i];
    }

    avwt /= n;    // average waiting time
    avtat /= n;   // average turnaround time
    cout << "\n\nAverage Waiting Time: " << avwt;
    cout << "\nAverage Turnaround Time: " << avtat << endl;

    return 0;
}

Result:
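For example, if the program above is given three processes with burst times of 24, 3 and 3, it reports
waiting times of 0, 24 and 27, turnaround times of 24, 27 and 30, an average waiting time of 17 and an
average turnaround time of 27.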

Shortest Job First (SJF) Scheduling


Shortest Job First scheduling executes the process with the shortest burst time (duration) first.

• This is the best approach to minimize waiting time.
• This is used in batch systems.
• It is of two types:
o Non-Preemptive
o Preemptive
• To successfully implement it, the burst time (duration) of each process should be known to the
scheduler in advance, which is not always practically feasible.
• This scheduling algorithm is optimal if all the jobs/processes are available at the same time (either the
arrival time is 0 for all, or the arrival time is the same for all).
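As an illustration (with assumed burst times), consider four processes available at time 0 with bursts of
6, 8, 7 and 3 ms. SJF runs them in the order 3, 6, 7, 8, giving waiting times of 0, 3, 9 and 16 ms and an
average of 7 ms; serving them in arrival order (FCFS) would give waiting times of 0, 6, 14 and 21 ms, an
average of 10.25 ms.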
Program:
#include<iostream>
using namespace std;

int main()
{
    int bt[20], p[20], wt[20], tat[20], i, j, n, total = 0, pos, temp;
    float avg_wt, avg_tat;
    cout << "Enter number of processes: ";
    cin >> n;

    cout << "\nEnter Burst Time:\n";
    for(i = 0; i < n; i++)
    {
        cout << "P[" << i + 1 << "]: ";
        cin >> bt[i];
        p[i] = i + 1;   // contains the process number
    }

    // sorting burst times (and process numbers) in ascending order
    // using selection sort
    for(i = 0; i < n; i++)
    {
        pos = i;
        for(j = i + 1; j < n; j++)
        {
            if(bt[j] < bt[pos])
                pos = j;
        }
        temp = bt[i];
        bt[i] = bt[pos];
        bt[pos] = temp;

        temp = p[i];
        p[i] = p[pos];
        p[pos] = temp;
    }

    wt[0] = 0;   // waiting time for the first (shortest) process is zero

    // calculate waiting time: each process waits for the total burst
    // time of the shorter processes scheduled before it
    for(i = 1; i < n; i++)
    {
        wt[i] = 0;
        for(j = 0; j < i; j++)
            wt[i] += bt[j];
        total += wt[i];
    }
    avg_wt = (float)total / n;   // average waiting time
    total = 0;

    cout << "\nProcess\t Burst Time \tWaiting Time\tTurnaround Time";
    for(i = 0; i < n; i++)
    {
        tat[i] = bt[i] + wt[i];  // calculate turnaround time
        total += tat[i];
        cout << "\nP[" << p[i] << "]\t\t" << bt[i] << "\t\t" << wt[i] << "\t\t" << tat[i];
    }
    avg_tat = (float)total / n;  // average turnaround time
    cout << "\n\nAverage Waiting Time= " << avg_wt;
    cout << "\nAverage Turnaround Time= " << avg_tat << endl;

    return 0;
}
Result:

Priority Scheduling
• A priority is assigned to each process.
• The process with the highest priority is executed first, and so on.
• Processes with the same priority are executed in FCFS order.
• Priority can be decided based on memory requirements, time requirements or any other resource
requirement.
• In the program below, a lower priority number denotes a higher priority.
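As an illustration (with assumed values, and a lower number meaning a higher priority), take five processes
arriving at time 0 with bursts of 10, 1, 2, 1 and 5 ms and priorities 3, 1, 4, 5 and 2. They run in the order
P2, P5, P1, P3, P4, with waiting times of 0, 1, 6, 16 and 18 ms, giving an average waiting time of
(0 + 1 + 6 + 16 + 18) / 5 = 8.2 ms.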

Program:
#include<iostream>
using namespace std;

int main()
{
    int bt[20], p[20], wt[20], tat[20], pr[20], i, j, n, total = 0, pos, temp;
    float avg_wt, avg_tat;
    cout << "Enter Total Number of Processes: ";
    cin >> n;

    cout << "\nEnter Burst Time and Priority\n";
    for(i = 0; i < n; i++)
    {
        cout << "\nP[" << i + 1 << "]\n";
        cout << "Burst Time: ";
        cin >> bt[i];
        cout << "Priority: ";
        cin >> pr[i];
        p[i] = i + 1;   // contains the process number
    }

    // sorting burst time, priority and process number by priority
    // (ascending, so a lower number means a higher priority)
    // using selection sort
    for(i = 0; i < n; i++)
    {
        pos = i;
        for(j = i + 1; j < n; j++)
        {
            if(pr[j] < pr[pos])
                pos = j;
        }

        temp = pr[i];
        pr[i] = pr[pos];
        pr[pos] = temp;

        temp = bt[i];
        bt[i] = bt[pos];
        bt[pos] = temp;

        temp = p[i];
        p[i] = p[pos];
        p[pos] = temp;
    }

    wt[0] = 0;   // waiting time for the first (highest-priority) process is zero

    // calculate waiting time: each process waits for the total burst
    // time of the higher-priority processes scheduled before it
    for(i = 1; i < n; i++)
    {
        wt[i] = 0;
        for(j = 0; j < i; j++)
            wt[i] += bt[j];
        total += wt[i];
    }
    avg_wt = (float)total / n;   // average waiting time
    total = 0;

    cout << "\nProcess\t Burst Time \tWaiting Time\tTurnaround Time";
    for(i = 0; i < n; i++)
    {
        tat[i] = bt[i] + wt[i];  // calculate turnaround time
        total += tat[i];
        cout << "\nP[" << p[i] << "]\t\t " << bt[i] << "\t\t " << wt[i] << "\t\t\t" << tat[i];
    }
    avg_tat = (float)total / n;  // average turnaround time
    cout << "\n\nAverage Waiting Time=" << avg_wt;
    cout << "\nAverage Turnaround Time=" << avg_tat << endl;

    return 0;
}

Result:

Round Robin Scheduling


• A fixed time, called the time quantum, is allotted to each process for execution.
• Once a process has executed for the given time period, it is preempted and another process executes
for its time period.
• Context switching is used to save the states of preempted processes.
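As an illustration, the driver code in the program below uses burst times of 10, 5 and 8 with a time quantum
of 2. Tracing the schedule, P2 finishes at time 15, P3 at time 21 and P1 at time 23, giving waiting times of
13, 10 and 13 (completion time minus burst time), an average waiting time of 12 and an average turnaround
time of about 19.67.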
Program:
// C++ program for implementation of RR scheduling
#include<iostream>
using namespace std;

// Function to find the waiting time for all processes
void findWaitingTime(int processes[], int n,
                     int bt[], int wt[], int quantum)
{
    // Make a copy of burst times bt[] to store remaining
    // burst times.
    int rem_bt[n];
    for (int i = 0; i < n; i++)
        rem_bt[i] = bt[i];

    int t = 0; // Current time

    // Keep traversing the processes in round robin manner
    // until all of them are done.
    while (1)
    {
        bool done = true;

        // Traverse all processes one by one repeatedly
        for (int i = 0; i < n; i++)
        {
            // If the burst time of a process is greater than 0,
            // it still needs to be processed further
            if (rem_bt[i] > 0)
            {
                done = false; // There is a pending process

                if (rem_bt[i] > quantum)
                {
                    // Increase the value of t, i.e. how much
                    // time the process has been processed
                    t += quantum;

                    // Decrease the burst time of the current
                    // process by one quantum
                    rem_bt[i] -= quantum;
                }
                // If the burst time is smaller than or equal to the
                // quantum, this is the last cycle for this process
                else
                {
                    // Increase the value of t, i.e. how much
                    // time the process has been processed
                    t = t + rem_bt[i];

                    // Waiting time is current time minus the time
                    // used by this process
                    wt[i] = t - bt[i];

                    // As the process is now fully executed,
                    // make its remaining burst time = 0
                    rem_bt[i] = 0;
                }
            }
        }

        // If all processes are done
        if (done == true)
            break;
    }
}

// Function to calculate turnaround time
void findTurnAroundTime(int processes[], int n,
                        int bt[], int wt[], int tat[])
{
    // calculating turnaround time by adding bt[i] + wt[i]
    for (int i = 0; i < n; i++)
        tat[i] = bt[i] + wt[i];
}

// Function to calculate the average times
void findavgTime(int processes[], int n, int bt[], int quantum)
{
    int wt[n], tat[n], total_wt = 0, total_tat = 0;

    // Find the waiting time of all processes
    findWaitingTime(processes, n, bt, wt, quantum);

    // Find the turnaround time for all processes
    findTurnAroundTime(processes, n, bt, wt, tat);

    // Display processes along with all details
    cout << "Processes " << " Burst time "
         << " Waiting time " << " Turn around time\n";

    // Calculate total waiting time and total turnaround time
    for (int i = 0; i < n; i++)
    {
        total_wt = total_wt + wt[i];
        total_tat = total_tat + tat[i];
        cout << " " << i + 1 << "\t\t" << bt[i] << "\t "
             << wt[i] << "\t\t " << tat[i] << endl;
    }

    cout << "Average waiting time = "
         << (float)total_wt / (float)n;
    cout << "\nAverage turn around time = "
         << (float)total_tat / (float)n << endl;
}

// Driver code
int main()
{
    // process id's
    int processes[] = { 1, 2, 3 };
    int n = sizeof processes / sizeof processes[0];

    // Burst time of all processes
    int burst_time[] = {10, 5, 8};

    // Time quantum
    int quantum = 2;
    findavgTime(processes, n, burst_time, quantum);
    return 0;
}

Result:

Summary:
CPU scheduling is the task of selecting a waiting process from the ready queue and allocating the CPU to
it. The CPU is allocated to the selected process by the dispatcher. First-come, first-served (FCFS)
scheduling is the simplest scheduling algorithm, but it can cause short processes to wait for very long
processes. Shortest-job-first (SJF) scheduling is provably optimal, providing the shortest average waiting
time. Implementing SJF scheduling is difficult, however, because predicting the length of the next CPU
burst is difficult. The SJF algorithm is a special case of the general priority scheduling algorithm, which
simply allocates the CPU to the highest-priority process. Both priority and SJF scheduling may suffer from
starvation. Aging is a technique to prevent starvation. Round-robin (RR) scheduling is more appropriate
for a time-shared (interactive) system. RR scheduling allocates the CPU to the first process in the ready
queue for q time units, where q is the time quantum. After q time units, if the process has not relinquished
the CPU, it is preempted, and the process is put at the tail of the ready queue. The major problem is the
selection of the time quantum. If the quantum is too large, RR scheduling degenerates to FCFS scheduling.
If the quantum is too small, scheduling overhead in the form of context-switch time becomes excessive.
