Chapter 9
CPU Scheduling
We concentrate on the problem of scheduling the usage of a single processor among all the existing processes in the system
The goal is to achieve
High processor utilization
High throughput
number of processes completed per unit time
Low response time
time elapsed from the submission of a request to the beginning of the response
Classification of Scheduling Activity
Long-Term Scheduling
Determines which programs are admitted
to the system for processing
Controls the degree of multiprogramming
If more processes are admitted
it is less likely that all processes will be blocked, so CPU usage is better
but each process gets a smaller fraction of the CPU
The long-term scheduler may attempt to keep a mix of processor-bound and I/O-bound processes
Medium-Term Scheduling
Short-Term Scheduling
Determines which process is going to execute
next (also called CPU scheduling)
Is the subject of this chapter
The short-term scheduler is known as the dispatcher
It is invoked on an event that may lead to choosing another process for execution:
clock interrupts
I/O interrupts
operating system calls and traps
signals
Short-Term Scheduling Criteria
User-oriented
Response Time: Elapsed time from the submission of a request to the beginning of the response
Turnaround Time: Elapsed time from the
submission of a process to its completion
System-oriented
processor utilization
fairness
throughput: number of processes completed per unit time
Priorities
Implemented by having multiple ready queues to represent each level of priority (a sketch follows below)
Scheduler will always choose a process of
higher priority over one of lower priority
Lower-priority processes may suffer starvation
A solution: allow a process to change its priority based on its age or execution history
Our first scheduling algorithms will not
make use of priorities
We will then present other algorithms that
use dynamic priority mechanisms
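As an illustration (a minimal Python sketch, not from the slides; the number of levels and process names are assumptions), the multi-queue structure described above: one ready queue per priority level, with the dispatcher always taking a process from the highest non-empty queue.

from collections import deque

NUM_LEVELS = 4          # assumed number of priority levels (0 = highest)

# One FIFO ready queue per priority level
ready_queues = [deque() for _ in range(NUM_LEVELS)]

def make_ready(pid, priority):
    """Place a process in the ready queue matching its priority."""
    ready_queues[priority].append(pid)

def select_next():
    """Always dispatch from the highest-priority non-empty queue."""
    for queue in ready_queues:      # queues ordered from high to low priority
        if queue:
            return queue.popleft()
    return None                     # no ready process

make_ready("P1", 2)
make_ready("P2", 0)
print(select_next())   # P2: higher priority wins; P1 may starve if level 0 stays busy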
Characterization of Scheduling Policies
The selection function: determines which process in
the ready queue is selected next for execution
The decision mode: specifies the instants in time at
which the selection function is exercised
Nonpreemptive
Once a process is in the running state, it will
continue until it terminates or blocks itself for I/O
Preemptive
Currently running process may be interrupted and
moved to the Ready state by the OS
Allows for better service since any one process
cannot monopolize the processor for very long
The CPU-I/O Cycle
Our running example to discuss various scheduling policies

Process   Arrival Time   Service Time
   1            0              3
   2            2              6
   3            4              4
   4            6              5
   5            8              2
First Come First Served (FCFS)
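To make the policy concrete, a minimal Python sketch (not from the slides) that computes finish and turnaround times for the running example above under FCFS:

# Running example: (process id, arrival time, service time)
processes = [(1, 0, 3), (2, 2, 6), (3, 4, 4), (4, 6, 5), (5, 8, 2)]

time = 0
for pid, arrival, service in processes:      # FCFS: serve in arrival order, no preemption
    start = max(time, arrival)               # CPU may be idle until the process arrives
    finish = start + service
    turnaround = finish - arrival            # elapsed time from submission to completion
    print(f"P{pid}: finish={finish} turnaround={turnaround}")
    time = finish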
Round-Robin
Round Robin: critique
Still favors CPU-bound processes
An I/O-bound process uses the CPU for less than the time quantum and is then blocked waiting for I/O
A CPU-bound process runs for its entire time slice and is put back into the ready queue (thus getting in front of blocked processes)
A solution: virtual round robin (a sketch follows below)
When an I/O operation completes, the blocked process is moved to an auxiliary queue, which gets preference over the main ready queue
A process dispatched from the auxiliary queue runs no longer than the basic time quantum minus the time it already spent running when it was last selected from the ready queue
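A minimal Python sketch of the virtual round robin dispatch rule (the quantum value and function names are assumptions, not from the slides): the auxiliary queue is consulted first, and a process taken from it only receives the unused remainder of its quantum.

from collections import deque

QUANTUM = 4                  # assumed basic time quantum
ready = deque()              # main ready queue holds process ids
auxiliary = deque()          # (pid, time already used) for processes whose I/O completed

def io_completed(pid, time_used_before_blocking):
    # A process returning from I/O goes to the auxiliary queue, not the main one
    auxiliary.append((pid, time_used_before_blocking))

def quantum_expired(pid):
    # A process that used its whole slice goes to the back of the main ready queue
    ready.append(pid)

def dispatch():
    # The auxiliary queue gets preference over the main ready queue
    if auxiliary:
        pid, used = auxiliary.popleft()
        return pid, QUANTUM - used       # only the unused remainder of its quantum
    if ready:
        return ready.popleft(), QUANTUM  # a full quantum from the main queue
    return None, 0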
Queuing for Virtual Round Robin
Shortest Process Next (SPN)
Estimating the required CPU burst
Let T[i] be the execution time for the ith
instance of this process: the actual duration of
the ith CPU burst of this process
Let S[i] be the predicted value for the ith CPU
burst of this process. The simplest choice is:
S[n+1] = (1/n) Σ_{i=1 to n} T[i]
To avoid recalculating the entire sum, we can rewrite this as (see the derivation below):
S[n+1] = (1/n) T[n] + ((n-1)/n) S[n]
But this convex combination gives equal
weight to each instance
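The rewrite above follows by splitting the most recent burst out of the running average, using S[n] = (1/(n-1)) Σ_{i=1 to n-1} T[i]:

S[n+1] = (1/n) Σ_{i=1 to n} T[i]
       = (1/n) T[n] + ((n-1)/n) · (1/(n-1)) Σ_{i=1 to n-1} T[i]
       = (1/n) T[n] + ((n-1)/n) S[n]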
Estimating the required CPU burst
But recent instances are more likely to reflect
future behavior
A common technique for that is to use
exponential averaging
S[n+1] = a T[n] + (1-a) S[n] ; 0 < a < 1
more weight is put on recent instances
whenever a > 1/n
By expanding this equation, we see that the weights of past instances decrease exponentially
S[n+1] = a T[n] + (1-a) a T[n-1] + ... + (1-a)^i a T[n-i] + ... + (1-a)^n S[1]
the predicted value of the 1st instance, S[1], is not calculated; it is usually set to 0 to give priority to new processes
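A minimal Python sketch of exponential averaging for burst prediction (the alpha value and the observed burst lengths are assumed example numbers, not from the slides):

def predict_next_burst(prev_prediction, observed_burst, alpha=0.5):
    """S[n+1] = a*T[n] + (1-a)*S[n]; alpha = 0.5 is only an assumed example value."""
    return alpha * observed_burst + (1 - alpha) * prev_prediction

# S[1] is set to 0 to favor new processes (as noted above)
prediction = 0.0
for burst in [6, 4, 6, 4, 13, 13, 13]:        # assumed observed CPU bursts
    prediction = predict_next_burst(prediction, burst)
    print(f"observed {burst}, next prediction {prediction:.2f}")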
Exponentially Decreasing Coefficients
Multiple Feedback Queues
Recall that
Pj[i] = Bj + (1/2) CPUj[i-1] + GCPUk[i-1]/(4Wk)
The priority decreases as the process and group use
the processor
With a larger weight Wk, group CPU usage decreases the priority less
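A minimal Python sketch of how the priority value above could be recomputed each interval from the measured processor usage (variable names and the example numbers are assumptions, not from the slides):

def priority(base, proc_cpu_prev, group_cpu_prev, group_weight):
    """Pj[i] = Bj + CPUj[i-1]/2 + GCPUk[i-1]/(4*Wk); a larger value means lower priority."""
    return base + proc_cpu_prev / 2 + group_cpu_prev / (4 * group_weight)

# Same process and group usage, different group weights:
# a larger weight Wk penalizes group usage less, so the priority value grows less
print(priority(base=60, proc_cpu_prev=30, group_cpu_prev=30, group_weight=1))   # 82.5
print(priority(base=60, proc_cpu_prev=30, group_cpu_prev=30, group_weight=3))   # 77.5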