Operating Systems
Chapter 6: CPU Scheduling

Dr. Essam Halim Houssein
Lecturer, Faculty of Computers and Informatics, Benha University

Basic Concepts

In a single-processor system, only one process can run at a time; any others must wait until the CPU is free and can be rescheduled. The objective of multiprogramming is to have some process running at all times, in order to maximize CPU utilization. With multiprogramming, we try to use the time a process would otherwise spend waiting productively: several processes are kept in memory at one time.

When one process has to wait, the operating system takes the CPU away from that process and gives the CPU to another process. This pattern continues: every time one process has to wait, another process can take over use of the CPU.

Scheduling of this kind is a fundamental operating-system function. Almost all computer resources are scheduled before use. The CPU is, of course, one of the primary computer resources; thus, its scheduling is central to operating-system design.

CPU-I/O Burst Cycle

The success of CPU scheduling depends on an observed property of processes: process execution consists of a cycle of CPU execution and I/O wait. Processes alternate between these two states. Process execution begins with a CPU burst. That is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on. Eventually, the final CPU burst ends with a system request to terminate execution (Figure 6.1).

[Figure 6.1: CPU burst followed by I/O burst]

CPU Scheduler

Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. The selection process is carried out by the short-term scheduler (or CPU scheduler). The scheduler selects a process from the processes in memory that are ready to execute and allocates the CPU to that process.

Preemptive Scheduling

CPU-scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for example, as the result of an I/O request or an invocation of wait() for the termination of a child process)
2. When a process switches from the running state to the ready state (for example, when an interrupt occurs)
3. When a process switches from the waiting state to the ready state (for example, at completion of I/O)
4. When a process terminates

For situations 1 and 4, there is no choice in terms of scheduling: a new process (if one exists in the ready queue) must be selected for execution. There is a choice, however, for situations 2 and 3.

When scheduling takes place only under circumstances 1 and 4, we say that the scheduling scheme is nonpreemptive or cooperative; otherwise, it is preemptive.

Dispatcher

Another component involved in the CPU-scheduling function is the dispatcher. The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler.

The dispatcher should be as fast as possible, since it is invoked during every process switch. The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency.

Scheduling Criteria

• CPU utilization – keep the CPU as busy as possible
• Throughput – number of processes that complete their execution per time unit
• Turnaround time – amount of time to execute a particular process
• Waiting time – amount of time a process has been waiting in the ready queue
• Response time – amount of time from when a request was submitted until the first response is produced, not the time it takes to output that response (for time-sharing environments)

Scheduling Algorithm Optimization Criteria

• Max CPU utilization
• Max throughput
• Min turnaround time
• Min waiting time
• Min response time
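To make these definitions concrete, here is a minimal Python sketch (not from the slides; the field names and the three-process example are mine) that derives turnaround time and waiting time for each process from its arrival time, CPU-burst length, and completion time:

def per_process_metrics(processes):
    # processes: list of dicts with 'arrival', 'burst', 'completion' times
    metrics = []
    for p in processes:
        turnaround = p["completion"] - p["arrival"]   # submission to completion
        waiting = turnaround - p["burst"]             # time spent in the ready queue
        metrics.append({"turnaround": turnaround, "waiting": waiting})
    return metrics

# Example: a schedule that runs P1, P2, P3 back to back with bursts 24, 3, 3
procs = [
    {"arrival": 0, "burst": 24, "completion": 24},  # P1
    {"arrival": 0, "burst": 3,  "completion": 27},  # P2
    {"arrival": 0, "burst": 3,  "completion": 30},  # P3
]
m = per_process_metrics(procs)
print(sum(x["waiting"] for x in m) / len(m))  # average waiting time: 17.0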

CPU-scheduling algorithms

1- First-Come, First-Served Scheduling (FCFS)
2- Shortest-Job-First Scheduling (SJF)
3- Priority Scheduling
4- Round-Robin Scheduling (RR)
5- Multilevel Queue Scheduling
6- Multilevel Feedback Queue Scheduling

1- First-Come, First-Served (FCFS) Scheduling

Process   Burst Time
P1        24
P2        3
P3        3

• Suppose that the processes arrive in the order: P1, P2, P3.
  The Gantt chart for the schedule is:

  P1 (0-24) | P2 (24-27) | P3 (27-30)

• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17

• Suppose instead that the processes arrive in the order: P2, P3, P1.
  The Gantt chart for the schedule is:

  P2 (0-3) | P3 (3-6) | P1 (6-30)

• Waiting time for P1 = 6; P2 = 0; P3 = 3
• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than the previous case
• Convoy effect - short process behind long process
  ▫ Consider one CPU-bound and many I/O-bound processes
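A small FCFS calculator (an illustrative sketch, not part of the slides) reproduces both averages: 17 for the order P1, P2, P3 and 3 for the order P2, P3, P1.

def fcfs_average_wait(bursts):
    # bursts: CPU-burst lengths listed in arrival order; all arrive at time 0
    clock = 0
    waits = []
    for burst in bursts:
        waits.append(clock)   # each process waits for everything ahead of it
        clock += burst
    return sum(waits) / len(waits)

print(fcfs_average_wait([24, 3, 3]))  # order P1, P2, P3 -> 17.0
print(fcfs_average_wait([3, 3, 24]))  # order P2, P3, P1 -> 3.0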

2- Shortest-Job-First Scheduling (SJF)

Process   Arrival Time   Burst Time
P1        0.0            6
P2        2.0            8
P3        4.0            7
P4        5.0            3

• SJF scheduling chart (the chart treats all four processes as available at time 0):

  P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24)

• Average waiting time = (3 + 16 + 9 + 0)/4 = 7
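A minimal sketch of non-preemptive SJF (assuming, as the chart above does, that all four jobs are available at time 0): sort by predicted burst length, then schedule in that order.

def sjf_average_wait(bursts):
    # bursts: dict name -> predicted CPU-burst length; all available at time 0
    clock = 0
    waits = {}
    for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
        waits[name] = clock
        clock += burst
    return waits, sum(waits.values()) / len(waits)

waits, avg = sjf_average_wait({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
print(waits)  # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(avg)    # 7.0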

• Now we add the concepts of varying arrival times and preemption to the analysis.

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

• Preemptive SJF Gantt chart:

  P1 (0-1) | P2 (1-5) | P4 (5-10) | P1 (10-17) | P3 (17-26)

• Average waiting time = [(10-1) + (1-1) + (17-2) + (5-3)]/4 = 26/4 = 6.5 msec
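The preemptive variant (shortest-remaining-time-first) can be sketched with a tick-by-tick loop (illustrative only; the data layout is mine). It reproduces the chart and the 6.5 msec average above.

# Preemptive SJF (shortest-remaining-time-first), simulated in 1 ms ticks.
jobs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}   # name: (arrival, burst)

remaining = {name: burst for name, (arrival, burst) in jobs.items()}
finish = {}
clock = 0
while remaining:
    # Among arrived, unfinished jobs, run the one with the least remaining time.
    ready = [n for n in remaining if jobs[n][0] <= clock]
    current = min(ready, key=lambda n: remaining[n])
    remaining[current] -= 1
    clock += 1
    if remaining[current] == 0:
        finish[current] = clock
        del remaining[current]

waits = {n: finish[n] - jobs[n][0] - jobs[n][1] for n in jobs}
print(waits)                             # {'P1': 9, 'P2': 0, 'P3': 15, 'P4': 2}
print(sum(waits.values()) / len(waits))  # 6.5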

3- Priority Scheduling

• A priority number (integer) is associated with each process
• The CPU is allocated to the process with the highest priority (smallest integer = highest priority)
  ▫ Preemptive
  ▫ Nonpreemptive
• SJF is priority scheduling where priority is the inverse of the predicted next CPU burst time
• Problem: Starvation – low-priority processes may never execute
• Solution: Aging – as time progresses, increase the priority of the process

Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

• Priority scheduling Gantt chart:

  P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19)

• Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2 msec
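A minimal non-preemptive priority sketch (illustrative only), with all five processes available at time 0 and a smaller number meaning higher priority, matches the chart:

jobs = {"P1": (10, 3), "P2": (1, 1), "P3": (2, 4), "P4": (1, 5), "P5": (5, 2)}  # name: (burst, priority)

clock = 0
waits = {}
for name, (burst, priority) in sorted(jobs.items(), key=lambda kv: kv[1][1]):
    waits[name] = clock   # runs only after every higher-priority job has finished
    clock += burst

print(waits)                             # {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}
print(sum(waits.values()) / len(waits))  # 8.2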

A major problem with priority scheduling algorithms is indefinite blocking, or starvation. A process that is ready to run but waiting for the CPU can be considered blocked. A priority scheduling algorithm can leave some low-priority processes waiting indefinitely.

A solution to the problem of indefinite blockage of low-priority processes is aging. Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time. For example, if priorities range from 127 (low) to 0 (high), we could increase the priority of a waiting process by 1 every 15 minutes. Eventually, even a process with an initial priority of 127 would have the highest priority in the system and would be executed. In fact, it would take no more than 32 hours for a priority-127 process to age to a priority-0 process.
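The 32-hour figure is just the aging rate applied to the full priority range (a quick check using the numbers in the slide):

steps = 127             # priority must climb from 127 to 0
minutes = steps * 15    # one increment every 15 minutes -> 1905 minutes
print(minutes / 60)     # 31.75 hours, i.e. just under 32 hours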

4- Round-Robin Scheduling (RR)

• Each process gets a small unit of CPU time (time quantum q), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
• If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
• A timer interrupts every quantum to schedule the next process.
• Performance:
  ▫ q large → behaves like FIFO
  ▫ q small → q must still be large with respect to the context-switch time, otherwise the overhead is too high

Example with time quantum q = 4:

Process   Burst Time
P1        24
P2        3
P3        3

• The Gantt chart is:

  P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-14) | P1 (14-18) | P1 (18-22) | P1 (22-26) | P1 (26-30)

• Typically, RR gives a higher average turnaround time than SJF, but better response time.
• q should be large compared to the context-switch time.
• q is usually 10 ms to 100 ms; a context switch typically takes less than 10 microseconds.
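A small round-robin simulation (an illustrative sketch; the queue-handling details are mine) with q = 4 reproduces the chart and lets other quantum values be tried:

from collections import deque

def round_robin(bursts, q):
    # bursts: dict name -> burst length; all jobs arrive at time 0
    remaining = dict(bursts)
    queue = deque(bursts)            # ready queue in arrival order
    clock, gantt, finish = 0, [], {}
    while queue:
        name = queue.popleft()
        run = min(q, remaining[name])
        gantt.append((name, clock, clock + run))
        clock += run
        remaining[name] -= run
        if remaining[name] > 0:
            queue.append(name)       # preempted: back to the end of the ready queue
        else:
            finish[name] = clock
    waits = {n: finish[n] - bursts[n] for n in bursts}
    return gantt, sum(waits.values()) / len(waits)

gantt, avg_wait = round_robin({"P1": 24, "P2": 3, "P3": 3}, q=4)
print(gantt)     # [('P1', 0, 4), ('P2', 4, 7), ('P3', 7, 10), ('P1', 10, 14), ...]
print(avg_wait)  # (6 + 4 + 7) / 3 ~= 5.67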

5- Multilevel Queue Scheduling

Another class of scheduling algorithms has been created for situations in which processes are easily classified into different groups. For example, a common division is made between foreground (interactive) processes and background (batch) processes. These two types of processes have different response-time requirements and so may have different scheduling needs. In addition, foreground processes may have priority (externally defined) over background processes.

A multilevel queue scheduling algorithm partitions the ready queue into several separate queues. The processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type. Each queue has its own scheduling algorithm. For example, separate queues might be used for foreground and background processes. The foreground queue might be scheduled by an RR algorithm, while the background queue is scheduled by an FCFS algorithm.
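A minimal sketch of that foreground/background split (illustrative only; the job names and the strict-priority rule between the two queues are assumptions consistent with the description above):

from collections import deque

# Two permanent queues: foreground served RR (q = 4), background served FCFS.
foreground = deque([("F1", 6), ("F2", 3)])   # (name, remaining burst)
background = deque([("B1", 10)])
q = 4
clock = 0

while foreground or background:
    if foreground:                            # foreground has absolute priority
        name, rem = foreground.popleft()
        run = min(q, rem)
        print(f"t={clock:2d}: {name} runs {run} ms (foreground, RR)")
        clock += run
        if rem - run > 0:
            foreground.append((name, rem - run))
    else:                                     # background runs only when foreground is empty
        name, rem = background.popleft()
        print(f"t={clock:2d}: {name} runs {rem} ms (background, FCFS)")
        clock += rem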

6- Multilevel Feedback Queue Scheduling

• A process can move between the various queues; aging can be implemented this way.
• A multilevel-feedback-queue scheduler is defined by the following parameters:
  ▫ number of queues
  ▫ scheduling algorithm for each queue
  ▫ method used to determine when to upgrade a process
  ▫ method used to determine when to demote a process
  ▫ method used to determine which queue a process will enter when that process needs service

Example:
• Three queues:
  ▫ Q0 – RR with time quantum 8 milliseconds
  ▫ Q1 – RR with time quantum 16 milliseconds
  ▫ Q2 – FCFS
• Scheduling:
  ▫ A new job enters queue Q0, which is served FCFS
    - When it gains the CPU, the job receives 8 milliseconds
    - If it does not finish in 8 milliseconds, the job is moved to queue Q1
  ▫ At Q1 the job is again served FCFS and receives 16 additional milliseconds
    - If it still does not complete, it is preempted and moved to queue Q2
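A compact sketch of this three-queue policy (illustrative; all jobs are assumed to arrive together at time 0, and the job names are mine):

from collections import deque

def mlfq(bursts):
    # Q0: RR with q = 8, Q1: RR with q = 16, Q2: FCFS (run to completion).
    queues = [deque(bursts.items()), deque(), deque()]   # entries are (name, remaining)
    quanta = [8, 16, None]
    clock = 0
    while any(queues):
        level = next(i for i, qu in enumerate(queues) if qu)   # highest non-empty queue
        name, rem = queues[level].popleft()
        run = rem if quanta[level] is None else min(quanta[level], rem)
        print(f"t={clock:3d}: {name} runs {run} ms in Q{level}")
        clock += run
        if rem - run > 0:
            queues[min(level + 1, 2)].append((name, rem - run))   # demote one level
    return clock

mlfq({"A": 5, "B": 30, "C": 12})
# A finishes in Q0; C uses its 8 ms in Q0 and finishes in Q1; B is demoted to Q2.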
Algorithm Evaluation

• How do we select a CPU-scheduling algorithm for an OS?
• Determine the criteria, then evaluate the algorithms
• Deterministic modeling
  ▫ A type of analytic evaluation
  ▫ Takes a particular predetermined workload and defines the performance of each algorithm for that workload
• Consider 5 processes arriving at time 0:
Deterministic Evaluation

• For each algorithm, calculate the minimum average waiting time
• Simple and fast, but requires exact numbers for input and applies only to those inputs
• FCFS is 28 ms
• Non-preemptive SJF is 13 ms
• RR is 23 ms
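The workload table itself did not survive extraction here, so the sketch below assumes the five-process workload commonly used with this example (burst times 10, 29, 3, 7, 12, all arriving at time 0, RR quantum of 10); under that assumption it reproduces the 28 / 13 / 23 ms figures.

from collections import deque

# Assumed workload (not present in the extracted slide): see the note above.
bursts = {"P1": 10, "P2": 29, "P3": 3, "P4": 7, "P5": 12}

def avg_wait_in_order(order):
    clock, total = 0, 0
    for name in order:
        total += clock
        clock += bursts[name]
    return total / len(bursts)

def avg_wait_rr(q):
    remaining, queue, clock, finish = dict(bursts), deque(bursts), 0, {}
    while queue:
        name = queue.popleft()
        run = min(q, remaining[name])
        clock += run
        remaining[name] -= run
        if remaining[name]:
            queue.append(name)
        else:
            finish[name] = clock
    return sum(finish[n] - bursts[n] for n in bursts) / len(bursts)

print(avg_wait_in_order(bursts))                          # FCFS: 28.0
print(avg_wait_in_order(sorted(bursts, key=bursts.get)))  # non-preemptive SJF: 13.0
print(avg_wait_rr(q=10))                                  # RR with q = 10: 23.0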
Simulations

• Queueing models are limited
• Simulations are more accurate
  ▫ Programmed model of the computer system
  ▫ The clock is a variable
  ▫ Gather statistics indicating algorithm performance
  ▫ Data to drive the simulation is gathered via:
    - a random-number generator according to probabilities
    - distributions defined mathematically or empirically
    - trace tapes recording sequences of real events in real systems
Implementation

• Even simulations have limited accuracy
• Just implement the new scheduler and test it in real systems
  ▫ High cost, high risk
  ▫ Environments vary
• Most flexible schedulers can be modified per-site or per-system
• Or provide APIs to modify priorities
• But again, environments vary
Any Questions?
