Chapter 6: CPU Scheduling

• Basic Concepts
• Scheduling Criteria
• Scheduling Algorithms
• Thread Scheduling
• Multiple-Processor Scheduling
• Real-Time CPU Scheduling
• Operating Systems Examples
• Algorithm Evaluation

Objectives

• To introduce CPU scheduling, which is the basis for multiprogrammed operating systems
• To describe various CPU-scheduling algorithms
• To discuss evaluation criteria for selecting a CPU-scheduling algorithm for a particular system
• To examine the scheduling algorithms of several operating systems

Basic Concepts

• Maximum CPU utilization obtained with multiprogramming
• CPU–I/O Burst Cycle – process execution consists of a cycle of CPU execution and I/O wait
• CPU burst followed by I/O burst
• CPU burst distribution is of main concern

Histogram of CPU-burst Times

[figure: frequency distribution of CPU-burst durations – typically a large number of short bursts and a small number of long bursts]
CPU Scheduler

• Short-term scheduler selects from among the processes in the ready queue, and allocates the CPU to one of them
• Queue may be ordered in various ways
• CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
• Scheduling under 1 and 4 is nonpreemptive
• All other scheduling is preemptive
• Consider access to shared data
• Consider preemption while in kernel mode
• Consider interrupts occurring during crucial OS activities

Dispatcher

• Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
• switching context
• switching to user mode
• jumping to the proper location in the user program to restart that program
• Dispatch latency – time it takes for the dispatcher to stop one process and start another running

Scheduling Criteria

• CPU utilization – keep the CPU as busy as possible
• Throughput – # of processes that complete their execution per time unit
• Turnaround time – amount of time to execute a particular process
• Waiting time – amount of time a process has been waiting in the ready queue
• Response time – amount of time it takes from when a request was submitted until the first response is produced, not output (for time-sharing environment)

Scheduling Algorithm Optimization Criteria

• Max CPU utilization
• Max throughput
• Min turnaround time
• Min waiting time
• Min response time

First-Come, First-Served (FCFS) Scheduling

Process    Burst Time
P1         24
P2         3
P3         3

• Suppose that the processes arrive in the order: P1, P2, P3
  The Gantt chart for the schedule is:

  | P1 | P2 | P3 |
  0    24   27   30

• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17

FCFS Scheduling (Cont.)

• Suppose that the processes arrive in the order: P2, P3, P1
• The Gantt chart for the schedule is:

  | P2 | P3 | P1 |
  0    3    6    30

• Waiting time for P1 = 6; P2 = 0; P3 = 3
• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than previous case
• Convoy effect – short processes stuck waiting behind a long process
• Consider one CPU-bound and many I/O-bound processes (a minimal FCFS sketch follows)
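A minimal C sketch of the FCFS arithmetic in the two examples above (our illustration, not from the slides): in arrival order, each process waits for the sum of the bursts queued ahead of it.

#include <stdio.h>

/* FCFS sketch: processes run strictly in arrival order, so process i
   waits for the total burst time of everything queued ahead of it. */
int main(void) {
  int burst[] = {24, 3, 3};    /* P1, P2, P3 in arrival order */
  int n = sizeof burst / sizeof burst[0];
  int wait = 0, total_wait = 0;

  for (int i = 0; i < n; i++) {
    total_wait += wait;        /* waiting time of process i */
    wait += burst[i];          /* everyone behind waits this much longer */
  }
  printf("average waiting time = %.2f\n", (double)total_wait / n); /* 17.00 */
  return 0;
}

Reordering the array as {3, 3, 24} reproduces the second ordering's average of 3.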

Shortest-Job-First (SJF) Scheduling

• Associate with each process the length of its next CPU burst
• Use these lengths to schedule the process with the shortest time
• SJF is optimal – gives minimum average waiting time for a given set of processes
• The difficulty is knowing the length of the next CPU request
• Could ask the user

Example of SJF

Process    Burst Time
P1         6
P2         8
P3         7
P4         3

(all four processes are ready at time 0)

• SJF scheduling chart

  | P4 | P1 | P3 | P2 |
  0    3    9    16   24

• Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
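Since all four jobs are ready at time 0, nonpreemptive SJF amounts to sorting by burst length. A small C sketch (ours, not from the slides):

#include <stdio.h>
#include <stdlib.h>

static int by_burst(const void *a, const void *b) {
  return *(const int *)a - *(const int *)b;     /* ascending burst length */
}

/* Nonpreemptive SJF sketch for jobs that are all ready at time 0. */
int main(void) {
  int burst[] = {6, 8, 7, 3};                   /* P1..P4 from the slide */
  int n = sizeof burst / sizeof burst[0];
  qsort(burst, n, sizeof burst[0], by_burst);

  int wait = 0, total_wait = 0;                 /* same arithmetic as FCFS, */
  for (int i = 0; i < n; i++) {                 /* but on the sorted order  */
    total_wait += wait;
    wait += burst[i];
  }
  printf("average waiting time = %.2f\n", (double)total_wait / n); /* 7.00 */
  return 0;
}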

Determining Length of Next CPU Burst

• Can only estimate the length – should be similar to the previous one
• Then pick process with shortest predicted next CPU burst

• Can be done by using the length of previous CPU bursts, using exponential averaging

1. tₙ = actual length of the nth CPU burst
2. τₙ₊₁ = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. Define: τₙ₊₁ = α tₙ + (1 − α) τₙ
• Commonly, α set to ½
• Preemptive version called shortest-remaining-time-first
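The recurrence translates directly into code; a one-line C helper (ours; the parameter names are our choice):

/* tau_next = alpha * t_n + (1 - alpha) * tau_n
   alpha near 1 trusts the most recent burst; alpha near 0 trusts history. */
double predict_next(double alpha, double t_n, double tau_n) {
  return alpha * t_n + (1.0 - alpha) * tau_n;
}

For example, predict_next(0.5, 6.0, 10.0) yields 8.0.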

Prediction of the Length of the Next CPU Burst

[figure: actual CPU bursts tᵢ and the exponentially averaged predictions τᵢ plotted over time]
Examples of Exponential Averaging
• α = 0
  • τₙ₊₁ = τₙ
  • Recent history does not count
• α = 1
  • τₙ₊₁ = tₙ
  • Only the actual last CPU burst counts
• If we expand the formula, we get:
  τₙ₊₁ = α tₙ + (1 − α) α tₙ₋₁ + … + (1 − α)ʲ α tₙ₋ⱼ + … + (1 − α)ⁿ⁺¹ τ₀
• Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor
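A quick worked instance of the recurrence (the numbers are ours): with α = ½ and an initial guess τ₀ = 10, observed bursts t₀ = 6 and t₁ = 4 give τ₁ = ½·6 + ½·10 = 8 and τ₂ = ½·4 + ½·8 = 6.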

Example of Shortest-remaining-time-first

• Now we add the concepts of varying arrival times and preemption to the analysis

Process    Arrival Time    Burst Time
P1         0               8
P2         1               4
P3         2               9
P4         3               5

• Preemptive SJF Gantt chart:

  | P1 | P2 | P4 | P1 | P3 |
  0    1    5    10   17   26

• Average waiting time = [(10−1) + (1−1) + (17−2) + (5−3)]/4 = 26/4 = 6.5 msec
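A compact C simulation of this example (our sketch, not from the slides): advance time one unit per step, always running the arrived process with the least remaining work.

#include <stdio.h>

/* SRTF sketch: one time unit per step; ties go to the lowest index. */
int main(void) {
  int arrival[] = {0, 1, 2, 3};
  int burst[]   = {8, 4, 9, 5};
  int rem[]     = {8, 4, 9, 5};
  int finish[4], n = 4, done = 0, time = 0;

  while (done < n) {
    int pick = -1;
    for (int i = 0; i < n; i++)            /* shortest remaining among arrived */
      if (arrival[i] <= time && rem[i] > 0 &&
          (pick < 0 || rem[i] < rem[pick]))
        pick = i;
    time++;                                /* one clock tick */
    if (pick < 0) continue;                /* CPU idle (never happens here) */
    if (--rem[pick] == 0) { finish[pick] = time; done++; }
  }

  double total = 0;                        /* waiting = turnaround - burst */
  for (int i = 0; i < n; i++)
    total += finish[i] - arrival[i] - burst[i];
  printf("average waiting time = %.2f msec\n", total / n);   /* 6.50 */
  return 0;
}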

Priority Scheduling

• A priority number (integer) is associated with each process

• The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority)
• Preemptive
• Nonpreemptive

• SJF is priority scheduling where priority is the inverse of predicted next CPU burst time

• Problem ≡ Starvation – low-priority processes may never execute

• Solution ≡ Aging – as time progresses, increase the priority of the process (a sketch follows)
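One hedged C sketch of aging (the struct and field names are hypothetical, not from the slides): a periodic sweep raises the priority of everything still waiting.

#include <stddef.h>

struct proc { int pid; int priority; };  /* smaller number = higher priority */

/* Hypothetical aging sweep: called periodically (say, once a second), so a
   low-priority process that keeps waiting eventually reaches priority 0. */
void age_ready_queue(struct proc ready[], size_t n) {
  for (size_t i = 0; i < n; i++)
    if (ready[i].priority > 0)
      ready[i].priority--;
}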

Example of Priority Scheduling

Process    Burst Time    Priority
P1         10            3
P2         1             1
P3         2             4
P4         1             5
P5         5             2

• Priority scheduling Gantt chart:

  | P2 | P5 | P1 | P3 | P4 |
  0    1    6    16   18   19

• Average waiting time = (0 + 1 + 6 + 16 + 18)/5 = 8.2 msec

Round Robin (RR)

• Each process gets a small unit of CPU time (time quantum q), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
• If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n − 1)q time units.
• Timer interrupts every quantum to schedule next process
• Performance
  • q large ⇒ behaves like FIFO
  • q small ⇒ context-switch overhead dominates; q must be large with respect to context-switch time

Example of RR with Time Quantum = 4

Process    Burst Time
P1         24
P2         3
P3         3

• The Gantt chart is:

  | P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
  0    4    7    10   14   18   22   26   30

• Typically, higher average turnaround than SJF, but better response (a simulation sketch follows)
• q should be large compared to context-switch time
• q usually 10 ms to 100 ms; context switch < 10 μsec
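A small C simulation of this example (ours, not from the slides). Because all three jobs are ready at time 0 and nothing arrives later, cycling over a fixed array stands in for the FIFO ready queue; context switches are treated as free.

#include <stdio.h>

int main(void) {
  int burst[] = {24, 3, 3};                 /* P1, P2, P3 */
  int rem[]   = {24, 3, 3};
  int finish[3], n = 3, q = 4, time = 0, left = 3;

  while (left > 0)
    for (int i = 0; i < n; i++) {           /* round-robin over the jobs */
      if (rem[i] == 0) continue;
      int slice = rem[i] < q ? rem[i] : q;
      time += slice;
      rem[i] -= slice;
      if (rem[i] == 0) { finish[i] = time; left--; }
    }

  for (int i = 0; i < n; i++)               /* waiting = turnaround - burst */
    printf("P%d waits %d\n", i + 1, finish[i] - burst[i]);
  return 0;                                 /* prints 6, 4, 7 */
}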

Time Quantum and Context Switch Time

[figure: a single 10-unit burst incurs 0, 1, or 9 context switches as the quantum shrinks from 12 to 6 to 1]
Turnaround Time Varies With The Time Quantum

[figure: average turnaround time as a function of the time quantum]

• Rule of thumb: 80% of CPU bursts should be shorter than q
Multilevel Queue

• Ready queue is partitioned into separate queues, e.g.:
  • foreground (interactive)
  • background (batch)
• Processes remain permanently in a given queue
• Each queue has its own scheduling algorithm:
  • foreground – RR
  • background – FCFS
• Scheduling must also be done between the queues (a sketch follows):
  • Fixed priority scheduling (i.e., serve all from foreground, then from background) – possibility of starvation
  • Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR, 20% to background in FCFS
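A C sketch of the fixed-priority option (the queue type and helpers are hypothetical, not a real API): the background queue is consulted only when the foreground queue is empty, which is exactly where the starvation risk comes from.

struct proc;                                 /* opaque process descriptor */
struct queue;                                /* hypothetical FIFO queue */
extern int empty(struct queue *q);           /* hypothetical helpers */
extern struct proc *dequeue(struct queue *q);

struct proc *pick_next(struct queue *foreground, struct queue *background) {
  if (!empty(foreground))
    return dequeue(foreground);              /* foreground (RR) first, always */
  if (!empty(background))
    return dequeue(background);              /* background (FCFS) only if idle */
  return 0;                                  /* nothing ready: CPU idles */
}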

Multilevel Queue Scheduling

[figure: queues ordered by priority, from system processes at the top down to batch and student processes]
Multilevel Feedback Queue

• A process can move between the various queues; aging can be implemented this way
• Multilevel-feedback-queue scheduler defined by the following parameters:
• number of queues
• scheduling algorithms for each queue
• method used to determine when to upgrade a process
• method used to determine when to demote a process
• method used to determine which queue a process will enter when that process needs service

Example of Multilevel Feedback Queue

• Three queues:
• Q0 – RR with time quantum 8 milliseconds
• Q1 – RR time quantum 16 milliseconds
• Q2 – FCFS

• Scheduling
• A new job enters queue Q0 which is served FCFS
• When it gains CPU, job receives 8 milliseconds
• If it does not finish in 8 milliseconds, job is moved to queue Q1
• At Q1 job is again served FCFS and receives 16 additional milliseconds
• If it still does not complete, it is preempted and moved to queue Q2
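The demotion rule above, as a C sketch (the fields and the requeue helper are hypothetical): a job that exhausts its quantum drops one level, and Q2 is the final stop.

struct proc { int pid; int level; };         /* level: 0, 1, or 2 */
extern void requeue(struct proc *p);         /* hypothetical: back on queue[level] */

/* Called when a job uses up its whole quantum without finishing. */
void on_quantum_expired(struct proc *p) {
  if (p->level < 2)
    p->level++;                              /* Q0 -> Q1 -> Q2 (FCFS) */
  requeue(p);
}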

Thread Scheduling

• Distinction between user-level and kernel-level threads
• When threads supported, threads scheduled, not processes
• Many-to-one and many-to-many models: thread library schedules user-level threads to run on an LWP
  • Known as process-contention scope (PCS) since scheduling competition is within the process
  • Typically done via priority set by programmer
• Kernel thread scheduled onto available CPU is system-contention scope (SCS) – competition among all threads in the system

Pthread Scheduling

• API allows specifying either PCS or SCS during thread creation
  • PTHREAD_SCOPE_PROCESS schedules threads using PCS scheduling
  • PTHREAD_SCOPE_SYSTEM schedules threads using SCS scheduling
• Can be limited by OS – Linux and Mac OS X only allow PTHREAD_SCOPE_SYSTEM

Pthread Scheduling API
#include <pthread.h>
#include <stdio.h>
#define NUM_THREADS 5

void *runner(void *param);  /* defined after main, so declare it here */

int main(int argc, char *argv[]) {
  int i, scope;
  pthread_t tid[NUM_THREADS];
  pthread_attr_t attr;
  /* get the default attributes */
  pthread_attr_init(&attr);
  /* first inquire on the current scope */
  if (pthread_attr_getscope(&attr, &scope) != 0)
    fprintf(stderr, "Unable to get scheduling scope\n");
  else {
    if (scope == PTHREAD_SCOPE_PROCESS)
      printf("PTHREAD_SCOPE_PROCESS\n");
    else if (scope == PTHREAD_SCOPE_SYSTEM)
      printf("PTHREAD_SCOPE_SYSTEM\n");
    else
      fprintf(stderr, "Illegal scope value.\n");
  }

Pthread Scheduling API
  /* set the scheduling algorithm to PCS or SCS */
  pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
  /* create the threads */
  for (i = 0; i < NUM_THREADS; i++)
    pthread_create(&tid[i], &attr, runner, NULL);
  /* now join on each thread */
  for (i = 0; i < NUM_THREADS; i++)
    pthread_join(tid[i], NULL);
}

/* Each thread will begin control in this function */
void *runner(void *param)
{
  /* do some work ... */
  pthread_exit(0);
}
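To try the example, link the pthread library, e.g. gcc prog.c -pthread (the file name is ours).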

Multiple-Processor Scheduling

• CPU scheduling more complex when multiple CPUs are available
• Homogeneous processors within a multiprocessor
• Asymmetric multiprocessing – only one processor accesses the system data structures, alleviating the need for data sharing
• Symmetric multiprocessing (SMP) – each processor is self-scheduling; all processes in a common ready queue, or each processor has its own private queue of ready processes
  • Currently, most common
• Processor affinity – process has affinity for the processor on which it is currently running (a Linux example follows)
  • soft affinity
  • hard affinity
  • Variations including processor sets
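As one concrete illustration of hard affinity, Linux exposes sched_setaffinity(); the sketch below (ours, not from the slides) pins the calling process to CPU 0.

#define _GNU_SOURCE                      /* for CPU_ZERO/CPU_SET on glibc */
#include <sched.h>
#include <stdio.h>

int main(void) {
  cpu_set_t set;
  CPU_ZERO(&set);
  CPU_SET(0, &set);                      /* allow CPU 0 only */

  /* pid 0 means "the calling process" */
  if (sched_setaffinity(0, sizeof set, &set) != 0) {
    perror("sched_setaffinity");
    return 1;
  }
  printf("pinned to CPU 0\n");
  return 0;
}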

NUMA and CPU Scheduling

[figure: a CPU has fast access to its local memory and slower access to memory on other NUMA nodes]

Note that memory-placement algorithms can also consider affinity
Multiple-Processor Scheduling – Load Balancing

• If SMP, need to keep all CPUs loaded for efficiency
• Load balancing attempts to keep workload evenly distributed
• Push migration – a periodic task checks the load on each processor and, if it finds an imbalance, pushes tasks from overloaded CPUs to less-busy ones
• Pull migration – an idle processor pulls a waiting task from a busy processor

Multicore Processors

• Recent trend to place multiple processor cores on the same physical chip
• Faster and consumes less power
• Multiple threads per core also growing
  • Takes advantage of memory stalls: the core makes progress on another thread while a memory retrieval completes

Multithreaded Multicore System

[figure: two hardware threads on one core interleaving compute cycles with memory-stall cycles]
