
CPU Scheduling

References:

  1. Abraham Silberschatz, Greg Gagne, and Peter Baer Galvin, "Operating System Concepts, Ninth Edition", Chapter 6

6.1 Basic Concepts

6.1.1 CPU-I/O Burst Cycle


Figure 6.1 - Alternating sequence of CPU and I/O bursts.


Figure 6.2 - Histogram of CPU-burst durations.

6.1.2 CPU Scheduler

6.1.3 Preemptive Scheduling

6.1.4 Dispatcher

6.2 Scheduling Criteria

6.3 Scheduling Algorithms

The following subsections explain several common scheduling strategies, looking at only a single CPU burst each for a small number of processes. Real systems, of course, have to deal with many more processes running simultaneously, each executing its own cycle of CPU and I/O bursts.

6.3.1 First-Come First-Serve Scheduling, FCFS

Process    Burst Time
P1         24
P2         3
P3         3
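
If the three processes all arrive at time 0 in the order P1, P2, P3, FCFS runs them back to back: P1 from 0 to 24, P2 from 24 to 27, and P3 from 27 to 30. The waiting times are 0, 24, and 27 ms, for an average of ( 0 + 24 + 27 ) / 3 = 17 ms. Had the arrival order been P2, P3, P1 instead, the average waiting time would drop to ( 0 + 3 + 6 ) / 3 = 3 ms, which shows how sensitive FCFS is to the order in which long and short bursts arrive.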

6.3.2 Shortest-Job-First Scheduling, SJF

Process    Burst Time
P1         6
P2         8
P3         7
P4         3
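
With all four processes available at time 0, SJF runs them in order of increasing burst length: P4 ( 3 ), P1 ( 6 ), P3 ( 7 ), P2 ( 8 ). The waiting times are 3, 16, 9, and 0 ms respectively, for an average of ( 3 + 16 + 9 + 0 ) / 4 = 7 ms; running the same bursts in FCFS order P1 through P4 would average ( 0 + 6 + 14 + 21 ) / 4 = 10.25 ms.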


Figure 6.3 - Prediction of the length of the next CPU burst.

Process    Arrival Time    Burst Time
P1         0               8
P2         1               4
P3         2               9
P4         3               5
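
Treating this as the preemptive ( shortest-remaining-time-first ) variant: P1 starts at time 0 but is preempted at time 1, when P2 arrives needing only 4 ms against P1's remaining 7. P2 runs from 1 to 5, P4 from 5 to 10, P1 resumes from 10 to 17, and P3 runs from 17 to 26. The waiting times are 9, 0, 15, and 2 ms, for an average of 26 / 4 = 6.5 ms.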

6.3.3 Priority Scheduling

Process    Burst Time    Priority
P1         10            3
P2         1             1
P3         2             4
P4         1             5
P5         5             2
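
Assuming that a lower number means a higher priority and that all five processes arrive at time 0, the run order is P2 ( 0 to 1 ), P5 ( 1 to 6 ), P1 ( 6 to 16 ), P3 ( 16 to 18 ), and P4 ( 18 to 19 ). The waiting times are 6, 0, 16, 18, and 1 ms, for an average of 41 / 5 = 8.2 ms.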

6.3.4 Round Robin Scheduling

Process    Burst Time
P1         24
P2         3
P3         3
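
Assuming a time quantum of 4 ms, the schedule is P1 ( 0 to 4 ), P2 ( 4 to 7 ), P3 ( 7 to 10 ), and then P1 alone from 10 to 30, since the other two processes finish within a single quantum. The waiting times are 6, 4, and 7 ms, for an average of 17 / 3 ≈ 5.66 ms.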


Figure 6.4 - The way in which a smaller time quantum increases context switches.


Figure 6.5 - The way in which turnaround time varies with the time quantum.

6.3.5 Multilevel Queue Scheduling


Figure 6.6 - Multilevel queue scheduling

6.3.6 Multilevel Feedback-Queue Scheduling


Figure 6.7 - Multilevel feedback queues.

6.4 Thread Scheduling

6.4.1 Contention Scope

6.4.2 Pthread Scheduling


Figure 6.8 - Pthread scheduling API.
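
As a hedged sketch of the kind of code Figure 6.8 illustrates, the fragment below uses the standard pthread_attr_getscope() and pthread_attr_setscope() calls to query and then set the contention scope; thread bodies and error handling are kept minimal, and some systems ( Linux, for example ) accept only PTHREAD_SCOPE_SYSTEM.

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 5

/* Each thread simply exits; real work would go here. */
static void *runner(void *param) {
    (void)param;
    pthread_exit(0);
}

int main(void) {
    pthread_t tid[NUM_THREADS];
    pthread_attr_t attr;
    int scope;

    pthread_attr_init(&attr);

    /* Report the default contention scope: process (PCS) or system (SCS). */
    if (pthread_attr_getscope(&attr, &scope) != 0)
        fprintf(stderr, "Unable to get scheduling scope\n");
    else if (scope == PTHREAD_SCOPE_PROCESS)
        printf("default scope: PTHREAD_SCOPE_PROCESS\n");
    else
        printf("default scope: PTHREAD_SCOPE_SYSTEM\n");

    /* Request system contention scope for the threads created below. */
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&tid[i], &attr, runner, NULL);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(tid[i], NULL);

    return 0;
}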

6.5 Multiple-Processor Scheduling

6.5.1 Approaches to Multiple-Processor Scheduling

6.5.2 Processor Affinity


Figure 6.9 - NUMA and CPU scheduling.

6.5.3 Load Balancing

6.5.4 Multicore Processors


Figure 6.10 - Memory stall.


Figure 6.11 - Multithreaded multicore system.

/* Old 5.5.5 Virtualization and Scheduling ( Optional, Omitted from 9th edition )

*/

6.6 Real-Time CPU Scheduling

Real-time systems are those in which the time at which tasks complete is crucial to their performance.

6.6.1 Minimizing Latency


Figure 6.12 - Event latency.


Figure 6.13 - Interrupt latency.


Figure 6.14 - Dispatch latency.

6.6.2 Priority-Based Scheduling


Figure 6.15 - Periodic task.

 

6.6.3 Rate-Monotonic Scheduling


Figure 6.16 - Scheduling of tasks when P2 has a higher priority than P1.


Figure 6.17 - Rate-monotonic scheduling.


Figure 6.18 - Missing deadlines with rate-monotonic scheduling.

6.6.4 Earliest-Deadline-First Scheduling


Figure 6.19 - Earliest-deadline-first scheduling.

6.6.5 Proportional Share Scheduling

6.6.6 POSIX Real-Time Scheduling


Figure 6.20 - POSIX real-time scheduling API.
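
As a hedged sketch of the API Figure 6.20 illustrates, the fragment below uses pthread_attr_getschedpolicy() and pthread_attr_setschedpolicy() to report the default policy and then request SCHED_FIFO. Note that actually running with a real-time policy typically requires appropriate privileges, and on some systems the inherit-scheduler attribute must also be set explicitly.

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define NUM_THREADS 5

static void *runner(void *param) {
    (void)param;
    /* real-time work would go here */
    pthread_exit(0);
}

int main(void) {
    pthread_t tid[NUM_THREADS];
    pthread_attr_t attr;
    int policy;

    pthread_attr_init(&attr);

    /* Report the default scheduling policy for new threads. */
    if (pthread_attr_getschedpolicy(&attr, &policy) != 0)
        fprintf(stderr, "Unable to get policy\n");
    else if (policy == SCHED_OTHER)
        printf("default policy: SCHED_OTHER\n");
    else if (policy == SCHED_RR)
        printf("default policy: SCHED_RR\n");
    else if (policy == SCHED_FIFO)
        printf("default policy: SCHED_FIFO\n");

    /* Ask for FIFO real-time scheduling for the threads created below. */
    if (pthread_attr_setschedpolicy(&attr, SCHED_FIFO) != 0)
        fprintf(stderr, "Unable to set policy to SCHED_FIFO\n");

    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&tid[i], &attr, runner, NULL);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(tid[i], NULL);

    return 0;
}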

6.7 Operating System Examples ( Optional )

6.7.1 Example: Linux Scheduling ( was 5.6.3 )

CFS ( Completely Fair Scheduler ) Performance

The Linux CFS scheduler provides an efficient algorithm for selecting which
task to run next. Each runnable task is placed in a red-black tree—a balanced
binary search tree whose key is based on the value of vruntime. This tree is
shown below:

When a task becomes runnable, it is added to the tree. If a task on the
tree is not runnable ( for example, if it is blocked while waiting for I/O ), it is
removed. Generally speaking, tasks that have been given less processing time
( smaller values of vruntime ) are toward the left side of the tree, and tasks
that have been given more processing time are on the right side. According
to the properties of a binary search tree, the leftmost node has the smallest
key value, which for the sake of the CFS scheduler means that it is the task
with the highest priority. Because the red-black tree is balanced, navigating
it to discover the leftmost node requires O(lg N) operations ( where N
is the number of nodes in the tree ). However, for efficiency reasons, the
Linux scheduler caches this value in the variable rb_leftmost, and thus
determining which task to run next requires only retrieving the cached value.
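
As a toy illustration of the "pick the leftmost node" idea only ( not the kernel's actual data structures, which use a red-black tree plus the cached rb_leftmost pointer ), the following C sketch selects the runnable task with the smallest vruntime from a small, made-up task list.

#include <stdio.h>

struct task {
    const char   *name;
    unsigned long vruntime;   /* less CPU time received => smaller value */
    int           runnable;
};

/* Return the runnable task with the smallest vruntime -- the analogue of
 * the leftmost node in CFS's red-black tree. */
static struct task *pick_next_task(struct task tasks[], int n) {
    struct task *leftmost = NULL;
    for (int i = 0; i < n; i++) {
        if (!tasks[i].runnable)
            continue;
        if (leftmost == NULL || tasks[i].vruntime < leftmost->vruntime)
            leftmost = &tasks[i];
    }
    return leftmost;
}

int main(void) {
    struct task tasks[] = {
        { "T1", 120, 1 },
        { "T2",  45, 1 },   /* least CPU time so far -> scheduled next */
        { "T3",  80, 0 },   /* blocked on I/O, so not in the tree */
    };
    struct task *next = pick_next_task(tasks, 3);
    if (next)
        printf("next task: %s (vruntime = %lu)\n", next->name, next->vruntime);
    return 0;
}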

New sidebar in ninth edition


Figure 6.21 - Scheduling priorities on a Linux system.


Replaced by Figure 6.21 in the ninth edition; this was Figure 5.15 in the eighth edition.

/* The Following material was omitted from the ninth edition:

    • A runnable task is considered eligible for execution as long as it has not consumed all the time available in its time slice. Those tasks are stored in an active array, indexed according to priority.
    • When a process consumes its time slice, it is moved to an expired array. The task's priority may be reassigned as part of the transfer.
    • When the active array becomes empty, the two arrays are swapped.
    • These arrays are stored in runqueue structures. On multiprocessor machines, each processor has its own scheduler with its own runqueue.


Omitted from ninth edition. Figure 5.16 in eighth edition.

*/

6.7.2 Example: Windows XP Scheduling ( was 5.6.2 )


Figure 6.22 - Windows thread priorities.

6.7.3 Example: Solaris Scheduling


Figure 6.23 - Solaris dispatch table for time-sharing and interactive threads.


Figure 6.24 - Solaris scheduling.

6.8 Algorithm Evaluation

6.8.1 Deterministic Modeling

Process    Burst Time
P1         10
P2         29
P3         3
P4         7
P5         12

 

FCFS:

Non-preemptive SJF:

Round Robin:
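
For this workload ( all five processes assumed to arrive at time 0, and a round-robin quantum of 10 ms assumed ), the three policies give average waiting times of 28 ms, 13 ms, and 23 ms respectively. The short C sketch below, an illustration rather than anything from the text, computes those averages directly.

#include <stdio.h>

#define NPROC 5

/* FCFS: each process waits for the total burst time of those ahead of it. */
static double avg_wait_fcfs(const int burst[], int n) {
    int t = 0, total = 0;
    for (int i = 0; i < n; i++) { total += t; t += burst[i]; }
    return (double)total / n;
}

/* Non-preemptive SJF: sort by burst length, then accumulate as in FCFS. */
static double avg_wait_sjf(const int burst[], int n) {
    int b[NPROC], t = 0, total = 0;
    for (int i = 0; i < n; i++) b[i] = burst[i];
    for (int i = 0; i < n; i++)                    /* simple selection sort */
        for (int j = i + 1; j < n; j++)
            if (b[j] < b[i]) { int tmp = b[i]; b[i] = b[j]; b[j] = tmp; }
    for (int i = 0; i < n; i++) { total += t; t += b[i]; }
    return (double)total / n;
}

/* Round robin: cycle through the processes, giving each up to one quantum.
 * Waiting time = completion time - burst time (all arrivals at time 0). */
static double avg_wait_rr(const int burst[], int n, int quantum) {
    int rem[NPROC], finish[NPROC], t = 0, done = 0, total = 0;
    for (int i = 0; i < n; i++) rem[i] = burst[i];
    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (rem[i] == 0) continue;
            int slice = rem[i] < quantum ? rem[i] : quantum;
            t += slice;
            rem[i] -= slice;
            if (rem[i] == 0) { finish[i] = t; done++; }
        }
    }
    for (int i = 0; i < n; i++) total += finish[i] - burst[i];
    return (double)total / n;
}

int main(void) {
    int burst[NPROC] = { 10, 29, 3, 7, 12 };            /* P1 .. P5 */
    printf("FCFS  average wait: %.2f ms\n", avg_wait_fcfs(burst, NPROC));
    printf("SJF   average wait: %.2f ms\n", avg_wait_sjf(burst, NPROC));
    printf("RR-10 average wait: %.2f ms\n", avg_wait_rr(burst, NPROC, 10));
    return 0;
}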

6.8.2 Queuing Models

Little's formula: n = λ × W, where n is the average length of the queue, λ ( lambda ) is the average arrival rate of new jobs into the queue, and W is the average time a job spends waiting in the queue.
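
For example, if jobs arrive at an average rate of 7 per second and the queue normally holds 14 jobs, then the average waiting time per job is W = n / λ = 14 / 7 = 2 seconds.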

6.8.3 Simulations


Figure 6.25 - Evaluation of CPU schedulers by simulation.

6.8.4 Implementation

6.9 Summary


Material Omitted from the Eighth Edition:

Was 5.4.4 Symmetric Multithreading ( Omitted from 8th edition )


omitted