CH 5. CPU Scheduling
CPU Scheduling
• Scheduling is the task of selecting a process from
the ready queue and letting it run on the CPU
– Done by a scheduler
• Types of Scheduling
– Non-preemptive scheduler
• Process remains scheduled until voluntarily relinquishes
CPU
– Preemptive scheduler
• Process may be descheduled at any time
Scheduling Criteria
• CPU utilization – keep the CPU as busy as possible
• Throughput – # of processes that complete their
execution per time unit
• Turnaround time – The interval from the time of
submission of a process to the time of completion.
– Sum of the periods spent waiting to get into memory, waiting
in the ready queue, executing on the CPU, and doing I/O.
• Waiting time – amount of time a process has been
waiting in the ready queue
• Response time – amount of time from when a
request was submitted until the first response is
produced, not until the full output appears (relevant
in time-sharing environments)
Gantt Chart
• Illustrates how processes/jobs are scheduled
over time on CPU
    |    A     | B |  C  |
    0          10  12    16   (time)
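The criteria above can be computed directly from the Gantt chart. A minimal sketch, assuming all three jobs arrive at time 0 with bursts 10, 2, and 4 (read off the chart):

```python
# Turnaround and waiting time for the FCFS schedule A, B, C,
# assuming all jobs arrive at time 0 (bursts taken from the chart).
bursts = {"A": 10, "B": 2, "C": 4}

clock = 0
stats = {}
for name, burst in bursts.items():      # served in order A, B, C
    completion = clock + burst
    turnaround = completion - 0         # completion time - arrival time
    waiting = turnaround - burst        # time spent in the ready queue
    stats[name] = (turnaround, waiting)
    clock = completion

print(stats)   # A: (10, 0), B: (12, 10), C: (16, 12)
```

Note that waiting time is just turnaround time minus the CPU burst, since these jobs do no I/O.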
First-Come, First-Served (FCFS) Scheduling
• Jobs are executed on a first-come, first-served basis.
• Easy to understand and implement.
• Poor in performance as average wait time is high.
  Process   Burst Time
  P1        24
  P2        3
  P3        3
• Suppose that the processes arrive in the order: P1, P2, P3
  The Gantt chart for the schedule is:

    |           P1           | P2 | P3 |
    0                        24   27   30

  Average waiting time: (0 + 24 + 27) / 3 = 17
• Suppose instead that the processes arrive in the order: P2, P3, P1

    | P2 | P3 |           P1           |
    0    3    6                        30

  Average waiting time: (6 + 0 + 3) / 3 = 3
• Round Robin (RR) with time quantum 4 for the same processes:

    | P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
    0    4    7    10   14   18   22   26   30
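The two FCFS average waiting times can be reproduced with a short sketch; `fcfs_waiting` is a helper written for this illustration, not a standard API:

```python
def fcfs_waiting(bursts):
    """Average waiting time under FCFS for jobs that all arrive
    at time 0 and are served in the given order."""
    clock, total = 0, 0
    for b in bursts:
        total += clock      # each job waits until all earlier jobs finish
        clock += b
    return total / len(bursts)

print(fcfs_waiting([24, 3, 3]))  # order P1, P2, P3 -> 17.0
print(fcfs_waiting([3, 3, 24]))  # order P2, P3, P1 -> 3.0
```

Putting the long job first makes the short jobs wait behind it, which is why FCFS can perform poorly.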
Multilevel Queue
• Ready queue is partitioned into separate queues, e.g.:
– foreground (interactive)
– background (batch)
• Processes stay permanently in a given queue
• Each queue has its own scheduling algorithm:
– foreground – RR
– background – FCFS
• Scheduling must be done between the queues:
– Fixed priority preemptive scheduling; (i.e., serve all from
foreground then from background). Possibility of
starvation.
– Time slice – each queue gets a certain amount of CPU time
which it can schedule amongst its processes;
• e.g., 80% to foreground in RR
• 20% to background in FCFS
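The time-slice policy between queues can be sketched as follows. This is a simplified illustration with made-up jobs, approximating the 80/20 split by giving the foreground four quanta for every one background quantum:

```python
from collections import deque

# Hypothetical workload: foreground jobs served RR, background FCFS.
foreground = deque([("fg1", 3), ("fg2", 2)])   # (name, remaining time)
background = deque([("bg1", 4)])

QUANTUM = 1
schedule = []
while foreground or background:
    # 4 foreground quanta (round-robin) ...
    for _ in range(4):
        if not foreground:
            break
        name, left = foreground.popleft()
        schedule.append(name)
        if left - QUANTUM > 0:
            foreground.append((name, left - QUANTUM))
    # ... then 1 background quantum (FCFS: only the head job runs)
    if background:
        name, left = background[0]
        schedule.append(name)
        if left - QUANTUM > 0:
            background[0] = (name, left - QUANTUM)
        else:
            background.popleft()

print(schedule)
```

The background job makes progress even while foreground jobs are pending, so neither queue starves.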
Multilevel Feedback Queue
• A process can move between the various queues;
• A process that waits too long in a lower-priority
queue may be moved to a higher-priority queue.
– Aging can be implemented to prevent starvation.
• Multilevel-feedback-queue scheduler defined by
the following parameters:
– number of queues
– scheduling algorithms for each queue
– method used to determine when to upgrade a process
– method used to determine when to demote a process
– method used to determine which queue a process will
enter when that process needs service
Example of Multilevel Feedback Queue
• Three queues:
– Q0 – RR with time quantum 8 milliseconds
– Q1 – RR time quantum 16 milliseconds
– Q2 – FCFS
• Scheduling
– A new job enters queue Q0
• When it gains CPU, job receives 8
milliseconds
• If it does not finish in 8 milliseconds, job is
moved to queue Q1
– At Q1 job is again served and receives 16
additional milliseconds
• If it still does not complete, it is preempted
and moved to queue Q2
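The three-queue example above can be traced with a small simulation. This sketch ignores new arrivals preempting lower queues and uses hypothetical jobs:

```python
from collections import deque

def mlfq(bursts):
    """Trace (job, queue level, time run) for the three-queue example:
    Q0 = RR quantum 8, Q1 = RR quantum 16, Q2 = FCFS."""
    quanta = [8, 16, None]      # None -> run to completion (FCFS)
    queues = [deque(bursts.items()), deque(), deque()]
    trace = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        name, left = queues[level].popleft()
        run = left if quanta[level] is None else min(quanta[level], left)
        trace.append((name, level, run))
        if left - run > 0:                # did not finish: demote one level
            queues[level + 1].append((name, left - run))
    return trace

print(mlfq({"J1": 30, "J2": 5}))
```

J1 (30 ms) uses its 8 ms in Q0, then 16 more in Q1, and finishes its last 6 ms in Q2; J2 (5 ms) completes within its first Q0 quantum and is never demoted.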
Multiple-Processor Scheduling
• CPU scheduling becomes more complex when
multiple CPUs are involved
• Approaches to Multiple-processor scheduling
– Asymmetric multiprocessing, in which one processor is
the master, controlling all activities and running all
kernel code, while the others run only user code.
• This approach is relatively simple, as there is no need to share
critical system data.
– Symmetric multiprocessing, SMP, where each
processor schedules its own jobs, either from a
common ready queue or from separate ready queues
for each processor.
• Currently, most common (Windows, Linux, and Mac OS X)
Processor Affinity
• Processors contain cache memory, which speeds up
repeated accesses to the same memory locations.
• If a process had to switch from one processor to another
each time it received a time slice, the data in the cache (for
that process) would have to be invalidated and re-loaded
from main memory, losing the benefit of the
cache.
• Therefore systems attempt to keep processes on the
same processor, via processor affinity.
– Soft affinity - the system attempts to keep processes on the
same processor but makes no guarantees.
– Hard affinity - a process specifies that it is not to be moved
between processors, e.g., on Linux and some other OSes.
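On Linux, hard affinity can be requested from Python via `os.sched_setaffinity` (a Linux-only call, so the sketch guards for portability):

```python
import os

# Hard affinity (Linux): pin the calling process to a single CPU so the
# scheduler will not migrate it, and its cached data, to another core.
# os.sched_setaffinity is Linux-only, hence the hasattr guard.
if hasattr(os, "sched_setaffinity"):
    cpu = min(os.sched_getaffinity(0))     # pick one CPU we may run on
    os.sched_setaffinity(0, {cpu})         # pid 0 means "this process"
    print("pinned to CPU", cpu)
else:
    print("sched_setaffinity not available on this platform")
```

After the call, the kernel schedules this process only on the chosen CPU, preserving its cache contents across time slices.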
NUMA and CPU Scheduling
• Main memory architecture can also affect
process affinity.
• The CPUs on a chip or board can access the
memory on that board faster than they can
access memory on other boards in the system.
• This is known as NUMA (Non-Uniform Memory
Access)
• If a process has an affinity for a particular CPU,
then it should preferentially be assigned memory
storage in "local" fast access areas.
Multiple-Processor Scheduling – Load Balancing
Queueing Models
• A study of historical performance can often
produce statistical descriptions of certain
important parameters, such as
– the rate at which new processes arrive,
– the ratio of CPU bursts to I/O times,
– the distribution of CPU burst times and I/O burst
times, etc.
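These parameters can be estimated from a historical trace. A minimal sketch with entirely made-up sample data:

```python
# Estimating queueing-model parameters from a (made-up) trace:
# arrival timestamps in seconds, burst lengths in milliseconds.
arrivals = [0.0, 1.5, 2.0, 4.5, 6.0]
cpu_bursts = [4, 2, 6, 3, 5]
io_bursts = [10, 8, 12, 9, 11]

span = arrivals[-1] - arrivals[0]
arrival_rate = (len(arrivals) - 1) / span        # new processes per second
cpu_io_ratio = sum(cpu_bursts) / sum(io_bursts)  # CPU time relative to I/O time
mean_cpu_burst = sum(cpu_bursts) / len(cpu_bursts)

print(arrival_rate, cpu_io_ratio, mean_cpu_burst)
```

Distributions (not just means) of the burst times are usually needed as well; a histogram of `cpu_bursts` would be the natural next step.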