5. CPU Scheduling
In a single-processor system, only one process can run at a time. Others must wait until the CPU
is free and can be rescheduled. The objective of multiprogramming is to have some process
running at all times, to maximize CPU utilization. The idea is relatively simple. A process is
executed until it must wait, typically for the completion of some I/O request. In a simple
computer system, the CPU then just sits idle. All this waiting time is wasted; no useful work is
accomplished. With multiprogramming, we try to use this time productively. Several processes
are kept in memory at one time. When one process has to wait, the operating system takes the
CPU away from that process and gives the CPU to another process. This pattern continues.
Every time one process has to wait, another process can take over use of the CPU. Scheduling of
this kind is a fundamental operating-system function. Almost all computer resources are
scheduled before use. The CPU is, of course, one of the primary computer resources. Thus, its
scheduling is central to operating-system design.
The success of CPU scheduling depends on an observed property of processes: process execution
consists of a cycle of CPU execution and I/O wait. Processes alternate between these two states.
Process execution begins with a CPU burst. That is followed by an I/O burst, which is followed
by another CPU burst, then another I/O burst, and so on. Eventually, the final CPU burst ends
with a system request to terminate execution (Figure 6.1). The durations of CPU bursts have
been measured extensively. Although they vary greatly from process to process and from
computer to computer, they tend to have a frequency curve similar to that shown in Figure 6.2.
The curve is generally characterized as exponential or hyperexponential, with a large number of
short CPU bursts and a small number of long CPU bursts.
An I/O-bound program typically has many short CPU bursts. A CPU-bound program might have
a few long CPU bursts. This distribution can be important in the selection of an appropriate
CPU-scheduling algorithm.
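As a rough illustration of this distribution, the following sketch (not from the text; the mean burst length of 8 ms is a hypothetical choice) samples CPU-burst durations from an exponential distribution and tallies how many are short versus long:

```python
import random

random.seed(42)

# Draw 10,000 hypothetical CPU-burst durations from an exponential
# distribution with a mean of 8 ms.
bursts = [random.expovariate(1 / 8.0) for _ in range(10_000)]

# Fraction shorter than the mean: theoretically 1 - e^-1, about 63%.
short = sum(1 for b in bursts if b < 8.0) / len(bursts)
# Fraction at least 3x the mean: theoretically e^-3, about 5%.
long_ = sum(1 for b in bursts if b >= 24.0) / len(bursts)

print(f"short bursts: {short:.0%}")
print(f"long bursts:  {long_:.0%}")
```

The sample reproduces the shape described above: many short bursts, few long ones.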
2. CPU Scheduler
Whenever the CPU becomes idle, the operating system must select one of the processes in the
ready queue to be executed. The selection process is carried out by the short-term scheduler, or
CPU scheduler. The scheduler selects a process from the processes in memory that are ready to
execute and allocates the CPU to that process.
Note that the ready queue is not necessarily a first-in, first-out (FIFO) queue. As we shall see
when we consider the various scheduling algorithms, a ready queue can be implemented as a
FIFO queue, a priority queue, a tree, or simply an unordered linked list. Conceptually, however,
all the processes in the ready queue are lined up waiting for a chance to run on the CPU. The
records in the queues are generally process control blocks (PCBs) of the processes.
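To make the point concrete, here is a small sketch (the PCBs and priority values are hypothetical) contrasting two of the ready-queue implementations mentioned above, a FIFO queue and a priority queue:

```python
from collections import deque
import heapq

# Hypothetical PCBs as (pid, priority) pairs; a lower number means
# a higher priority.
pcbs = [(1, 3), (2, 1), (3, 2)]

# FIFO ready queue: processes are dispatched in arrival order.
fifo = deque(pcbs)
fifo_order = [fifo.popleft()[0] for _ in range(len(pcbs))]

# Priority ready queue: a binary heap orders entries by priority.
heap = [(prio, pid) for pid, prio in pcbs]
heapq.heapify(heap)
prio_order = [heapq.heappop(heap)[1] for _ in range(len(pcbs))]

print(fifo_order)  # [1, 2, 3] -- arrival order
print(prio_order)  # [2, 3, 1] -- priority order
```

Either structure holds the same PCBs; only the order in which the scheduler removes them differs.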
3. Preemptive Scheduling
CPU-scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for example, as the result
of an I/O request or an invocation of wait() for the termination of a child process)
2. When a process switches from the running state to the ready state (for example, when an
interrupt occurs)
3. When a process switches from the waiting state to the ready state (for example, at completion
of I/O)
4. When a process terminates
For situations 1 and 4, there is no choice in terms of scheduling. A new process (if one exists in
the ready queue) must be selected for execution. There is a choice, however, for situations 2 and
3.
When scheduling takes place only under circumstances 1 and 4, we say that the scheduling
scheme is nonpreemptive or cooperative. Otherwise, it is preemptive. Under nonpreemptive
scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it
releases the CPU either by terminating or by switching to the waiting state. This scheduling
method was used by Microsoft Windows 3.x. Windows 95 introduced preemptive scheduling,
and all subsequent versions of Windows operating systems have used preemptive scheduling.
The Mac OS X operating system for the Macintosh also uses preemptive scheduling; previous
versions of the Macintosh operating system relied on cooperative scheduling. Cooperative
scheduling is the only method that can be used on certain hardware platforms, because it does
not require the special hardware (for example, a timer) needed for preemptive scheduling.
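The practical difference between the two schemes can be seen in a toy simulation (a sketch under assumed workloads, not from the text): nonpreemptive first-come, first-served scheduling versus preemptive shortest-remaining-time-first, which reschedules whenever a process enters the ready state.

```python
# Hypothetical workload: (arrival_time, burst_length), sorted by arrival.
jobs = [(0, 8), (1, 4), (2, 2)]

def fcfs_avg_wait(jobs):
    """Nonpreemptive: each process keeps the CPU for its whole burst."""
    t, waits = 0, []
    for arrival, burst in jobs:
        t = max(t, arrival)
        waits.append(t - arrival)
        t += burst
    return sum(waits) / len(waits)

def srtf_avg_wait(jobs):
    """Preemptive: each tick, run the job with the least remaining time."""
    remaining = {i: b for i, (a, b) in enumerate(jobs)}
    finish, t = {}, 0
    while remaining:
        ready = [i for i, (a, _) in enumerate(jobs)
                 if a <= t and i in remaining]
        if not ready:
            t += 1
            continue
        i = min(ready, key=lambda j: remaining[j])  # may preempt
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            del remaining[i]
            finish[i] = t
    waits = [finish[i] - a - b for i, (a, b) in enumerate(jobs)]
    return sum(waits) / len(waits)

print(fcfs_avg_wait(jobs))  # about 5.67
print(srtf_avg_wait(jobs))  # about 2.67 -- preemption helps short jobs
```

Here preemption lets the short bursts finish first, reducing the average waiting time, at the cost of the extra context switches and shared-data hazards described next.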
Unfortunately, preemptive scheduling can result in race conditions when data are shared among
several processes. Consider the case of two processes that share data. While one process is
updating the data, it is preempted so that the second process can run. The second process then
tries to read the data, which are in an inconsistent state.
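The standard remedy is to make the shared update atomic with a mutual-exclusion lock. The sketch below (thread counts and iteration counts are arbitrary) shows two threads incrementing a shared counter; each increment is a read-modify-write sequence that, unguarded, could be preempted midway and lose updates:

```python
import threading

counter = 0
lock = threading.Lock()

def deposit(n):
    """Increment the shared counter n times, one update at a time."""
    global counter
    for _ in range(n):
        with lock:          # without this lock, increments may be lost
            counter += 1    # read-modify-write on shared data

threads = [threading.Thread(target=deposit, args=(100_000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000: the lock keeps every update
```

With the lock, each read-modify-write completes before another thread can observe the data, so no process ever sees the counter in an inconsistent state.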
Preemption also affects the design of the operating-system kernel. During the processing of a
system call, the kernel may be busy with an activity on behalf of a process. Such activities may
involve changing important kernel data (for instance, I/O queues). What happens if the process is
preempted in the middle of these changes and the kernel (or the device driver) needs to read or
modify the same structure? Chaos ensues. Certain operating systems, including most versions of
UNIX, deal with this problem by waiting either for a system call to complete or for an I/O block
to take place before doing a context switch. This scheme ensures that the kernel structure is
simple, since the kernel will not preempt a process while the kernel data structures are in an
inconsistent state. Unfortunately, this kernel-execution model is a poor one for supporting real-
time computing where tasks must complete execution within a given time frame
Because interrupts can, by definition, occur at any time, and because they cannot always be
ignored by the kernel, the sections of code affected by interrupts must be guarded from
simultaneous use. The operating system needs to accept interrupts at almost all times. Otherwise,
input might be lost or output overwritten. So that these sections of code are not accessed
concurrently by several processes, they disable interrupts at entry and reenable interrupts at exit.
It is important to note that sections of code that disable interrupts do not occur very often and
typically contain few instructions.
4. Dispatcher
Another component involved in the CPU-scheduling function is the dispatcher. The dispatcher is
the module that gives control of the CPU to the process selected by the short-term scheduler.
This function involves the following:
1. Switching context
2. Switching to user mode
3. Jumping to the proper location in the user program to restart that program
The dispatcher should be as fast as possible, since it is invoked during every process switch. The
time it takes for the dispatcher to stop one process and start another running is known as the
dispatch latency.
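The dispatcher's job can be sketched with a toy round-robin loop (an analogy, not a real implementation): each "process" is a Python generator, so "switching context" amounts to suspending one generator and resuming another, and the process names and quantum of one step are hypothetical.

```python
from collections import deque

def process(name, steps):
    """A toy process: yields once per unit of CPU work it performs."""
    for i in range(steps):
        yield f"{name} ran step {i}"

# Ready queue holding two toy processes.
ready = deque([process("P1", 2), process("P2", 2)])
trace = []

while ready:
    proc = ready.popleft()        # short-term scheduler selects a process
    try:
        trace.append(next(proc))  # dispatcher gives it the CPU for one step
        ready.append(proc)        # quantum expires: back to the ready queue
    except StopIteration:
        pass                      # process terminated; do not requeue

print(trace)
```

In a real kernel, the work hidden inside that one `next()` call, saving registers, switching address spaces, and jumping back into the user program, is exactly the dispatch latency defined above, which is why dispatchers are kept as lean as possible.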