The document discusses process scheduling in operating systems. It describes how the scheduler determines which process moves from the ready state to the running state on the CPU. The main goals of scheduling are to keep the CPU busy and provide minimum response times. There are non-preemptive and preemptive schedulers. Processes exist in different queues depending on their state. The three main types of schedulers are long-term, short-term, and medium-term schedulers. Common scheduling algorithms include first-come, first-served; shortest job first; priority; round robin; and multilevel queue scheduling.
Process Scheduling
The act of determining which process in the ready state should be moved to the running state is
known as Process Scheduling.
The prime aim of the process scheduling system is to keep the CPU busy at all times and to deliver
minimum response time for all programs. To achieve this, the scheduler must apply appropriate
rules for swapping processes in and out of the CPU.
Schedulers fall into one of two general categories :
Non pre-emptive scheduling. When the currently executing process gives up the CPU
voluntarily.
Pre-emptive scheduling. When the operating system decides to favour another process, pre-
empting the currently executing process.
Scheduling Queues
All processes, when they enter the system, are stored in the job queue.
Processes in the Ready state are placed in the ready queue.
Processes waiting for a device to become available are placed in device queues. Each I/O
device has its own device queue.
Types of Schedulers
There are three types of schedulers available :
1. Long Term Scheduler :
The long term scheduler, also known as the job scheduler, runs less frequently. It decides which
programs are admitted to the system and placed in the job queue. From the job queue, it selects
processes and loads them into memory for execution. Its primary aim is to maintain a good
degree of multiprogramming. An optimal degree of multiprogramming means that the average rate of
process creation is equal to the average departure rate of processes from the execution
memory.
2. Short Term Scheduler :
This is also known as the CPU scheduler and runs very frequently. Its primary aim is to
enhance CPU performance and increase the process execution rate.
3. Medium Term Scheduler :
Under heavy load, this scheduler picks out (swaps out) large processes from the ready queue for
some time, to allow smaller processes to execute, thereby reducing the number of processes in
the ready queue.
Operations on Process
Process Creation
Through appropriate system calls, such as fork or spawn, processes may create other processes.
The process which creates another process is termed the parent, while the created sub-process
is termed its child.
Each process is given an integer identifier, termed its process identifier, or PID. The parent's PID
(PPID) is also stored for each process.
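A minimal sketch, assuming a POSIX system with the standard fork(), getpid() and getppid()
calls: the parent creates a child, and each side prints its own PID and its parent's PID.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();              /* create a child process */

        if (pid < 0) {                   /* fork failed */
            perror("fork");
            exit(1);
        } else if (pid == 0) {           /* child: new PID, PPID is the parent's PID */
            printf("child:  PID=%d PPID=%d\n", (int)getpid(), (int)getppid());
        } else {                         /* parent: fork() returned the child's PID */
            printf("parent: PID=%d child PID=%d\n", (int)getpid(), (int)pid);
            wait(NULL);                  /* reap the child so it does not linger as a zombie */
        }
        return 0;
    }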
On a typical UNIX system the process scheduler is termed sched, and is given PID 0. The first
thing it does at system start-up time is to launch init, which is given PID 1. init then
launches all the system daemons and user logins, and becomes the ultimate parent of all other
processes.
A child process may share some resources with its parent or receive its own, depending on the system
implementation. Restricting child processes to a subset of the resources originally allocated to the
parent prevents runaway children from consuming all of a certain system resource.
There are two options for the parent process after creating the child :
Wait for the child process to terminate before proceeding. The parent process makes
a wait() system call, for either a specific child process or for any child process, which
causes the parent process to block until the wait() returns. UNIX shells normally wait for their
children to complete before issuing a new prompt (a minimal sketch of this option follows the list).
Run concurrently with the child, continuing to process without waiting. When a UNIX shell runs a
process as a background task, this is the operation seen. It is also possible for the parent to run
for a while, and then wait for the child later, which might occur in a sort of parallel processing
operation.
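A minimal sketch of the first option, assuming a POSIX system: the parent forks a child and then
blocks in waitpid() until that specific child terminates. The concurrent option would simply defer
or skip the waitpid() call.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t child = fork();
        if (child == 0) {
            sleep(1);                    /* child: pretend to do some work */
            _exit(0);                    /* then terminate */
        }
        /* Option 1: block until this specific child terminates.        */
        /* Option 2 (run concurrently) would defer or skip this call.   */
        int status;
        waitpid(child, &status, 0);
        printf("child %d finished\n", (int)child);
        return 0;
    }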
Process Termination
By making the exit() system call, typically returning an int, processes may request their own
termination. This int is passed along to the parent if it is doing a wait(), and is typically zero on
successful completion and some non-zero code in the event of a problem.
Processes may also be terminated by the system for a variety of reasons, including :
The inability of the system to deliver the necessary system resources.
In response to a KILL command or other unhandled process interrupts.
A parent may kill its children if the task assigned to them is no longer needed, i.e. if the need for
having the child has ended.
If the parent exits, the system may or may not allow the child to continue without a parent (in
UNIX systems, orphaned processes are generally inherited by init, which waits for them and reaps
them when they terminate).
When a process ends, all of its system resources are freed up, open files flushed and closed, etc.
The process termination status and execution times are returned to the parent if the parent is waiting
for the child to terminate, or eventually returned to init if the process already became an orphan.
Processes which have finished executing but whose termination status has not yet been collected,
because their parent is not waiting for them, are termed zombies. If the parent itself exits, they are
inherited by init as orphans, which waits on them and removes them.
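A minimal sketch of the zombie state, assuming a POSIX system: the child terminates immediately,
but the parent delays its wait() call; during that delay the child remains a zombie until the parent
collects its exit status (which also demonstrates the int passed back through exit()).

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t child = fork();
        if (child == 0)
            _exit(7);                     /* child terminates immediately with status 7 */

        sleep(10);                        /* parent has not called wait() yet: during these
                                             10 seconds the child is a zombie (state Z in ps) */
        int status;
        waitpid(child, &status, 0);       /* collect the status; the zombie entry is removed */
        if (WIFEXITED(status))
            printf("child exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }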
CPU Scheduling
CPU scheduling is the process of allowing one process to use the CPU while the execution of
another process is on hold (in the waiting state) due to the unavailability of some resource such as
I/O, thereby making full use of the CPU. The aim of CPU scheduling is to make the system efficient,
fast and fair.
Scheduling Criteria
There are many different criteria to consider when looking for the "best" scheduling algorithm :
CPU utilization
To make the best use of the CPU and not waste any CPU cycles, the CPU should be working most
of the time (ideally 100% of the time). In a real system, CPU utilization should range from
about 40% (lightly loaded) to 90% (heavily loaded).
Throughput
It is the total number of processes completed per unit time, or in other words the total amount of
work done in a unit of time. This may range from 10 per second to 1 per hour, depending on the
specific processes.
Turnaround time
It is the amount of time taken to execute a particular process, i.e. the interval from the time of
submission of the process to the time of its completion (wall clock time).
Waiting time
It is the sum of the periods a process spends waiting in the ready queue to acquire control of
the CPU.
Load average
It is the average number of processes residing in the ready queue, waiting for their turn to get
the CPU.
Response time
It is the amount of time from when a request was submitted until the first response is produced.
Remember, it is the time until the first response, not the completion of process execution (the final
response).
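For example, if a process is submitted at time 0, first gets the CPU at time 2, and completes at
time 10 after spending a total of 4 time units in the ready queue, then its response time is 2, its
waiting time is 4 and its turnaround time is 10.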
In general, CPU utilization and throughput are maximized, while the other factors are minimized, for
proper optimization.
Scheduling Algorithms
We'll discuss five major scheduling algorithms here :
1. First Come First Serve (FCFS) Scheduling
2. Shortest Job First (SJF) Scheduling
3. Priority Scheduling
4. Round Robin (RR) Scheduling
5. Multilevel Queue Scheduling
First Come First Serve (FCFS) Scheduling
Jobs are executed on a first come, first served basis.
Easy to understand and implement.
Poor in performance, as the average waiting time is high (see the sketch below).
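A minimal sketch of the FCFS calculation, using illustrative burst times that are not from the
text: each process waits for the combined burst time of all processes ahead of it in the queue.

    #include <stdio.h>

    int main(void) {
        /* illustrative burst times for processes P1..P4 arriving at time 0 in this order */
        int burst[] = {24, 3, 3, 6};
        int n = sizeof(burst) / sizeof(burst[0]);

        int wait = 0, total_wait = 0, total_turnaround = 0;
        for (int i = 0; i < n; i++) {
            total_wait += wait;                  /* waiting time = sum of earlier bursts */
            total_turnaround += wait + burst[i]; /* turnaround = waiting + own burst */
            wait += burst[i];
        }
        printf("average waiting time    = %.2f\n", (double)total_wait / n);
        printf("average turnaround time = %.2f\n", (double)total_turnaround / n);
        return 0;
    }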
Shortest Job First (SJF) Scheduling
Best approach to minimize average waiting time.
Requires that the time each process will take (its CPU burst) is known in advance.
Impossible to implement exactly in practice, since burst times can only be estimated.
In Preemptive Shortest Job First (Shortest Remaining Time First) scheduling, jobs are put into the
ready queue as they arrive, and when a process with a shorter burst time arrives, the currently
executing process is preempted. A non-preemptive sketch follows below.
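A minimal sketch of non-preemptive SJF, under the simplifying assumptions that all jobs arrive at
time 0 and their burst times are known (the values are illustrative): the jobs are sorted by burst
time and the waiting times are then computed as in FCFS.

    #include <stdio.h>
    #include <stdlib.h>

    static int by_burst(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;   /* ascending burst time */
    }

    int main(void) {
        /* illustrative burst times; all processes assumed to arrive at time 0 */
        int burst[] = {24, 3, 3, 6};
        int n = sizeof(burst) / sizeof(burst[0]);

        /* non-preemptive SJF: always run the shortest waiting job next */
        qsort(burst, n, sizeof(int), by_burst);

        int wait = 0, total_wait = 0;
        for (int i = 0; i < n; i++) {
            total_wait += wait;
            wait += burst[i];
        }
        printf("average waiting time = %.2f\n", (double)total_wait / n);
        return 0;
    }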
Priority Scheduling
A priority is assigned to each process.
The process with the highest priority is executed first, and so on.
Processes with the same priority are executed in FCFS order.
Priority can be decided based on memory requirements, time requirements or any other
resource requirement.
Round Robin (RR) Scheduling
A fixed time, called the quantum, is allotted to each process for execution.
Once a process has executed for the given time period, it is preempted and another process
executes for its time period.
Context switching is used to save the states of preempted processes. A simple simulation sketch
follows below.
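A minimal round robin sketch with an illustrative quantum of 4 time units and made-up burst
times: each pass gives every unfinished process at most one quantum, and the loop repeats until
every remaining burst time reaches zero.

    #include <stdio.h>

    int main(void) {
        int burst[] = {10, 5, 8};        /* illustrative remaining burst times for P1..P3 */
        int n = sizeof(burst) / sizeof(burst[0]);
        int quantum = 4;                 /* fixed time slice per turn */
        int time = 0, remaining = n;

        while (remaining > 0) {
            for (int i = 0; i < n; i++) {
                if (burst[i] == 0)
                    continue;            /* process already finished */
                int slice = burst[i] < quantum ? burst[i] : quantum;
                time += slice;           /* process runs for one quantum (or less) */
                burst[i] -= slice;
                if (burst[i] == 0) {
                    printf("P%d completes at time %d\n", i + 1, time);
                    remaining--;
                }
                /* here the process would be preempted and its state saved (context switch) */
            }
        }
        return 0;
    }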
Multilevel Queue Scheduling
Multiple queues are maintained for processes.
Each queue can have its own scheduling algorithm.
Priorities are assigned to each queue.
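For example, a system might keep interactive (foreground) processes in one queue scheduled with
round robin and batch (background) processes in another queue scheduled first come, first served,
with the foreground queue given priority over the background queue.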