227 122 RTOS - Module 2
MODULE II
PREPARED BY
DIVYA HARIKUMAR
ASST. PROFESSOR
DEPT. OF ECE, SCTCE
Scheduling Criteria
There are several different criteria to consider when trying to select the "best"
scheduling algorithm for a particular situation and environment, including:
CPU utilization - Ideally the CPU would be busy 100% of the time, so
as to waste 0 CPU cycles. On a real system, CPU usage should range from
40% (lightly loaded) to 90% (heavily loaded).
Throughput - Number of processes completed per unit time.
Turnaround time - Time required for a particular process to complete, from
submission time to completion.
Waiting time - How much time processes spend in the ready queue waiting their
turn to get on the CPU.
Response time - The time taken in an interactive program from the issuance of a
command to the commencement of a response to that command.
Arrival Time: Time at which the process arrives in the ready queue.
Completion Time: Time at which process completes its execution.
Burst Time: Time required by a process for CPU execution.
Turn Around Time: Time difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time
Waiting Time (W.T): Time difference between turnaround time and burst time.
Waiting Time = Turn Around Time – Burst Time
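The two formulas above can be checked with a tiny helper; the process values below (arrival at t=2, burst time 4, completion at t=9) are invented for illustration.

```python
def turnaround_time(completion, arrival):
    """Turn Around Time = Completion Time - Arrival Time"""
    return completion - arrival

def waiting_time(tat, burst):
    """Waiting Time = Turn Around Time - Burst Time"""
    return tat - burst

# A process arriving at t=2 with burst time 4 that completes at t=9:
tat = turnaround_time(9, 2)   # 7 units
wt = waiting_time(tat, 4)     # 3 units spent in the ready queue
print(tat, wt)
```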
Process Scheduling
These algorithms are either non-preemptive or preemptive. Under non-preemptive scheduling,
once a process enters the running state it cannot be preempted until it completes its CPU
burst or blocks, whereas under preemptive scheduling the scheduler may preempt a low-priority
running process at any time when a high-priority process enters the ready state.
First Come First Serve (FCFS)
Advantages-
It is simple and easy to understand.
It can be easily implemented using queue data structure.
It does not lead to starvation.
Disadvantages-
It does not consider the priority or burst time of the processes.
It suffers from the convoy effect, i.e. processes with smaller burst time get stuck
waiting behind a process with a higher burst time that arrived before them.
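A minimal FCFS sketch, assuming each process is a (name, arrival, burst) tuple; the workload below is invented to show the convoy effect described above.

```python
def fcfs(processes):
    """Run processes in arrival order; return {name: (completion, turnaround, waiting)}."""
    result = {}
    clock = 0
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        clock = max(clock, arrival) + burst       # CPU may idle until the process arrives
        tat = clock - arrival                     # Turn Around Time = Completion - Arrival
        result[name] = (clock, tat, tat - burst)  # Waiting Time = TAT - Burst
    return result

# Convoy effect: the long job P1 arrives first, so P2 and P3 wait behind it.
print(fcfs([("P1", 0, 10), ("P2", 1, 2), ("P3", 2, 2)]))
```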
Shortest Job First (SJF)
Disadvantages-
It cannot be implemented practically, since the burst time of a process cannot
be known in advance.
It leads to starvation for processes with larger burst time.
Priorities cannot be set for the processes.
Processes with larger burst time have poor response time.
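A non-preemptive SJF sketch, assuming burst times are known in advance (which, as noted above, is impractical on a real system); the workload is invented.

```python
def sjf(processes):
    """processes: list of (name, arrival, burst) tuples; returns completion order."""
    pending = sorted(processes, key=lambda p: p[1])   # earliest arrival first
    clock, order = 0, []
    while pending:
        # Pick from arrived processes; if none has arrived, take the earliest one.
        ready = [p for p in pending if p[1] <= clock] or [pending[0]]
        name, arrival, burst = min(ready, key=lambda p: p[2])  # shortest burst first
        clock = max(clock, arrival) + burst                    # runs to completion
        order.append(name)
        pending.remove((name, arrival, burst))
    return order

# P2 and P3 both arrive while P1 runs; the shorter job (P3) is scheduled next.
print(sjf([("P1", 0, 5), ("P2", 1, 4), ("P3", 2, 1)]))  # ['P1', 'P3', 'P2']
```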
Priority Scheduling
Processes are scheduled according to the priority number assigned to them.
Once a process gets scheduled, it runs till completion.
Generally, the lower the priority number, the higher the priority of the process.
GANTT CHART
From the Gantt chart prepared, we can determine the completion time of every
process. The turnaround time, waiting time and response time can then be determined.
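The non-preemptive priority policy above can be sketched as follows; processes are (name, arrival, burst, priority) tuples with lower number meaning higher priority, and the workload is made up for illustration.

```python
def priority_np(processes):
    """Non-preemptive priority scheduling; returns [(name, completion time), ...]."""
    pending = sorted(processes, key=lambda p: p[1])   # earliest arrival first
    clock, gantt = 0, []
    while pending:
        # Choose among arrived processes; if none, take the earliest arrival.
        ready = [p for p in pending if p[1] <= clock] or [pending[0]]
        proc = min(ready, key=lambda p: p[3])         # lowest number = highest priority
        clock = max(clock, proc[1]) + proc[2]         # once scheduled, runs till completion
        gantt.append((proc[0], clock))
        pending.remove(proc)
    return gantt

# P2 has the highest priority among the processes ready when P1 finishes.
print(priority_np([("P1", 0, 4, 2), ("P2", 1, 3, 1), ("P3", 2, 1, 3)]))
```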
Preemptive Priority Scheduling
At the time of arrival of a process in the ready queue, its priority is compared with
the priority of the other processes present in the ready queue, as well as with the one
being executed by the CPU at that point of time.
The one with the highest priority among all the available processes is given the
CPU next.
The difference between preemptive and non-preemptive priority scheduling is that
in preemptive priority scheduling, the job being executed can be stopped at the
arrival of a higher-priority job.
During the execution of P5, all the processes become available in the ready queue.
At this point, the algorithm starts behaving as non-preemptive priority
scheduling. Hence, once all the processes are available in the ready queue, the OS
simply picks the process with the highest priority and executes it till completion.
In this case, P4 will be scheduled and executed till completion.
Once P4 is completed, the process with the highest priority available in the
ready queue is P2. Hence P2 will be scheduled next.
P2 is given the CPU till completion. Since its remaining burst time is 6 units,
P7 will be scheduled after this.
The only remaining process is P6, with the least priority; the operating system has no
choice but to execute it. It will be executed last.
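The preemption-on-arrival behaviour can be sketched one time unit at a time; the two-process workload below is invented and is not the P2/P4/P5/P6/P7 example above.

```python
def priority_preemptive(processes):
    """processes: {name: (arrival, burst, priority)}; returns per-unit execution trace."""
    remaining = {n: b for n, (a, b, p) in processes.items()}
    clock, trace = 0, []
    while any(remaining.values()):
        ready = [n for n in remaining
                 if remaining[n] > 0 and processes[n][0] <= clock]
        if not ready:
            clock += 1                     # CPU idles until something arrives
            continue
        n = min(ready, key=lambda x: processes[x][2])  # lowest number = highest priority
        remaining[n] -= 1                  # run the chosen process for one time unit
        trace.append(n)
        clock += 1
    return trace

# P2 (priority 1) arrives at t=1 and preempts the running P1 (priority 2).
print(priority_preemptive({"P1": (0, 3, 2), "P2": (1, 2, 1)}))
```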
Priority scheduling in preemptive and non-preemptive mode behaves exactly the same under the
following conditions-
The arrival time of all the processes is the same.
All the processes become available simultaneously.
The waiting time for the process having the highest priority will always be zero in preemptive
mode.
The waiting time for the process having the highest priority may not be zero in non-preemptive
mode.
Round Robin Scheduling
This scheduler allocates the CPU to each process in the ready queue for a
time interval of up to one time quantum in FIFO (circular) fashion.
If the process is still running at the end of the quantum, it is preempted
from the CPU. A context switch is executed, and the process is put
at the tail of the ready queue.
Then the scheduler selects the next process in the ready queue.
The ready queue is treated as a circular queue.
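The quantum-and-requeue behaviour above can be sketched with a circular ready queue; the sample workload (all processes arriving at t=0) and the quantum of 2 are invented for illustration.

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, burst), all arriving at t=0; returns Gantt chart
    as [(name, time at which this slice ends), ...]."""
    queue = deque(processes)               # FIFO ready queue
    clock, gantt = 0, []
    while queue:
        name, burst = queue.popleft()
        run = min(quantum, burst)          # run for up to one time quantum
        clock += run
        gantt.append((name, clock))
        if burst > run:                    # preempted: back to the tail of the queue
            queue.append((name, burst - run))
    return gantt

print(round_robin([("P1", 5), ("P2", 3), ("P3", 1)], quantum=2))
```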
Disadvantages-
It leads to starvation for processes with larger burst time, as they have to repeat the cycle many
times. The higher the time quantum, the higher the response time in the system.
Its performance heavily depends on the time quantum.
Priorities cannot be set for the processes.
The lower the time quantum, the higher the context switching overhead in the system.
Deciding a perfect time quantum is a really difficult task.
With decreasing value of time quantum-
Number of context switches increases
Response time decreases
Chances of starvation decrease
Thus, a smaller value of time quantum is better in terms of response time.
If the time quantum is very small, the RR approach is called processor sharing and appears to
the users as though each of n processes has its own processor running at 1/n the speed of the
real processor.
If the time slice is chosen to be very small (close to the context switching period),
then context switching overheads will be very high, thus adversely affecting system
throughput.
With increasing value of time quantum-
Number of context switches decreases
Response time increases
Chances of starvation increase
Thus, a higher value of time quantum is better in terms of the number of context switches.
With increasing value of time quantum, Round Robin scheduling tends to become
FCFS scheduling.
When the time quantum tends to infinity, Round Robin scheduling becomes FCFS
scheduling.
The performance of Round Robin scheduling heavily depends on the value of the time
quantum.
The time slice has to be carefully chosen. It should be small enough to give good
response to interactive users, and at the same time large enough to keep the
context-switching overhead low.
If the CPU scheduling policy is Round Robin with time quantum = 2 units, calculate the average
waiting time and average turnaround time.
Multilevel Queue Scheduling
A multi-level queue scheduling algorithm partitions the ready queue into several
separate queues.
The processes are permanently assigned to one queue, generally based on some
property of the process, such as memory size, process priority, or process type.
There must be scheduling between the queues, which is commonly implemented
as a fixed priority preemptive scheduling.
Each queue has absolute priority over lower-priority queues. No process in the batch
queue, for example, could run unless the queues for system processes, interactive
processes, and interactive editing processes were all empty.
If an interactive editing process entered the ready queue while a batch process was
running, the batch process would be preempted.
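The fixed-priority selection between queues described above can be sketched as follows; the queue names and their contents are illustrative, with queue 0 (system) having absolute priority over queue 1 (interactive) and queue 2 (batch).

```python
def pick_next(queues):
    """queues: list of FIFO lists, highest-priority queue first; return next process."""
    for q in queues:
        if q:                       # a lower queue runs only if all higher queues are empty
            return q.pop(0)
    return None                     # no ready process anywhere

# No system process is ready, so the interactive editing job runs before the batch job.
queues = [[], ["edit1"], ["batch1"]]
print(pick_next(queues))
```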
Multilevel Feedback Queue Scheduling
Thread: A thread is a segment of a process, which means a process can have multiple threads,
and these multiple threads are contained within the process. A thread has three states: Running,
Ready, and Blocked.
Opening a new browser (say Chrome, etc.) is an example of creating a process; at this point, a
new process starts to execute. On the contrary, opening multiple tabs in the browser is an
example of creating threads.
Many software packages that run on modern desktop PCs are multithreaded.
An application typically is implemented as a separate process with several
threads of control.
Example:
A web browser might have one thread display images or text while another thread retrieves
data from the network.
A word processor may have a thread for displaying graphics, another thread
for responding to keystrokes from the user, and a third thread for performing spelling and
grammar checking in the background.
Need for Threads:
It takes far less time to create a new thread in an existing process than to create a new
process.
Threads can share common data, so they do not need to use inter-process communication.
Context switching is faster when working with threads than with processes.
Threads
A thread is a basic unit of CPU utilization.
It comprises a thread ID, a program counter, a register set, and a stack.
Thread shares with other threads belonging to the same process its code section, data
section, and other operating-system resources, such as open files and signals.
A traditional (or heavyweight) process has a single thread of control.
If a process has multiple threads of control, it can perform more than one task at a time.
Threads execute inside a process, and there can be more than one thread inside a
process. Each thread of the same process makes use of a separate program counter, a
stack of activation records, and control blocks. A thread is often referred to as a
lightweight process.
A thread is a mechanism to provide multiple controls of execution within a process.
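A small sketch of multiple threads of control inside one process: the threads share the process's data (the `counts` dict) while each runs with its own program counter and stack. The worker function and values are invented for illustration.

```python
import threading

counts = {}                          # shared data section: visible to every thread

def worker(name, n):
    # Each thread executes this independently, on its own stack.
    counts[name] = sum(range(n))

threads = [threading.Thread(target=worker, args=(f"t{i}", 5)) for i in range(3)]
for t in threads:
    t.start()                        # all three threads run within the same process
for t in threads:
    t.join()                         # wait for every thread of the process to finish
print(counts)
```

Because the threads write to a structure in the shared address space, no inter-process communication is needed to collect their results.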
Single and Multithreaded Processes
Resource sharing: Resources like code, data, and files can be shared among all threads
within a process. The benefit of sharing code and data is that it allows an application to
have several different threads of activity within the same address space. Note: stack and
registers can’t be shared among the threads. Each thread has its own stack and registers.
Benefits of thread over process
Economy: Allocating memory and resources for process creation is costly.
Because threads share the resources of the process to which they belong, it is more
economical to create and context-switch threads. In general, it is much more time
consuming to create and manage processes than threads. In Solaris, for example,
creating a process is about thirty times slower than creating a thread, and context
switching is about five times slower.
Faster context switch: Context switch time between threads is lower compared to
process context switch. Process context switching requires more overhead from
the CPU.
Communication: Communication between multiple threads is easier, as the
threads share a common address space, while between processes we have to follow
some specific inter-process communication technique.
Enhanced throughput of the system: If a process is divided into multiple threads,
and each thread function is considered as one job, then the number of jobs
completed per unit of time is increased, thus increasing the throughput of the
system.
Threads are classified as:
user threads and
kernel threads
User-level thread
Thread management is done by a user-level thread library rather than via system calls.
Therefore, thread switching does not need to call the operating system or cause an
interrupt to the kernel.
The operating system does not recognize user-level threads. User threads can be easily
implemented, and they are implemented by the user. If a thread performs a blocking
operation at user level, the whole process is blocked. The kernel knows nothing about
user-level threads and manages the process as if it were single-threaded.
User-level threads are small and fast as compared to kernel-level threads.
These threads are represented by registers, the program counter (PC), a stack, and a
small thread control block. Furthermore, there is no kernel interaction in user-level
thread synchronization.
This is also known as many-to-one mapping, as the OS assigns every thread in a
multithreaded program to a single execution context; every multithreaded process is
treated as a single execution unit by the OS.
Advantages of User-level threads
User-level threads can be more easily implemented than kernel threads.
User-level threads can be used on operating systems that do not support
threads at the kernel level.
They are faster and more efficient.
Context switch time is shorter than for kernel-level threads.
They do not require modification of the operating system.
They are more portable and may be run on any OS.
Simple representation: User-level thread representation is very simple.
The registers, PC, stack, and mini thread control blocks are stored in the
address space of the user-level process.
Simple management: It is simple to create, switch, and synchronize threads
without the intervention of the kernel.
Disadvantages of User-level threads
User-level threads lack coordination between the thread and the kernel. Therefore,
the process as a whole gets one time slice, irrespective of whether the process has one
thread or 1000 threads within it. It is up to each thread to relinquish control
to other threads.
User-level threads require non-blocking system calls, i.e., a multithreaded kernel.
Otherwise, the entire process will be blocked in the kernel, even if there are runnable
threads left in the process. For example, if one thread causes a page fault, the whole
process gets blocked.
User-level threads do not support system-wide scheduling priorities.
They are not appropriate for a multiprocessor system.
Multithreading models:
Many-to-One
One-to-One
Many-to-Many
Many-to-One
Many user-level threads are mapped to a single kernel thread.
Examples:
Solaris Green Threads
GNU Portable Threads
One-to-One
Each user-level thread maps to a kernel thread.
It provides more concurrency than the many-to-one model by
allowing another thread to run when a thread makes a blocking
system call.
It also allows multiple threads to run in parallel on multiprocessors.
The only drawback to this model is that creating a user thread
requires creating the corresponding kernel thread.
Because the overhead of creating kernel threads can burden the
performance of an application, most implementations of this model
restrict the number of threads supported by the system.
Examples
Windows NT/XP/2000
Linux
Solaris 9 and later
Many-to-Many Model
Many user-level threads are multiplexed onto a smaller or equal number of kernel threads.
Thread Control Block (TCB)
The components of a TCB are defined below:
Thread ID: It is a unique identifier assigned by the Operating System to the
thread when it is being created.
Thread states: These are the states of the thread, which change as the thread
progresses through the system.
CPU information: It includes everything that the OS needs to know about the thread,
such as how far it has progressed and what data is being used.
Thread Priority: It indicates the weight (or priority) of the thread over other
threads which helps the thread scheduler to determine which thread should be
selected next from the READY queue.
A pointer which points to the process which triggered the creation of this
thread.
A pointer which points to the thread(s) created by this thread.
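The components listed above can be sketched as a toy data structure; the field names below are illustrative, not an actual operating-system layout.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TCB:
    thread_id: int                                 # unique ID assigned at creation
    state: str = "READY"                           # Running / Ready / Blocked
    program_counter: int = 0                       # CPU information: progress of the thread
    registers: dict = field(default_factory=dict)  # CPU information: data in use
    priority: int = 0                              # weight used by the thread scheduler
    parent_process: Optional[object] = None        # pointer to the creating process
    children: List["TCB"] = field(default_factory=list)  # threads created by this thread

t = TCB(thread_id=1, priority=5)
print(t.state, t.priority)
```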
Multiple Processor Scheduling in Operating Systems
Multiple processor scheduling, or multiprocessor scheduling, focuses on
designing the system's scheduling function for a system with more than one
processor.
Multiple CPUs share the load in multiprocessor scheduling so that various
processes run simultaneously.
Multiprocessor scheduling is complex as compared to single processor
scheduling.
In multiprocessor scheduling, there are many identical processors, and any
process can run on any of them at any time.
The multiple CPUs in the system are in close communication and share a
common bus, memory, and other peripheral devices, so the system is tightly
coupled.
These systems are used when we want to process a bulk amount of data; they
are mainly used in satellite systems, weather forecasting, etc.
References
https://www.javatpoint.com/multiple-processors-scheduling-in-operating-system
https://www.javatpoint.com/user-level-vs-kernel-level-threads-in-operating-system
Textbook: Abraham Silberschatz, 'Operating System Principles', Wiley India, 7th edition, 2011