INTRODUCTION TO EMBEDDED SYSTEMS | BETCK105J
Module 5: Real-time Operating System (RTOS) based Embedded System

Operating System
An operating system (OS) is software, consisting of programs and data, that runs on computers, manages the computer hardware, and provides common services for the efficient execution of various application software.
[Figure: Operating System architecture – User Applications access Kernel Services (Memory Management, Process Management, Time Management, etc.) through the Application Programming Interface (API)]
The Kernel
The kernel is the core of the operating system and is responsible for managing the system resources
and the communication among the hardware and other system services.
Kernel acts as the abstraction layer between system resources and user applications.
Kernel contains a set of system libraries and services.
Process management
Process Management includes setting up the memory space for the process, loading the process’s
code into the memory space, allocating system resources, scheduling and managing the execution
of the process, setting up and managing the Process Control Block (PCB), Inter Process
Communication and synchronisation, process termination/ deletion, etc.
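As an illustration of these services being invoked from user space, here is a minimal sketch for a POSIX-style OS: fork() asks the kernel to set up a new process (PCB, memory space), execl() loads new code into that space, and waitpid() lets the kernel handle termination. The program path "/bin/ls" is only an illustrative choice, not something taken from the notes above.

/* Minimal sketch (POSIX-style system): a user program asks the kernel's
 * process-management service to create and manage a new process.        */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();              /* kernel sets up the PCB and memory space */
    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {                  /* child: load new code into its memory space */
        execl("/bin/ls", "ls", "-l", (char *)NULL);
        perror("execl");             /* reached only if execl fails */
        _exit(EXIT_FAILURE);
    }
    int status;
    waitpid(pid, &status, 0);        /* parent waits; kernel handles termination/deletion */
    printf("child %d finished with status %d\n", (int)pid, WEXITSTATUS(status));
    return 0;
}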
I/O System (Device) Management
The kernel is responsible for routing the I/O requests coming from different user applications to the appropriate I/O devices of the system.
In a well-structured OS, direct accessing of I/O devices is not allowed; access to them is provided through a set of Application Programming Interfaces (APIs) exposed by the kernel (a small code sketch follows the device manager list below).
The kernel maintains a list of all the I/O devices of the system.
This list may be available in advance, at the time of building the kernel. Some kernels dynamically update the list of available devices as and when a new device is installed (e.g. the Windows NT kernel keeps the list updated when a new plug 'n' play USB device is attached to the system).
Device Manager is responsible for
o Loading and unloading of device drivers
o Exchanging information and the system specific control signals to and from the device
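A minimal sketch (assuming a Unix-like kernel) of how an application reaches a device only through kernel APIs; the device node "/dev/ttyS0" is an assumed example, and the routing to the driver happens behind the open/write/close calls.

/* Sketch: applications never touch device hardware directly; they go through
 * kernel-exposed APIs and the kernel routes the request to the driver.       */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    int fd = open("/dev/ttyS0", O_RDWR);    /* request routed to the serial driver */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    const char msg[] = "hello\r\n";
    write(fd, msg, sizeof msg - 1);          /* kernel forwards the data to the device */
    close(fd);                               /* driver releases the device */
    return 0;
}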
Secondary Storage Management
Secondary memory is used as a backup medium for programs and data, since the main memory is volatile.
In most systems, the secondary storage is kept on disks (hard disk). The secondary storage management service of the kernel deals with
o Disk storage allocation
o Disk scheduling (deciding the order in which the disk access requests are serviced)
o Free disk space management
Protection Systems
• Most of the modern operating systems are designed in such a way to support multiple users
with different levels of access permissions (e.g. Windows 10 with user permissions like
‘Administrator’, ‘Standard’, ‘Restricted’, etc.).
• Protection deals with implementing the security policies to restrict the access to both user and
system resources by different applications or processes or users. In multiuser supported
operating systems, one user may not be allowed to view or modify the whole/portions of
another user’s data or profile details.
• In addition, some applications may not be granted permission to make use of some of the system resources. This kind of protection is provided by the protection services running within the kernel.
Interrupt Handler
• The kernel provides a handler mechanism for all external/internal interrupts generated by the system. Interrupts can be either Synchronous or Asynchronous.
• Interrupts which occur in sync with the currently executing task are known as Synchronous interrupts.
• Usually the software interrupts fall under the Synchronous Interrupt category. Divide by zero, memory segmentation error, etc. are examples of synchronous interrupts.
• For synchronous interrupts, the interrupt handler runs in the same context as the interrupting task.
• Asynchronous interrupts are interrupts which occur at any point of execution of any task and are not in sync with the currently executing task.
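A small hedged example of a synchronous interrupt: on a typical POSIX/x86 system an integer divide-by-zero raised by the running task is delivered back to that same task as the SIGFPE signal, which mirrors the point that the handler runs in the context of the interrupting task (the exact behaviour is platform-dependent).

/* Sketch: a synchronous interrupt (integer divide-by-zero) is raised by the
 * currently executing task itself and delivered back to it as SIGFPE.        */
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <unistd.h>

static void on_fpe(int sig)
{
    /* handler runs in the context of the task that caused the fault */
    fprintf(stderr, "caught signal %d (divide-by-zero)\n", sig);
    _exit(1);
}

int main(void)
{
    signal(SIGFPE, on_fpe);
    volatile int zero = 0;
    int x = 10 / zero;        /* triggers the synchronous interrupt */
    printf("%d\n", x);        /* never reached */
    return 0;
}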
Monolithic Kernel
• In the monolithic kernel architecture, all kernel services run in the kernel space and all kernel modules share the same memory space.
• The major drawback of the monolithic kernel is that any error or failure in any one of the kernel modules leads to the crashing of the entire kernel application.
Microkernel
• The microkernel design incorporates only the essential set of Operating System services into
the kernel.
• The rest of the Operating System services are implemented in programs known as 'Servers', which run in user space.
• Memory management, process management, timer systems and interrupt handlers are the essential services which form part of the microkernel.
• Microkernel based design approach offers the following benefits
Robustness: If a problem is encountered in any of the services which run as 'Server' applications, the same can be reconfigured and re-started without the need for re-starting the entire OS. Since these services run in their own memory space, the chances of corruption of the kernel services are ideally zero.
Configurability: Any service which runs as a 'Server' application can be changed without the need to restart the whole system. This makes the system dynamically configurable.
General Purpose Operating System (GPOS)
Operating systems which are deployed in general computing systems are referred to as General Purpose Operating Systems (GPOS).
A Personal Computer/Desktop system is a typical example of a system where a GPOS is deployed.
Windows XP, MS-DOS, etc. are examples of General Purpose Operating Systems.
Real-Time Operating System (RTOS)
Operating systems which are deployed in embedded systems demanding real-time response.
Deterministic in execution behaviour; consumes only a known amount of time for kernel applications.
Implements scheduling policies so that the highest priority task/application is always executed.
From a multitasking point of view, executing multiple tasks is like a single book being read by multiple people: at a time only one person can read it, and the readers take turns.
Different bookmarks may be used to help a reader identify where to resume reading next time.
An Operating System decides which task to execute in case there are multiple tasks to be
executed. The operating system maintains information about every task and information about
the state of each task.
The information about a task is recorded in a data structure called the task context. When a
task is executing, it uses the processor and the registers available for all sorts of processing.
When a task leaves the processor for another task to execute before it has finished its own, it
should resume at a later time from where it stopped and not from the first instruction. This
requires the information about the task with respect to the registers of the processor to be
stored somewhere. This information is recorded in the task context.
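An illustrative (purely hypothetical) C structure showing the kind of register information an RTOS might save as the task context; real kernels define their own layouts and field names.

/* Illustrative only: what a saved task context might contain. */
#include <stdint.h>

typedef struct task_context {
    uint32_t pc;          /* program counter: where the task resumes execution */
    uint32_t sp;          /* stack pointer of the task's private stack          */
    uint32_t psr;         /* processor status/flags register                    */
    uint32_t regs[13];    /* general-purpose registers in use by the task       */
} task_context_t;

/* On a context switch the scheduler conceptually does:
 *   save CPU registers -> outgoing task's context
 *   load CPU registers <- incoming task's context                              */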
Task States
• In an operating system there are always multiple tasks. At a time only one task can be executed. This means that there are other tasks which are waiting for their turn to be executed.
• Depending on whether it is executing or not, a task may be classified into the following three states:
• Running state - Only one task can actually be using the processor at a given time; that task is said to be the "running" task and its state is the "running state". No other task can be in that same state at the same time.
• Ready state - Tasks that are not currently using the processor but are ready to run are
in the “ready” state. There may be a queue of tasks in the ready state.
• Waiting state - Tasks that are neither in the running nor the ready state, but are waiting for some event external to themselves to occur before they can go for execution, are in the "waiting" state. (A minimal sketch of these states follows this list.)
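A minimal sketch of the three task states as a C enumeration; the names are illustrative, not taken from any particular RTOS.

/* Illustrative task-state bookkeeping; names are hypothetical. */
typedef enum {
    TASK_READY,     /* waiting in the ready queue for the CPU              */
    TASK_RUNNING,   /* currently using the processor (only one at a time)  */
    TASK_WAITING    /* blocked until some external event occurs            */
} task_state_t;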
Process Concept:
Process: A process or task is an instance of a program in execution. The execution of a process must progress in a sequential manner; at any time, at most one instruction is executed. The process includes the current activity as represented by the value of the program counter and the contents of the processor's registers. It also includes the process stack, which contains temporary data (such as method parameters, return addresses and local variables), and a data section, which contains global variables.
Process state: As a process executes, it changes state. The state of a process is defined by the current activity of that process. Each process may be in one of the following states.
Many processes may be in ready and waiting state at the same time.
Process scheduling:
Scheduling is a fundamental function of an OS. When a computer is multiprogrammed, it has multiple processes competing for the CPU at the same time. If only one CPU is available, a choice has to be made regarding which process to execute next. This decision-making process is known as scheduling, and the part of the OS that makes this choice is called the scheduler. The algorithm it uses in making this choice is called the scheduling algorithm.
Scheduling queues: As processes enter the system, they are put into a job queue. This queue consists of all processes in the system. The processes that are residing in main memory and are ready and waiting to execute are kept on a list called the ready queue.
A process control block contains many pieces of information associated with a specific process. It includes the following information.
Process state: The state may be new, ready, running, waiting or terminated state.
Program counter: It indicates the address of the next instruction to be executed for this process.
CPU registers: The registers vary in number and type depending on the computer architecture. They include accumulators, index registers, stack pointers and general-purpose registers, plus any condition-code information; this state must be saved when an interrupt occurs, to allow the process to be continued correctly afterward.
CPU scheduling information: This information includes the process priority, pointers to scheduling queues and any other scheduling parameters.
Memory management information: This may include such information as the value of the base and limit registers, the page tables or the segment tables, depending upon the memory system used by the operating system.
Accounting information: This information includes the amount of CPU and real time used,
time limits, account number, job or process numbers and so on.
I/O Status Information: This information includes the list of I/O devices allocated to this process, a list of open files and so on. The PCB simply serves as the repository for any information that may vary from process to process. (An illustrative PCB layout is sketched below.)
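An illustrative PCB layout in C covering the fields listed above; the structure and field names are hypothetical, since the real layout is OS-specific.

/* Illustrative Process Control Block; field names are hypothetical. */
#include <stdint.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int           pid;              /* process identifier                       */
    proc_state_t  state;            /* new/ready/running/waiting/terminated     */
    uint32_t      pc;               /* address of the next instruction          */
    uint32_t      registers[16];    /* saved CPU registers                      */
    int           priority;         /* CPU-scheduling information               */
    struct pcb   *next_in_queue;    /* link used by the scheduling queues       */
    uint32_t      base, limit;      /* memory-management (base & limit) info    */
    uint64_t      cpu_time_used;    /* accounting information                   */
    int           open_files[16];   /* I/O status: descriptors of open files    */
} pcb_t;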
Threads
Applications use concurrent processes to speed up their operation. However, switching between
processes within an application incurs high process switching overhead because the size of the process
state information is large, so operating system designers developed an alternative model of execution
of a program, called a thread, that could provide concurrency within an application with less overhead
To understand the notion of threads, let us analyze process switching overhead and see where a saving
can be made. Process switching overhead has two components:
• Execution related overhead: The CPU state of the running process has to be saved and the CPU state
of the new process has to be loaded in the CPU. This overhead is unavoidable.
• Resource-use related overhead: The process context also has to be switched. It involves switching of
the information about resources allocated to the process, such as memory and files, and interaction of
the process with other processes. The large size of this information adds to the process switching
overhead.
Consider child processes Pi and Pj of the primary process of an application. These processes inherit
the context of their parent process. If none of these processes have allocated any resources of their
own, their context is identical; their state information differs only in their CPU states and contents of
their stacks. Consequently, while switching between Pi and Pj ,much of the saving and loading of
process state information is redundant. Threads exploit this feature to reduce the switching overhead.
A process creates a thread through a system call. The thread does not have resources of its own, so it
does not have a context; it operates by using the context of the process, and accesses the resources of
the process through it. We use the phrases "thread(s) of a process" and "parent process of a thread" to describe the relationship between a thread and the process whose context it uses.
POSIX Threads:
POSIX Threads, usually referred to as pthreads, is an execution model that exists independently from
a language, as well as a parallel execution model. It allows a program to control multiple different flows
of work that overlap in time. Each flow of work is referred to as a thread, and creation and control
over these flows is achieved by making calls to the POSIX Threads API. POSIX Threads is an API
defined by the standard POSIX.1c, Threads extensions (IEEE Std 1003.1c-1995).
Implementations of the API are available on many Unix-like POSIX-conformant operating systems
such as FreeBSD, NetBSD, OpenBSD, Linux, Mac OS X, Android[1] and Solaris, typically bundled as
a library libpthread. DR-DOS and Microsoft Windows implementations also exist: within the
SFU/SUA subsystem which provides a native implementation of a number of POSIX APIs, and also
within third-party packages such as pthreads-w32,[2] which implements pthreads on top of existing
Windows API.
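A minimal pthreads sketch showing the API in use: two threads are created with pthread_create() and share the parent process's context, and the main thread waits for them with pthread_join() (compile with -lpthread).

/* Minimal pthreads sketch: two flows of work inside one process. */
#include <stdio.h>
#include <pthread.h>

static void *worker(void *arg)
{
    int id = *(int *)arg;
    printf("thread %d running in the parent process's address space\n", id);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;

    pthread_create(&t1, NULL, worker, &id1);
    pthread_create(&t2, NULL, worker, &id2);

    pthread_join(t1, NULL);    /* wait for both flows of work to complete */
    pthread_join(t2, NULL);
    return 0;
}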
Win32 Threads:
Win32 threads are the threads supported by various flavours of the Windows operating system. The Win32 Application Programming Interface (Win32 API) libraries provide the standard set of Win32 thread creation and management functions.
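A comparable sketch using the Win32 API, assuming a Windows build environment: CreateThread() starts the thread and WaitForSingleObject() waits for it to finish.

/* Sketch: creating and joining a thread with the Win32 API. */
#include <windows.h>
#include <stdio.h>

static DWORD WINAPI worker(LPVOID arg)
{
    printf("Win32 thread %lu running\n", GetCurrentThreadId());
    return 0;
}

int main(void)
{
    HANDLE h = CreateThread(NULL, 0, worker, NULL, 0, NULL);
    if (h == NULL)
        return 1;
    WaitForSingleObject(h, INFINITE);   /* wait for the thread to finish */
    CloseHandle(h);
    return 0;
}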
Pre-emptive Scheduling
Employed in systems which implement preemptive multitasking.
When and how often each process gets a chance to execute (gets the CPU time) is dependent on the type of preemptive scheduling algorithm used for scheduling the processes.
The scheduler can preempt (stop temporarily) the currently executing task/process and select
another task from the ‘Ready’ queue for execution
When to pre-empt a task and which task is to be picked up from the ‘Ready’ queue for execution
after preempting the current task is purely dependent on the scheduling algorithm
The act of moving a 'Running' process into the 'Ready' queue by the scheduler, without the process requesting it, is known as 'Preemption'.
Time-based preemption and priority-based preemption are the two important approaches
adopted in preemptive scheduling
Scheduling algorithm
The kernel service/application, which implements the scheduling algorithm, is known as ‘Scheduler’. The
process scheduling decision may take place when a process switches its state to
1. ‘Ready’ state from ‘Running’ state
2. ‘Blocked/Wait’ state from ‘Running’ state
3. ‘Ready’ state from ‘Blocked/Wait’ state
4. 'Completed' state from 'Running' state (when the process terminates its execution)
Non-preemptive Scheduling
1) First-Come-First-Served (FCFS) / FIFO Scheduling: The CPU is allocated to the processes in the order in which they enter the 'Ready' queue; the process that enters first is serviced first.
EXAMPLE-1:
Three processes with process IDs P1, P2, P3 and estimated completion times 10, 5, 7 milliseconds respectively enter the ready queue together in the order P1, P2, P3. Calculate the waiting time and Turn Around Time (TAT) for each process and the average waiting time and Turn Around Time (assuming there is no I/O waiting for the processes).
Waiting Time for P1 = 0 ms (P1 starts executing first)
Waiting Time for P2 = 10 ms (P2 starts executing after completing P1)
Waiting Time for P3 = 15 ms (P3 starts executing after completing P1 and P2)
Average waiting time = (Waiting time for all processes) / No. of Processes
= (Waiting time for (P1+P2+P3)) / 3
= (0+10+15)/3 = 25/3
= 8.33 milliseconds
Average Turn Around Time = (Turn Around Time for all processes) / No. of Processes
= (Turn Around Time for (P1+P2+P3)) / 3
= (10+15+22)/3 = 47/3
= 15.66 milliseconds
Average Turn Around Time (TAT) is the sum of average waiting time and average execution time.
Average Execution Time = (Execution time for all processes)/No. of processes
= (Execution time for (P1+P2+P3))/3
= (10+5+7)/3 = 22/3
= 7.33 milliseconds
Average Turn Around Time = Average waiting time + Average execution time
= 8.33 + 7.33
= 15.66 milliseconds
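The same FCFS calculation can be expressed as a short program; this sketch simply replays the example above (burst times 10, 5, 7 ms, all arriving together) and prints the per-process and average values.

/* Sketch: FCFS waiting time and turnaround time for the example above. */
#include <stdio.h>

int main(void)
{
    int burst[] = {10, 5, 7};                 /* P1, P2, P3 in ready-queue order */
    int n = 3, wait = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        int tat = wait + burst[i];            /* TAT = waiting time + execution time */
        printf("P%d: waiting = %2d ms, TAT = %2d ms\n", i + 1, wait, tat);
        total_wait += wait;
        total_tat  += tat;
        wait += burst[i];                     /* next process waits for this one too */
    }
    printf("Average waiting time = %.2f ms\n", (float)total_wait / n);
    printf("Average TAT          = %.2f ms\n", (float)total_tat  / n);
    return 0;
}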
EXAMPLE- 2:
Calculate the waiting time and Turn Around Time (TAT) for each process and the average waiting time and Turn Around Time (assuming there is no I/O waiting for the processes) for the above example if the processes enter the 'Ready' queue together in the order P2, P1, P3.
Assuming the CPU is readily available at the time of arrival of P2, P2 starts executing without any waiting
in the ‘Ready’ queue. Hence the waiting time for P2 is zero. The waiting time for all processes is given as
Waiting Time for P2 = 0 ms (P2 starts executing first)
Waiting Time for P1 = 5 ms (P1 starts executing after completing P2)
Waiting Time for P3 = 15 ms (P3 starts executing after completing P2 and P1)
Average waiting time = (Waiting time for all processes) / No. of Processes
= (Waiting time for (P2+P1+P3)) / 3
= (0+5+15)/3 = 20/3
= 6.66 milliseconds
Average Turn Around Time = (Turn Around Time for all processes) / No. of Processes
= (Turn Around Time for (P2+P1+P3)) / 3
= (5+15+22)/3 = 42/3
= 14 milliseconds
2) The Last-Come-First-Served (LCFS) scheduling algorithm also allocates CPU time to the processes
based on the order in which they are entered in the ‘Ready’ queue. The last entered process is serviced first.
LCFS scheduling is also known as Last In First Out ( LIFO) where the process, which is put last into the
‘Ready’ queue, is serviced first.
EXAMPLE:
Three processes with process IDs P1, P2, P3 and estimated completion times 10, 5, 7 milliseconds respectively enter the ready queue together in the order P1, P2, P3 (assume only P1 is present in the 'Ready' queue when the scheduler picks it up, and P2, P3 entered the 'Ready' queue after that). Now a new
process P4 with estimated completion time 6 ms enters the ‘Ready’ queue after 5 ms of scheduling P1.
Calculate the waiting time and Turn Around Time (TAT) for each process and the Average waiting time
and Turn Around Time (Assuming there is no I/O waiting for the processes).
Waiting Time for P1 = 0 ms (P1 starts executing first)
Waiting Time for P4 = 5 ms (Execution Start Time – Arrival Time = 10 – 5)
Waiting Time for P3 = 16 ms (P3 is serviced after P1 and P4 complete)
Waiting Time for P2 = 23 ms (P2 entered the 'Ready' queue first among the waiting processes and is serviced last)
Average waiting time = (0 + 5 + 16 + 23)/4 = 44/4 = 11 milliseconds
Turn Around Time (TAT) for P1 = 10 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P4 = 11 ms (Time spent in Ready Queue + Execution Time = (Execution Start Time – Arrival Time) + Estimated Execution Time = (10 – 5) + 6)
Turn Around Time (TAT) for P3 = 23 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P2 = 28 ms (Time spent in Ready Queue + Execution Time)
Average Turn Around Time = (Turn Around Time for all processes) / No. of Processes
= (Turn Around Time for (P1+P4+P3+P2)) / 4
= (10+11+23+28)/4 = 72/4
= 18 milliseconds
LCFS scheduling is not optimal and it also possesses the same drawback as that of FCFS algorithm.
Shortest Remaining Time (SRT) Scheduling
Three processes with process IDs P1, P2, P3 and estimated completion times 10, 5, 7 milliseconds respectively enter the ready queue together. A new process P4 with an estimated completion time of 2 ms enters the queue after 2 ms.
At the beginning, only three processes (P1, P2 and P3) are available in the ready queue, and the SRT scheduler picks up the process with the shortest remaining time for execution completion (in this example P2, with remaining time 5 ms) for scheduling.
Now process P4, with estimated execution completion time 2 ms, enters the 'Ready' queue after 2 ms of start of execution of P2. The processes are re-scheduled for execution in the following order:
WT for P2 = 0 ms + 2 ms = 2ms (P2 starts executing first and is interrupted by P4 and has to wait till
the completion of P4 to get the next CPU slot)
WT for P4 = 0 ms (P4 starts executing by pre-empting P2 since the execution time for completion of
P4 (2ms) is less than that of the Remaining time for execution completion of P2 (3ms here))
WT for P3 = 7 ms and WT for P1 = 14 ms (after P4 and the remaining part of P2 complete, P3 (7 ms) is scheduled ahead of P1 (10 ms))
TAT for P2 = 7 ms (completion time – arrival time = 7 – 0)
TAT for P4 = 2 ms ((execution start time – arrival time) + execution time = (2 – 2) + 2)
TAT for P3 = 14 ms and TAT for P1 = 24 ms
Average waiting time = (2 + 0 + 7 + 14)/4 = 23/4 = 5.75 milliseconds
Average Turn Around Time = (7 + 2 + 14 + 24)/4 = 47/4 = 11.75 milliseconds
Preemptive Scheduling
Preemptive scheduling is employed in systems which implement the preemptive multitasking model. In preemptive scheduling, every task in the 'Ready' queue gets a chance to execute. When and how often each process gets a chance to execute (gets the CPU time) is dependent on the type of preemptive scheduling algorithm used for scheduling the processes.
Round Robin (RR) Scheduling
This type of algorithm is designed for time-sharing systems. It is similar to FCFS scheduling, with a preemption condition added to switch between processes. A small unit of time called the quantum time or time slice is used to switch between the processes. The average waiting time under the round-robin policy is often quite long.
Three processes P1, P2, P3 with estimated completion times of 6, 4, 2 ms respectively enter the ready queue together in the order P1, P2, P3. Calculate the WT, AWT, TAT and ATAT using the RR algorithm with time slice = 2 ms.
The scheduler sorts the 'Ready' queue based on the FCFS policy, picks up P1 from the queue and executes it for the time slice of 2 ms.
When the time slice expires, P1 is preempted and P2 is scheduled for execution. The time slice expires after 2 ms of execution of P2. Now P2 is preempted and P3 is picked up for execution. P3 completes its execution within the time slice and the scheduler picks P1 again for execution for the next time slice. This procedure is repeated till all the processes are serviced.
WT for P1 = 0 + (6-2) + (10-8) = 6 ms (P1 starts executing first, waits for two time slices to get execution back and again one time slice for its final CPU slot)
WT for P2 = (2-0) + (8-4) = 2 + 4 = 6 ms (P2 starts executing after P1 executes for one time slice and then waits for two time slices to get the CPU time again)
WT for P3 = (4-0) = 4 ms (P3 starts executing after the first time slices of P1 and P2 and completes its execution in a single time slice)
Average waiting time = (6 + 6 + 4)/3 = 16/3 = 5.33 milliseconds
TAT for P1 = 12 ms, TAT for P2 = 10 ms, TAT for P3 = 6 ms (completion time – arrival time)
Average Turn Around Time = (12 + 10 + 6)/3 = 28/3 = 9.33 milliseconds
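A small sketch that simulates the round-robin example above (bursts 6, 4, 2 ms, time slice 2 ms, all arriving at t = 0) and derives the same waiting and turnaround times.

/* Sketch: simulating the round-robin schedule for the example above. */
#include <stdio.h>

#define N 3

int main(void)
{
    int burst[N]  = {6, 4, 2};          /* P1, P2, P3 */
    int remain[N] = {6, 4, 2};
    int finish[N] = {0}, t = 0, slice = 2, done = 0;

    while (done < N) {
        for (int i = 0; i < N; i++) {
            if (remain[i] == 0)
                continue;
            int run = remain[i] < slice ? remain[i] : slice;
            t += run;
            remain[i] -= run;
            if (remain[i] == 0) {       /* process completes in this slot */
                finish[i] = t;
                done++;
            }
        }
    }
    for (int i = 0; i < N; i++)
        printf("P%d: TAT = %d ms, waiting = %d ms\n",
               i + 1, finish[i], finish[i] - burst[i]);
    return 0;
}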
Priority based preemptive scheduling gives real-time attention to high priority tasks. Thus priority based preemptive scheduling is adopted in systems which demand 'Real-Time' behaviour. Most RTOSs make use of the preemptive priority based scheduling algorithm for process scheduling. Preemptive priority based scheduling also possesses the same drawback as non-preemptive priority based scheduling: 'Starvation'. This can be eliminated by the 'Aging' technique. Refer to the section on non-preemptive priority based scheduling for more details on 'Starvation' and 'Aging'.
Process Synchronization
A co-operating process is one that can affect or be affected by other processes executing in the system. Co-operating processes may either directly share a logical address space or be allowed to share data only through files. Coordinating this concurrent access to shared data is known as process synchronization.
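A minimal illustration of synchronization using a pthread mutex (threads are used here instead of separate processes for brevity): without the lock, the two flows of execution could interleave their updates to the shared counter and corrupt it.

/* Sketch: two co-operating threads updating shared data; a mutex serialises
 * the critical section so the shared counter stays consistent.              */
#include <stdio.h>
#include <pthread.h>

static long counter = 0;                    /* shared data */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);          /* enter critical section */
        counter++;
        pthread_mutex_unlock(&lock);        /* leave critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}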
Deadlock:
In a multiprogramming environment several processes may compete for a finite number of resources. A process requests resources; if the resources are not available at that time, the process enters a waiting state. Waiting processes may never change their state again, because the resources they have requested are held by other waiting processes. This situation is known as deadlock.
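A hedged sketch of how a deadlock can arise in code: two threads acquire the same two locks in opposite order, so each waits forever for a resource held by the other (the sleep() call is only there to make the problematic interleaving likely).

/* Sketch: a classic deadlock caused by acquiring two locks in opposite order. */
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

static pthread_mutex_t res_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t res_b = PTHREAD_MUTEX_INITIALIZER;

static void *task1(void *arg)
{
    pthread_mutex_lock(&res_a);
    sleep(1);                          /* give task2 time to grab res_b */
    pthread_mutex_lock(&res_b);        /* blocks forever: task2 holds res_b */
    pthread_mutex_unlock(&res_b);
    pthread_mutex_unlock(&res_a);
    return NULL;
}

static void *task2(void *arg)
{
    pthread_mutex_lock(&res_b);
    sleep(1);
    pthread_mutex_lock(&res_a);        /* blocks forever: task1 holds res_a */
    pthread_mutex_unlock(&res_a);
    pthread_mutex_unlock(&res_b);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, task1, NULL);
    pthread_create(&t2, NULL, task2, NULL);
    pthread_join(t1, NULL);            /* never returns: the two tasks are deadlocked */
    pthread_join(t2, NULL);
    return 0;
}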
Question Bank
1) What is an OS? Explain the operating system architecture with a neat diagram.
2) Compare GPOS and RTOS
3) Define kernel. Give the difference between the types of kernels with a neat diagram.
4) Mention the different functions of the real time kernel and explain briefly.
5) What is a process? Explain the structure of the process with a neat diagram.
6) What is a PCB? Mention the memory organisation of a process/thread.
7) Explain the concept of multithreading with a neat diagram.
8) Compare process and thread.
9) What is context switching? Explain it with a neat diagram.
10) What is multitasking/multiprocessing? Illustrate with problem examples all types of multitasking.