
INTRODUCTION TO EMBEDDED SYSTEMS | BETCK105J
MODULE 5: Real-time Operating System (RTOS) based Embedded System

Module – 5

Operating System

AJIET | CSE

RTOS & IDE FOR ESD

An operating system (OS) is software, consisting of programs and data, that runs on computers, manages the computer hardware and provides common services for efficient execution of various application software.

Fig 5.1 OS layer interactions


Operating System Basics
 The Operating System acts as a bridge between the user applications/tasks and the underlying
system resources through a set of system functionalities and services
 OS manages the system resources and makes them available to the user applications/tasks on
a need basis
 The primary functions of an Operating system are to

– Make the system convenient to use

– Organize and manage the system resources efficiently and correctly

Fig 5.2 OS Layers and Kernel Services (layers, top to bottom: User Applications / Application Programming Interface (API) / Kernel Services [Memory, Process, Time, File System and I/O System Management] / Device Driver Interface / Underlying Hardware)


DEPT. OF CSE | AJIET

The Kernel
 The kernel is the core of the operating system and is responsible for managing the system resources
and the communication among the hardware and other system services.
 Kernel acts as the abstraction layer between system resources and user applications.
 Kernel contains a set of system libraries and services.

Process management
 Process Management includes setting up the memory space for the process, loading the process’s
code into the memory space, allocating system resources, scheduling and managing the execution
of the process, setting up and managing the Process Control Block (PCB), Inter Process
Communication and synchronisation, process termination/ deletion, etc.

Primary Memory Management


 The term primary memory refers to the volatile memory (RAM) where processes are loaded and
variables and shared data associated with each process are stored.
 The Memory Management Unit (MMU) of the kernel is responsible for
o Keeping track of which part of the memory area is currently used by which process
o Allocating and De-allocating memory space on a need basis (Dynamic memory
allocation).

File System Management


 A file is a collection of related information. A file could be a program (source code or executable), a text file, an image file, a word document, an audio/video file, etc.
 Each of these files differs in the kind of information it holds and the way in which the information is stored. The file operation is a useful service provided by the OS.
The file system management service of Kernel is responsible for
o The creation, deletion and alteration of files
o Creation, deletion and alteration of directories
o Saving of files in the secondary storage memory (e.g. Hard disk storage)
o Providing automatic allocation of file space based on the amount of free space available
o Providing a flexible naming convention for the files

I/O System (Device) Management

 Kernel is responsible for routing the I/O requests coming from different user applications to the
appropriate I/O devices of the system.
 In a well-structured OS, direct accessing of I/O devices is not allowed; access to them is provided through a set of Application Programming Interfaces (APIs) exposed by the kernel.
 The kernel maintains a list of all the I/O devices of the system.
 This list may be available in advance, at the time of building the kernel. Some kernels dynamically update the list of available devices as and when a new device is installed (e.g. the Windows NT kernel keeps the list updated when a new plug ‘n’ play USB device is attached to the system).
 Device Manager is responsible for
o Loading and unloading of device drivers
o Exchanging information and the system specific control signals to and from the device

Secondary Storage Management


 The secondary storage management deals with managing the secondary storage memory devices, if
any, connected to the system.

 Secondary memory is used as backup medium for programs and data since the main memory is
volatile.
 In most of the systems, the secondary storage is kept in disks (Hard Disk). The secondary storage
management service of kernel deals with
o Disk storage allocation
o Disk scheduling (Time interval at which the disk is activated to backup data)
o Free Disk space management

Protection Systems
• Most of the modern operating systems are designed in such a way to support multiple users
with different levels of access permissions (e.g. Windows 10 with user permissions like
‘Administrator’, ‘Standard’, ‘Restricted’, etc.).
• Protection deals with implementing the security policies to restrict the access to both user and
system resources by different applications or processes or users. In multiuser supported
operating systems, one user may not be allowed to view or modify the whole/portions of
another user’s data or profile details.
• In addition, some applications may not be granted permission to make use of some of the system resources. This kind of protection is provided by the protection services running within the kernel.
Interrupt Handler
• The kernel provides a handler mechanism for all external/internal interrupts generated by the system.
• Interrupts can be either Synchronous or Asynchronous.
• Interrupts which occur in sync with the currently executing task are known as Synchronous interrupts. Usually the software interrupts fall under the Synchronous Interrupt category; divide by zero, memory segmentation error, etc. are examples of synchronous interrupts. For synchronous interrupts, the interrupt handler runs in the same context as the interrupting task.
• Asynchronous interrupts are interrupts which occur at any point of execution of any task and are not in sync with the currently executing task.

Kernel Space and User Space


• The applications/services are classified into two categories, namely: user applications and
kernel applications.
• The program code corresponding to the kernel applications/services are kept in a contiguous
area (OS dependent) of primary (working) memory and is protected from the unauthorized
access by user programs/applications.
• The memory space at which the kernel code is located is known as ‘Kernel Space’.
• Similarly, all user applications are loaded to a specific area of primary memory and this memory area is referred to as ‘User Space’.
• User space is the memory area where user applications are loaded and executed.
• The partitioning of memory into kernel and user space is purely Operating System dependent.

Monolithic Kernel and Microkernel


As we know, the kernel forms the heart of an operating system. Different approaches are adopted for building
an Operating System kernel. Based on the kernel design, kernels can be classified into ‘Monolithic’ and
‘Micro’.
Monolithic Kernel
• In monolithic kernel architecture, all kernel services run in the kernel space. Here all
kernel modules run within the same memory space under a single kernel thread.

• The major drawback of monolithic kernel is that any error or failure in any one of the
kernel modules leads to the crashing of the entire kernel application.

Microkernel
• The microkernel design incorporates only the essential set of Operating System services into
the kernel.
• The rest of the Operating System services are implemented in programs known as ‘Servers’, which run in user space.
• Memory management, process management, timer systems and interrupt handlers are the essential services which form part of the microkernel.
• Microkernel based design approach offers the following benefits
 Robustness: If a problem is encountered in any of the services which run as ‘Server’ applications, the service can be reconfigured and re-started without the need for re-starting the entire OS.
 Configurability: Any services, which run as ‘Server’ application can be changed without the
need to restart the whole system. This makes the system dynamically configurable.


Types of Operating Systems


• Depending on the type of kernel and kernel services, purpose and type of computing
systems where the OS is deployed and the responsiveness to applications, Operating
Systems are classified into
General Purpose Operating System (GPOS)

Real Time Purpose Operating System (RTOS)


General Purpose Operating System (GPOS)

 Operating Systems, which are deployed in general computing systems


 The kernel is more generalized and contains all the required services to execute generic
applications
 Need not be deterministic in execution behavior
 May inject random delays into application software and thus cause slow responsiveness of an
application at unexpected times

 Personal Computer/Desktop system is a typical example for a system where GPOSs are
deployed.
 Windows XP, MS-DOS, etc. are examples of General Purpose Operating Systems
Real Time Purpose Operating System (RTOS)
 Operating Systems, which are deployed in embedded systems demanding real-time response
 Deterministic in execution behavior. Consumes only known amount of time for kernel
applications
 Implements scheduling policies for executing the highest priority task/application always


 Implements policies and rules concerning time-critical allocation of a system’s resources


 Windows CE, QNX, VxWorks, MicroC/OS-II, etc. are examples of Real-Time Operating Systems (RTOS)
Task:
 Task is a piece of code or program that is separate from another task and can be executed
independently of the other tasks.
 In embedded systems, the operating system has to deal with a limited number of tasks
depending on the functionality to be implemented in the embedded system.
 Multiple tasks are not executed at the same time; instead they are executed in pseudo-parallel, i.e. the tasks execute in turns as they use the processor.

 From a multitasking point of view, executing multiple tasks is like a single book being read by
multiple people, at a time only one person can read it and then take turns to read it.
 Different bookmarks may be used to help a reader identify where to resume reading next time.
 An Operating System decides which task to execute in case there are multiple tasks to be
executed. The operating system maintains information about every task and information about
the state of each task.
 The information about a task is recorded in a data structure called the task context. When a
task is executing, it uses the processor and the registers available for all sorts of processing.
When a task leaves the processor for another task to execute before it has finished its own, it
should resume at a later time from where it stopped and not from the first instruction. This
requires the information about the task with respect to the registers of the processor to be
stored somewhere. This information is recorded in the task context.

Task States
• In an operating system there are always multiple tasks. At a time only one task can be executed. This means that there are other tasks which are waiting their turn to be executed.

• Depending on whether it is executing or not, a task may be classified into the following three states:

• Running state - Only one task can actually be using the processor at a given time; that task is said to be the “running” task and its state is “running state”. No other task can be in that same state at the same time.

• Ready state - Tasks that are not currently using the processor but are ready to run are
in the “ready” state. There may be a queue of tasks in the ready state.

• Waiting state - Tasks that are neither in the running nor ready state but are waiting for some event external to themselves to occur before they can execute are in the “waiting” state.

Fig 5.3 Task States Transition
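The running/ready/waiting transitions described above can be sketched as a small state machine. This is purely illustrative: the class and transition names below are hypothetical and do not correspond to any specific RTOS API.

```python
# Legal task-state transitions from the text: dispatch, preemption,
# blocking on an event, and the event occurring.
VALID_TRANSITIONS = {
    ("ready", "running"),    # scheduler dispatches the task
    ("running", "ready"),    # task is preempted
    ("running", "waiting"),  # task blocks on an external event
    ("waiting", "ready"),    # awaited event occurs
}

class Task:
    def __init__(self, name):
        self.name = name
        self.state = "ready"  # a newly admitted task waits in the ready queue

    def move_to(self, new_state):
        if (self.state, new_state) not in VALID_TRANSITIONS:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

t = Task("sensor_poll")
t.move_to("running")   # dispatched by the scheduler
t.move_to("waiting")   # blocks waiting for an I/O event
t.move_to("ready")     # event occurred; back to the ready queue
print(t.state)         # -> ready
```

Note that a waiting task can never move directly to running: it must re-enter the ready queue and be dispatched again, exactly as the transition diagram requires.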

Process Concept:

Process: A process or task is an instance of a program in execution. The execution of a process must progress in a sequential manner; at any time, at most one instruction is executed. The process includes the current activity as represented by the value of the program counter and the contents of the processor’s registers. It also includes the process stack, which contains temporary data (such as method parameters, return addresses and local variables), and a data section, which contains global variables.

Difference between process & program:


A program by itself is not a process; a program in execution is known as a process. A program is a passive entity, such as the contents of a file stored on disk, whereas a process is an active entity with a program counter specifying the next instruction to execute and a set of associated resources. A processor may be shared among several processes, with some scheduling algorithm being used to determine when to stop work on one process and service a different one.


Process state: As a process executes, it changes state. The state of a process is defined by the current activity of that process. Each process may be in one of the following states.

 New: The process is being created.


 Ready: The process is waiting to be assigned to a processor.
 Running: Instructions are being executed.
 Waiting: The process is waiting for some event to occur.
 Terminated: The process has finished execution.

Many processes may be in ready and waiting state at the same time.

Fig 5.4 Process states

Process scheduling:
Scheduling is a fundamental function of an OS. When a computer is multiprogrammed, it has multiple processes competing for the CPU at the same time. If only one CPU is available, then a choice has to be made regarding which process to execute next. This decision-making process is known as scheduling, and the part of the OS that makes this choice is called the scheduler. The algorithm it uses in making this choice is called the scheduling algorithm.

Scheduling queues: As processes enter the system, they are put into a job queue. This queue consists of all processes in the system. The processes that are residing in main memory and are ready and waiting to execute are kept on a list called the ready queue.


Process control block:


Each process is represented in the OS by a process control block (PCB). It is also known as a task control block.

A process control block contains many pieces of information associated with a specific process. It includes the following information.

 Process state: The state may be new, ready, running, waiting or terminated state.
 Program counter: It indicates the address of the next instruction to be executed for this process.
 CPU registers: The registers vary in number and type depending on the computer architecture. They include accumulators, index registers, stack pointers and general-purpose registers, plus any condition-code information, which must be saved when an interrupt occurs to allow the process to be continued correctly afterward.
 CPU scheduling information: This information includes the process priority, pointers to scheduling queues and any other scheduling parameters.
 Memory management information: This information may include such items as the values of the base and limit registers, the page tables or the segment tables, depending upon the memory system used by the operating system.
 Accounting information: This information includes the amount of CPU and real time used,
time limits, account number, job or process numbers and so on.
 I/O Status Information: This information includes the list of I/O devices allocated to this
process, a list of open files and so on. The PCB simply serves as the repository for any
information that may vary from process to process

Fig 5.5 Process Control Block
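The PCB fields listed above can be modeled as a plain record. The field names below simply mirror the list in the text; real kernels use architecture-specific layouts, so this is a sketch, not any actual kernel's structure.

```python
from dataclasses import dataclass, field

# Illustrative model of a Process Control Block; field names follow the
# information categories listed above, not a real kernel's layout.
@dataclass
class PCB:
    pid: int
    state: str = "new"             # new / ready / running / waiting / terminated
    program_counter: int = 0       # address of the next instruction
    cpu_registers: dict = field(default_factory=dict)  # saved on context switch
    priority: int = 0              # CPU-scheduling information
    memory_limits: tuple = (0, 0)  # e.g. base and limit register values
    open_files: list = field(default_factory=list)     # I/O status information
    cpu_time_used: float = 0.0     # accounting information

pcb = PCB(pid=42)
pcb.state = "ready"
pcb.cpu_registers["sp"] = 0x7FFF0000  # register contents saved at a context switch
```

The mutable fields use `default_factory` so that every PCB gets its own register set and file list, which matches the point that the PCB is a per-process repository of state.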



Threads
Applications use concurrent processes to speed up their operation. However, switching between processes within an application incurs a high process-switching overhead because the size of the process state information is large, so operating system designers developed an alternative model of execution of a program, called a thread, that provides concurrency within an application with less overhead.

To understand the notion of threads, let us analyze process switching overhead and see where a saving
can be made. Process switching overhead has two components:

• Execution related overhead: The CPU state of the running process has to be saved and the CPU state
of the new process has to be loaded in the CPU. This overhead is unavoidable.

• Resource-use related overhead: The process context also has to be switched. It involves switching of
the information about resources allocated to the process, such as memory and files, and interaction of
the process with other processes. The large size of this information adds to the process switching
overhead.

Consider child processes Pi and Pj of the primary process of an application. These processes inherit
the context of their parent process. If none of these processes have allocated any resources of their
own, their context is identical; their state information differs only in their CPU states and contents of
their stacks. Consequently, while switching between Pi and Pj ,much of the saving and loading of
process state information is redundant. Threads exploit this feature to reduce the switching overhead.

A process creates a thread through a system call. The thread does not have resources of its own, so it does not have a context; it operates by using the context of the process, and accesses the resources of the process through it. We use the phrases “thread(s) of a process” and “parent process of a thread” to describe the relationship between a thread and the process whose context it uses.

Fig. 5.6 Process and Thread Structure-Multithread
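The point that all threads of a process operate in the parent process's context, differing only in their CPU state and stacks, can be seen directly with Python's threading module: every thread of the process sees the same global data, which is also why shared updates must be serialized with a lock.

```python
import threading

counter = 0                 # lives in the process's shared context
lock = threading.Lock()     # needed precisely because the data is shared

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # serialize updates to the shared variable
            counter += 1

# Four threads of the same process, all updating the same counter
threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # -> 4000: every thread updated the same process-wide data
```

If each worker were a separate process instead, each would get its own copy of `counter` in its own context; the shared result above is exactly the thread property the text describes.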


POSIX Threads:

POSIX Threads, usually referred to as pthreads, is an execution model that exists independently from
a language, as well as a parallel execution model. It allows a program to control multiple different flows
of work that overlap in time. Each flow of work is referred to as a thread, and creation and control
over these flows is achieved by making calls to the POSIX Threads API. POSIX Threads is an API
defined by the standard POSIX.1c, Threads extensions (IEEE Std 1003.1c-1995).

Implementations of the API are available on many Unix-like POSIX-conformant operating systems such as FreeBSD, NetBSD, OpenBSD, Linux, Mac OS X, Android and Solaris, typically bundled as a library libpthread. DR-DOS and Microsoft Windows implementations also exist: within the SFU/SUA subsystem, which provides a native implementation of a number of POSIX APIs, and also within third-party packages such as pthreads-w32, which implements pthreads on top of the existing Windows API.

Fig. 5.7 POSIX Thread creation syntax
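The pthread_create/pthread_join calling sequence referred to in Fig. 5.7 maps onto Python's threading module, which on POSIX systems is typically implemented on top of the native pthread library. The mapping below is a rough analog, not the C API itself; the `results` dictionary is an assumption added because Python threads do not return values the way pthread_join's retval does.

```python
import threading

# Rough analog of the POSIX calling sequence:
#   pthread_create(&tid, NULL, start_routine, arg) -> Thread(target=..., args=...)
#   pthread_join(tid, &retval)                     -> t.join(), with 'results'
#                                                     standing in for retval
results = {}

def start_routine(arg):
    # the thread's entry point, like pthread_create's third parameter
    results[arg] = arg * arg

tids = [threading.Thread(target=start_routine, args=(i,)) for i in range(3)]
for t in tids:
    t.start()   # pthread_create both creates and starts the thread
for t in tids:
    t.join()    # wait for termination, as pthread_join does
# after the joins, results[i] == i * i for each thread
```

In the C API the same program would pass `arg` through a `void *` and collect the square through `pthread_join`'s second parameter.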

Win32 Threads:

Win32 threads are the threads supported by various flavors of Windows Operating Systems.

The Win32 Application Programming Interface (Win32 API) libraries provide the standard set of Win32 thread creation and management functions.

Pre-emptive Scheduling
 Employed in systems, which implements preemptive multitasking

 Every task in the ‘Ready’ queue gets a chance to execute.

 When and how often each process gets a chance to execute (gets the CPU time) is dependent
on the type of preemptive scheduling algorithm used for scheduling the processes


 The scheduler can preempt (stop temporarily) the currently executing task/process and select
another task from the ‘Ready’ queue for execution

 When to pre-empt a task and which task is to be picked up from the ‘Ready’ queue for execution
after preempting the current task is purely dependent on the scheduling algorithm

 A task which is preempted by the scheduler is moved to the ‘Ready’ queue.

 The act of moving a ‘Running’ process into the ‘Ready’ queue by the scheduler, without the
process requesting for it is known as ‘Preemption’

 Time-based preemption and priority-based preemption are the two important approaches
adopted in preemptive scheduling

Scheduling algorithm
The kernel service/application, which implements the scheduling algorithm, is known as ‘Scheduler’. The
process scheduling decision may take place when a process switches its state to
1. ‘Ready’ state from ‘Running’ state
2. ‘Blocked/Wait’ state from ‘Running’ state
3. ‘Ready’ state from ‘Blocked/Wait’ state
4. ‘Completed’ state (when the process terminates)

The selection of a scheduling criterion/algorithm should consider the following factors:

CPU Utilisation
Throughput
Turnaround Time
Waiting Time
Response Time

The scheduler works with the following queues: the Job Queue, the Ready Queue and the Device Queue.
CPU Utilisation:
The scheduling algorithm should always make the CPU utilisation high. CPU utilisation is a direct measure
of how much percentage of the CPU is being utilised.
Throughput:
This gives an indication of the number of processes executed per unit of time. The throughput for a good
scheduler should always be higher.
Turnaround Time:
It is the amount of time taken by a process for completing its execution. It includes the time spent by the
process for waiting for the main memory, time spent in the ready queue, time spent on
completing the I/O operations, and the time spent in execution. The turnaround time should be minimal
for a good scheduling algorithm.
Waiting Time:
It is the amount of time spent by a process in the ‘Ready’ queue waiting to get the CPU time for execution.
The waiting time should be minimal for a good scheduling algorithm.
Response Time:
It is the time elapsed between the submission of a process and the first response. For a good scheduling
algorithm, the response time should be as low as possible.
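These criteria relate through two simple identities when a process does no I/O waiting, as in the examples that follow: turnaround time is completion time minus arrival time, and waiting time is turnaround time minus the execution (burst) time. A tiny helper, with hypothetical names, makes the relationship explicit:

```python
def metrics(completion, arrival, burst):
    """Per-process scheduling metrics, assuming no I/O waiting."""
    turnaround = completion - arrival  # total time spent in the system
    waiting = turnaround - burst       # time spent in the 'Ready' queue
    return turnaround, waiting

# A process that arrives at t = 0, runs for 10 ms and completes at t = 10
# has a 10 ms turnaround and waited 0 ms.
tat, wt = metrics(completion=10, arrival=0, burst=10)
print(tat, wt)  # -> 10 0
```

Averaging these per-process values over all processes gives the average waiting time and average turnaround time used to compare algorithms below.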


Non-preemptive Scheduling
1)First-Come-First-Served (FCFS)/ FIFO Scheduling:
EXAMPLE-1:
Three processes with process IDs P1, P2, P3 with estimated completion times 10, 5, 7 milliseconds
respectively enter the ready queue together in the order P1, P2, P3. Calculate the waiting time and Turn
Around Time (TAT) for each process and the average waiting time and Turn Around Time (assuming there
is no I/O waiting for the processes).

Waiting Time for P1 = 0 ms (P1 starts executing first)
Waiting Time for P2 = 10 ms (P2 starts executing after completing P1)
Waiting Time for P3 = 15 ms (P3 starts executing after completing P1 and P2)
Average waiting time = (Waiting time for all processes) / No. of Processes
= (Waiting time for (P1+P2+P3)) / 3
= (0+10+15)/3 = 25/3
= 8.33 milliseconds
Average Turn Around Time = (Turn Around Time for all processes) / No. of Processes
= (Turn Around Time for (P1+P2+P3)) / 3
= (10+15+22)/3 = 47/3
= 15.66 milliseconds
Average Turn Around Time (TAT) is the sum of average waiting time and average execution time.
Average Execution Time = (Execution time for all processes)/No. of processes
= (Execution time for (P1+P2+P3))/3
= (10+5+7)/3 = 22/3
= 7.33
Average Turn Around Time = Average waiting time + Average execution time
= 8.33 + 7.33
= 15.66 milliseconds

EXAMPLE- 2:
Calculate the waiting time and Turn Around Time (TAT) for each process and the Average waiting time
and Turn Around Time (Assuming there is no I/O waiting for the processes) for the above example if the
process enters the ‘Ready’ queue together in the order P2, P1, P3.


Assuming the CPU is readily available at the time of arrival of P2, P2 starts executing without any waiting
in the ‘Ready’ queue. Hence the waiting time for P2 is zero. The waiting time for all processes is given as
Waiting Time for P2 = 0 ms (P2 starts executing first)
Waiting Time for P1 = 5 ms (P1 starts executing after completing P2)
Waiting Time for P3 = 15 ms (P3 starts executing after completing P2 and P1)
Average waiting time = (Waiting time for all processes) / No. of Processes
= (Waiting time for (P2+P1+P3)) / 3
= (0+5+15)/3 = 20/3
= 6.66 milliseconds
Average Turn Around Time = (Turn Around Time for all processes) / No. of Processes
= (Turn Around Time for (P2+P1+P3)) / 3
= (5+15+22)/3 = 42/3
= 14 milliseconds
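Both orderings can be checked with a short FCFS calculation. The sketch below assumes all processes arrive at t = 0, as in the two examples, so completion time equals turnaround time; the function name is illustrative.

```python
def fcfs(bursts):
    """Average waiting time and average turnaround time for processes
    served in list order, all arriving at t = 0."""
    waiting, turnaround, clock = [], [], 0
    for burst in bursts:
        waiting.append(clock)     # time already spent in the 'Ready' queue
        clock += burst
        turnaround.append(clock)  # completion time == TAT when arrival is 0
    n = len(bursts)
    return sum(waiting) / n, sum(turnaround) / n

# Order P1, P2, P3 with bursts 10, 5, 7 ms:
print(fcfs([10, 5, 7]))   # average waiting 25/3 = 8.33 ms, average TAT 47/3 = 15.67 ms
# Order P2, P1, P3:
print(fcfs([5, 10, 7]))   # average waiting 20/3 = 6.66 ms, average TAT 42/3 = 14 ms
```

The two runs reproduce the figures above and show FCFS's key weakness: average waiting time depends entirely on arrival order, so a long process arriving first penalizes everything behind it.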

2)The Last-Come-First Served (LCFS) scheduling algorithm also allocates CPU time to the processes
based on the order in which they are entered in the ‘Ready’ queue. The last entered process is serviced first.
LCFS scheduling is also known as Last In First Out ( LIFO) where the process, which is put last into the
‘Ready’ queue, is serviced first.
EXAMPLE:
Three processes with process IDs P1, P2, P3 with estimated completion times 10, 5, 7 milliseconds
respectively enter the ready queue together in the order P1, P2, P3 (assume only P1 is present in the
‘Ready’ queue when the scheduler picks it up, and P2, P3 entered the ‘Ready’ queue after that). Now a new
process P4 with estimated completion time 6 ms enters the ‘Ready’ queue after 5 ms of scheduling P1.
Calculate the waiting time and Turn Around Time (TAT) for each process and the Average waiting time
and Turn Around Time (Assuming there is no I/O waiting for the processes).

Turn Around Time (TAT) for P1 = 10 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P4 = 11 ms (Time spent in Ready Queue + Execution Time = (Execution
Start Time – Arrival Time) + Estimated Execution Time = (10 – 5) + 6 = 5 + 6)
Turn Around Time (TAT) for P3 = 23 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P2 = 28 ms (Time spent in Ready Queue + Execution Time)
Average Turn Around Time = (Turn Around Time for all processes) / No. of Processes
= (Turn Around Time for (P1+P4+P3+P2)) / 4
= (10+11+23+28)/4 = 72/4
= 18 milliseconds
LCFS scheduling is not optimal and it also possesses the same drawback as that of FCFS algorithm.
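The TAT figures above can be checked by replaying the dispatch order the LIFO rule produces: P1 runs first (it was alone in the queue), and when it finishes at t = 10 the stack holds P2, P3, P4 with P4 on top (it entered last, at t = 5), so the pop order is P4, P3, P2. The replay below is a sketch; the tuple layout is an assumption for illustration.

```python
# (name, arrival time, burst) in the order the LIFO rule dispatches them
schedule = [("P1", 0, 10), ("P4", 5, 6), ("P3", 0, 7), ("P2", 0, 5)]

clock, tat = 0, {}
for name, arrival, burst in schedule:
    clock = max(clock, arrival) + burst  # run to completion, no preemption
    tat[name] = clock - arrival          # turnaround = completion - arrival

print(tat)  # -> {'P1': 10, 'P4': 11, 'P3': 23, 'P2': 28}
avg = sum(tat.values()) / len(tat)
print(avg)  # -> 18.0 milliseconds, matching the average above
```

P2, despite being the shortest job, finishes last: a later arrival (P4) pushed it down the stack, which is the unfairness the text notes LCFS shares with FCFS.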


3) Shortest Job First (SJF) Scheduling Algorithm:

 This algorithm allocates the CPU to the process with the shortest estimated execution completion time, whenever the CPU is available.

 This scheduling is also known as shortest next CPU burst scheduling, because the scheduling is done by examining the length of the next CPU burst of the process rather than its total length. Consider the following example:

Three processes with process IDs P1, P2, P3 with estimated completion times 10, 5, 7 milliseconds respectively enter the ready queue together. A new process P4 with an estimated completion time of 2 ms enters the queue after 2 ms.

At the beginning, there are only three processes (P1, P2 and P3) available in the ready queue, and the SRT scheduler picks up the process with the shortest remaining time for execution completion (in this example P2, with remaining time 5 ms) for scheduling.

Now process P4, with estimated execution completion time 2 ms, enters the ‘Ready’ queue after 2 ms of start of execution of P2. The processes are re-scheduled for execution in the following order:

WT for P2 = 0 ms + 2 ms = 2ms (P2 starts executing first and is interrupted by P4 and has to wait till
the completion of P4 to get the next CPU slot)

WT for P4 = 0 ms (P4 starts executing by pre-empting P2 since the execution time for completion of
P4 (2ms) is less than that of the Remaining time for execution completion of P2 (3ms here))

WT for P3 = 7 ms (P3 starts executing after completing P4 and P2)


WT for P1 = 14 ms (P1 starts executing after completing P4, P2 and P3)

AWT = (0 + 2 + 7 + 14)/4 = 5.75 milliseconds

TAT for P2 = 7 ms

TAT for P4 = Time spent in Ready Queue + Execution Time

= (Execution Start Time-Arrival Time) + Estimated Execution Time

= (2-2) + 2= 2 ms

TAT for P3 = 14 ms (Time spent in Ready Queue + Execution Time)

TAT for P1 = 24 ms (Time spent in Ready Queue + Execution Time)

ATAT = (Turn Around Time for (P2+P4+P3+P1)) / 4


= (7+2+14+24)/4 = 47/4= 11.75 milliseconds
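The waiting and turnaround figures above can be verified with a small event-driven shortest-remaining-time simulation. This is a sketch: the function name and tuple layout are illustrative, and ties are broken by the order the processes are listed.

```python
def srt(procs):
    """procs: list of (name, arrival, burst) tuples.
    Preemptive shortest-remaining-time-first scheduling;
    returns per-process (waiting time, turnaround time) dictionaries."""
    arrival = {n: a for n, a, _ in procs}
    burst = {n: b for n, _, b in procs}
    remaining = dict(burst)
    clock, finish = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:                         # CPU idle until next arrival
            clock = min(arrival[n] for n in remaining)
            continue
        cur = min(ready, key=lambda n: remaining[n])  # shortest remaining time
        # run until the process finishes or the next arrival (a preemption point)
        future = [arrival[n] for n in remaining if arrival[n] > clock]
        slice_end = min([clock + remaining[cur]] + future)
        remaining[cur] -= slice_end - clock
        clock = slice_end
        if remaining[cur] == 0:
            del remaining[cur]
            finish[cur] = clock
    tat = {n: finish[n] - arrival[n] for n in finish}
    wt = {n: tat[n] - burst[n] for n in finish}
    return wt, tat

wt, tat = srt([("P1", 0, 10), ("P2", 0, 5), ("P3", 0, 7), ("P4", 2, 2)])
# e.g. wt["P2"] == 2 (preempted by P4), wt["P4"] == 0, tat["P1"] == 24
```

Running it reproduces the schedule worked out above: P2 runs 0-2, P4 preempts and runs 2-4, P2 resumes 4-7, then P3 (7-14) and P1 (14-24), giving AWT = 5.75 ms and ATAT = 11.75 ms.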


4) Priority Based Scheduling :


The Turn Around Time (TAT) and waiting time for processes in non-preemptive scheduling varies with the
type of scheduling algorithm. Priority based non-preemptive scheduling algorithm ensures that a process
with high priority is serviced at the earliest compared to other low priority processes in the ‘Ready’ queue.
The priority of a task/process can be indicated through various mechanisms. The Shortest Job First (SJF)
algorithm can be viewed as a priority based scheduling where each task is prioritised in the order of the
time required to complete the task. The lower the time required for completing a process the higher is its
priority in SJF algorithm.
Example 1
Three processes with process IDs P1, P2, P3 with estimated completion times 10, 5, 7 milliseconds and
priorities 0, 3, 2 (0 = highest priority, 3 = lowest priority) respectively enter the ready queue together.
Calculate the waiting time and Turn Around Time (TAT) for each process and the Average waiting time
and Turn Around Time (Assuming there is no I/O waiting for the processes) in priority based scheduling
algorithm. The scheduler sorts the ‘Ready’ queue based on the priority and schedules the process with the
highest priority (P1 with priority number 0) first and the next highest priority process (P3 with priority
number 2) as second, and so on. The order in which the processes are scheduled for execution is represented
as

The waiting time for all the processes are given as


Waiting time for P1 = 0 ms (P1 starts executing first)
Waiting time for P3 = 10 ms (P3 starts executing after completing P1)
Waiting time for P2 = 17 ms (P2 starts executing after completing P1 and P3)
Average waiting time = (Waiting time for all processes) / No. of Processes
= (Waiting time for (P1+P3+P2)) / 3
= (0+10+17)/3 = 27/3
= 9 milliseconds
Turn Around Time (TAT) for P1 = 10 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P3 = 17 ms (-Do-)
Turn Around Time (TAT) for P2 = 22 ms (-Do-)
Average Turn Around Time = (Turn Around Time for all processes) / No. of Processes
= (Turn Around Time for (P1+P3+P2)) / 3
= (10+17+22)/3 = 49/3
= 16.33 milliseconds
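The calculation above can be reproduced with a short simulation. The following is a minimal sketch, not from the text (the function name `priority_schedule` is chosen here for illustration); it assumes all processes arrive together at time 0 and that a lower priority number means higher priority:

```python
# Minimal sketch of non-preemptive priority scheduling.  Assumptions:
# all processes arrive at time 0, lower priority number = higher priority.
def priority_schedule(processes):
    """processes: list of (pid, burst_ms, priority).
    Returns {pid: (waiting_time, turnaround_time)}."""
    order = sorted(processes, key=lambda p: p[2])   # sort 'Ready' queue by priority
    clock, result = 0, {}
    for pid, burst, _ in order:
        waiting = clock                  # time spent in the 'Ready' queue
        clock += burst                   # run the process to completion
        result[pid] = (waiting, clock)   # TAT = waiting time + execution time
    return result

res = priority_schedule([("P1", 10, 0), ("P2", 5, 3), ("P3", 7, 2)])
avg_wt = sum(w for w, _ in res.values()) / len(res)    # 9.0 ms
avg_tat = sum(t for _, t in res.values()) / len(res)   # 16.33 ms
```

Running it on the example data reproduces the figures computed above: waiting times 0, 10 and 17 ms for P1, P3 and P2, an average waiting time of 9 ms and an average TAT of 16.33 ms.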

Preemptive Scheduling
Preemptive scheduling is employed in systems, which implements preemptive multitasking model. In
preemptive scheduling, every task in the ‘Ready’ queue gets a chance to execute. When and how often each
process gets a chance to execute (gets the CPU time) is dependent on the type of preemptive scheduling
algorithm used for scheduling the processes.


1) Preemptive SJF Scheduling / Shortest Remaining Time (SRT)


The non-preemptive SJF scheduling algorithm sorts the ‘Ready’ queue only after the current process completes its execution or enters the ‘Wait’ state, whereas the preemptive SJF scheduling algorithm sorts the ‘Ready’ queue whenever a new process enters it, and checks whether the execution time of the new process is shorter than the remaining execution time of the currently executing process. If the execution time of the new process is less, the currently executing process is preempted and the new process is scheduled for execution. Thus preemptive SJF scheduling always compares the execution completion time of a new process entering the ‘Ready’ queue (which is the same as its remaining time, since it has not yet executed) with the remaining time for completion of the currently executing process, and schedules the process with the shortest remaining time for execution. Preemptive SJF scheduling is also known as Shortest Remaining Time (SRT) scheduling.
Example 1
Three processes with process IDs P1, P2, P3 with estimated completion times 10, 5, 7 milliseconds respectively enter the ready queue together. A new process P4 with estimated completion time 2 ms enters the ‘Ready’ queue after 2 ms. Assume all the processes contain only CPU operations and no I/O operations are involved. At the beginning, only three processes (P1, P2 and P3) are available in the ‘Ready’ queue, and the SRT scheduler picks up the process with the shortest remaining time for execution completion (in this example, P2 with remaining time 5 ms) for scheduling. The execution sequence diagram for this is the same as that of Example 1 under non-preemptive SJF scheduling.

The waiting times for all the processes are given as


Waiting time for P2 = 0 ms + (4 – 2) ms = 2 ms (P2 starts executing first, is preempted by P4, and has to wait till the completion of P4 to get the next CPU slot)
Waiting time for P4 = 0 ms (P4 starts executing by preempting P2, since the execution time for completion of P4 (2 ms) is less than the remaining time for execution completion of P2 (here 3 ms))
Waiting time for P3 = 7 ms (P3 starts executing after completing P4 and P2)
Waiting time for P1 = 14 ms (P1 starts executing after completing P4, P2 and P3)
Average waiting time = (Waiting time for all the processes) / No. of Processes
= (Waiting time for (P4+P2+P3+P1)) / 4
= (0 + 2 + 7 + 14)/4 = 23/4
= 5.75 milliseconds
Turn Around Time (TAT) for P2 = 7 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P4 = 2 ms (Time spent in Ready Queue + Execution Time
= (Execution Start Time – Arrival Time) + Estimated Execution Time = (2 – 2) + 2)
Turn Around Time (TAT) for P3 = 14 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P1 = 24 ms (Time spent in Ready Queue + Execution Time)
Average Turn Around Time = (Turn Around Time for all the processes) / No. of Processes
= (Turn Around Time for (P2+P4+P3+P1)) / 4
= (7+2+14+24)/4 = 47/4
= 11.75 milliseconds


Average Waiting Time in non-preemptive SJF scheduling = 6 ms


Average Waiting Time in preemptive SJF scheduling = 5.75 ms
Average Turn Around Time in non-preemptive SJF scheduling = 12 ms
Average Turn Around Time in preemptive SJF scheduling = 11.75 ms
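The SRT example above can be checked with a small millisecond-stepped simulation. This is an illustrative sketch, not from the text (the function name `srt_schedule` is an assumption made here):

```python
# Minimal sketch of Shortest Remaining Time (SRT) scheduling, stepping
# the clock 1 ms at a time.
def srt_schedule(arrivals):
    """arrivals: list of (pid, arrival_ms, burst_ms).
    Returns {pid: (waiting_time, turnaround_time)}."""
    remaining = {pid: burst for pid, _, burst in arrivals}
    arrive = {pid: t for pid, t, _ in arrivals}
    finish, clock = {}, 0
    while remaining:
        ready = [p for p in remaining if arrive[p] <= clock]
        if not ready:                    # CPU idle until the next arrival
            clock += 1
            continue
        pid = min(ready, key=lambda p: remaining[p])  # shortest remaining time
        remaining[pid] -= 1
        clock += 1
        if remaining[pid] == 0:
            del remaining[pid]
            finish[pid] = clock
    # TAT = finish time - arrival time; waiting time = TAT - burst time
    return {pid: (finish[pid] - t - b, finish[pid] - t)
            for pid, t, b in arrivals}

res = srt_schedule([("P1", 0, 10), ("P2", 0, 5), ("P3", 0, 7), ("P4", 2, 2)])
```

For the example data this yields the same waiting times (2, 0, 7, 14 ms for P2, P4, P3, P1) and TATs (7, 2, 14, 24 ms) derived above.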

2) Round Robin Scheduling Algorithm:

This algorithm is designed primarily for time-sharing systems. It is similar to FCFS scheduling, with a preemption condition added to switch between processes. A small unit of time called the time quantum or time slice is used to switch between the processes. The average waiting time under the round robin policy is often quite long.

Fig. 5.9 Round Robin Scheduling

Consider the following example:

Three processes P1, P2, P3 with estimated completion times of 6, 4, 2 ms respectively enter the ready queue together in the order P1, P2, P3. Calculate the waiting time (WT), average waiting time (AWT), turn around time (TAT) and average turn around time (ATAT) under the RR algorithm with a time slice of 2 ms.

The scheduler sorts the ‘Ready’ queue based on the FCFS policy, picks up P1 from the queue and executes it for the time slice of 2 ms.

When the time slice expires, P1 is preempted and P2 is scheduled for execution. The time slice expires after 2 ms of execution of P2. Now P2 is preempted and P3 is picked up for execution. P3 completes its execution within the time slice and the scheduler picks P1 again for execution for the next time slice. This procedure is repeated till all the processes are serviced.


WT for P1 = 0 + (6 – 2) + (10 – 8) = 6 ms (P1 starts executing first, waits for two time slices to get the CPU back, and again one time slice before its final slot)

WT for P2 = (2 – 0) + (8 – 4) = 2 + 4 = 6 ms (P2 starts executing after P1 executes for one time slice, and then waits for two time slices to get the CPU back)

WT for P3 = (4 – 0) = 4 ms (P3 starts executing after the first time slices of P1 and P2, and completes its execution within a single time slice)

AWT = (6 + 6 + 4)/3 = 16/3 = 5.33 milliseconds

TAT for P1 = 12 ms
TAT for P2 = 10 ms
TAT for P3 = 6 ms

ATAT = (12 + 10 + 6)/3 = 28/3 = 9.33 milliseconds
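The round robin walkthrough above can be sketched as a queue-based simulation. This is an illustrative sketch, not from the text (the function name `round_robin` is an assumption), and it assumes all processes arrive together at time 0:

```python
from collections import deque

# Minimal sketch of Round Robin scheduling.  Assumption: all processes
# arrive at time 0, queued in FCFS order.
def round_robin(processes, quantum):
    """processes: list of (pid, burst_ms) in arrival (FCFS) order.
    Returns {pid: (waiting_time, turnaround_time)}."""
    remaining = dict(processes)
    queue = deque(pid for pid, _ in processes)
    clock, finish = 0, {}
    while queue:
        pid = queue.popleft()
        run = min(quantum, remaining[pid])   # run for at most one time slice
        clock += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            finish[pid] = clock
        else:
            queue.append(pid)                # preempted: go to the back of the queue
    burst = dict(processes)
    # With arrival time 0, TAT = finish time; waiting time = TAT - burst time
    return {pid: (finish[pid] - burst[pid], finish[pid]) for pid in finish}

res = round_robin([("P1", 6), ("P2", 4), ("P3", 2)], quantum=2)
```

On the example data this gives WT 6, 6 and 4 ms and TAT 12, 10 and 6 ms for P1, P2 and P3, matching the values computed above.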

3) Priority Based Scheduling


Priority based preemptive scheduling algorithm is same as that of the non-preemptive priority based
scheduling except for the switching of execution between tasks. In preemptive scheduling, any high priority
process entering the ‘Ready’ queue is immediately scheduled for execution whereas in the non-preemptive
scheduling any high priority process entering the ‘Ready’ queue is scheduled only after the currently
executing process completes its execution or voluntarily relinquishes the CPU. The priority of a task/process in preemptive scheduling is indicated in the same way as in non-preemptive priority based scheduling.
Example 1
Three processes with process IDs P1, P2, P3 with estimated completion times 10, 5, 7 milliseconds and priorities 1, 3, 2 (0—highest priority, 3—lowest priority) respectively enter the ready queue together. A new process P4 with estimated completion time 6 ms and priority 0 enters the ‘Ready’ queue 5 ms after the start of execution of P1. Assume all the processes contain only CPU operations and no I/O operations are involved. At the beginning, only three processes (P1, P2 and P3) are available in the ‘Ready’ queue, and the scheduler picks up the process with the highest priority (in this example P1 with priority 1) for scheduling. When P4, with estimated execution completion time 6 ms and priority 0, enters the ‘Ready’ queue after 5 ms of execution of P1, the scheduling being preemptive, P1 is preempted by P4 and P4 runs to completion. After 6 ms of scheduling, P4 terminates and the scheduler again sorts the ‘Ready’ queue for the process with the highest priority. Since the priority of P1 (priority 1), which was preempted by P4, is higher than that of P3 (priority 2) and P2 (priority 3), P1 is again picked up for execution by the scheduler. Due to the arrival of the process P4 with priority 0, the execution sequence changes to P1, P4, P1, P3, P2; at the beginning it was P1, P3, P2.


The waiting times for all the processes are given as


Waiting time for P1 = 0 + (11 – 5) = 0 + 6 = 6 ms
(P1 starts executing first, gets preempted by P4 after 5 ms, and gets the CPU again after completion of P4)
Waiting time for P4 = 0 ms
(P4 starts executing immediately on entering the ‘Ready’ queue, by preempting P1)
Waiting time for P3 = 16 ms (P3 starts executing after completing P1 and P4)
Waiting time for P2 = 23 ms (P2 starts executing after completing P1, P4 and P3)
Average waiting time = (Waiting time for all the processes) / No. of Processes
= (Waiting time for (P1+P4+P3+P2)) / 4
= (6 + 0 + 16 + 23)/4 = 45/4
= 11.25 milliseconds
Turn Around Time (TAT) for P1 = 16 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P4 = 6 ms
(Time spent in Ready Queue + Execution Time = (Execution Start Time – Arrival Time)
+ Estimated Execution Time = (5 – 5) + 6 = 0 + 6)
Turn Around Time (TAT) for P3 = 23 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P2 = 28 ms (Time spent in Ready Queue + Execution Time)
Average Turn Around Time = (Turn Around Time for all the processes) / No. of Processes
= (Turn Around Time for (P1 + P4 + P3 + P2)) / 4
= (16 + 6 + 23 + 28)/4 = 73/4
= 18.25 milliseconds
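The preemptive priority example can likewise be checked with a millisecond-stepped simulation. This is an illustrative sketch, not from the text (the function name `preemptive_priority` is an assumption); a lower priority number means higher priority:

```python
# Minimal sketch of preemptive priority scheduling, stepping the clock
# 1 ms at a time.  Lower priority number = higher priority.
def preemptive_priority(arrivals):
    """arrivals: list of (pid, arrival_ms, burst_ms, priority).
    Returns {pid: (waiting_time, turnaround_time)}."""
    remaining = {pid: b for pid, _, b, _ in arrivals}
    info = {pid: (t, pr) for pid, t, _, pr in arrivals}
    finish, clock = {}, 0
    while remaining:
        ready = [p for p in remaining if info[p][0] <= clock]
        if not ready:                    # CPU idle until the next arrival
            clock += 1
            continue
        pid = min(ready, key=lambda p: info[p][1])  # highest-priority ready process
        remaining[pid] -= 1
        clock += 1
        if remaining[pid] == 0:
            del remaining[pid]
            finish[pid] = clock
    # TAT = finish time - arrival time; waiting time = TAT - burst time
    return {pid: (finish[pid] - t - b, finish[pid] - t)
            for pid, t, b, _ in arrivals}

res = preemptive_priority([("P1", 0, 10, 1), ("P2", 0, 5, 3),
                           ("P3", 0, 7, 2), ("P4", 5, 6, 0)])
```

On the example data this reproduces the values above: waiting times 6, 0, 16 and 23 ms and TATs 16, 6, 23 and 28 ms for P1, P4, P3 and P2.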

Priority based preemptive scheduling gives real-time attention to high priority tasks. Thus priority based preemptive scheduling is adopted in systems which demand ‘Real-Time’ behaviour. Most RTOSs make use of the preemptive priority based scheduling algorithm for process scheduling. Preemptive priority based scheduling also possesses the same drawback as non-preemptive priority based scheduling: ‘Starvation’. This can be eliminated by the ‘Aging’ technique. Refer to the section on non-preemptive priority based scheduling for more details on ‘Starvation’ and ‘Aging’.

Process Synchronization

A co-operating process is one that can affect or be affected by other processes executing in the system. Co-operating processes may either directly share a logical address space or be allowed to share data only through files. Coordinating such concurrent access to shared data is known as process synchronization.

Deadlock:

In a multiprogramming environment several processes may compete for a finite number of resources. A process requests resources; if the resources are not available at that time, the process enters a wait state. A waiting process may never change its state again, because the resources it has requested are held by other waiting processes. This situation is known as deadlock.
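A common way to prevent the circular wait that produces such a deadlock is to make every process acquire shared resources in the same global order. The following is an illustrative Python sketch, not from the text, using two threads and two locks:

```python
import threading

# Illustrative sketch: two threads share two locks.  Acquiring the locks
# in the SAME global order in every thread removes the circular-wait
# condition, so the deadlock described above cannot form.
lock_a, lock_b = threading.Lock(), threading.Lock()
counter = 0

def worker():
    global counter
    for _ in range(1000):
        with lock_a:        # always acquire lock_a first...
            with lock_b:    # ...then lock_b, in every thread
                counter += 1

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is now exactly 2000: no lost updates and no deadlock
```

If one thread instead took lock_b first while the other took lock_a first, each could end up holding one lock while waiting forever for the other, which is exactly the deadlock situation defined above.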


1) What is an OS? Explain the operating system architecture with a neat diagram.
2) Compare GPOS and RTOS.
3) Define kernel. Give the differences between the types of kernels with a neat diagram.
4) Mention the different functions of the real-time kernel and explain them briefly.
5) What is a process? Explain the structure of a process with a neat diagram.
6) What is a PCB? Mention the memory organisation of a process/thread.
7) Explain the concept of multithreading with a neat diagram.
8) Compare process and thread.
9) What is context switching? Explain it with a neat diagram.
10) What is multitasking/multiprocessing? Illustrate all types of multitasking with problem examples.

