
Unit-II

RTOS
A real-time operating system (RTOS) is an operating system intended to serve real-time applications that process data as it comes in, typically without buffering delays. Processing time requirements are measured in tenths of seconds or shorter increments of time.
An RTOS has an advanced algorithm for scheduling. Scheduler flexibility enables a wider orchestration of process priorities across the system.
Key factors in a real-time OS are minimal interrupt latency and minimal thread-switching latency.
A real-time OS is valued more for how quickly or how predictably it can respond than for the amount of work it can perform in a given period of time.
RTOS USES
Airline reservation systems.
Air traffic control systems.
Systems that provide immediate updates.
Systems that provide up-to-the-minute information on stock prices.
Defense applications such as RADAR.
Networked multimedia systems.
Command and control systems.
RTOS NAMES
eCos
LynxOS
QNX
RTLinux
Symbian OS
VxWorks
Windows CE
MontaVista Linux
RTOS
Features that set RTOSes apart from GPOSes include:
better reliability in embedded application contexts,
the ability to scale up or down to meet application needs,
faster performance,
reduced memory requirements,
scheduling policies tailored for real-time embedded systems,
support for diskless embedded systems by allowing executables to boot and run from ROM or RAM, and
better portability to different hardware platforms.
High-level view of an RTOS
RTOS COMPONENTS
1. Scheduler: contained within each kernel, the scheduler follows a set of algorithms that determine which task executes when. Common scheduling algorithms include round-robin and preemptive scheduling.
2. Objects: special kernel constructs that help developers create applications for real-time embedded systems. Common kernel objects include tasks, semaphores, and message queues.
3. Services: operations that the kernel performs on an object or, more generally, operations such as timing, interrupt handling, and resource management.
RTOS COMPONENT DIAGRAM
Scheduler
The scheduler is at the heart of every kernel.
Every process has an execution path and an associated state (ready, running, or exited/blocked).
PROCESS: a sequential program in execution.
This section describes the following topics:
schedulable entities,
multitasking,
context switching,
the dispatcher, and
scheduling algorithms.
Schedulable Entities
A schedulable entity is a kernel object that can compete for execution time on a system, based on a predefined scheduling algorithm.
Tasks and processes are examples of schedulable entities found in most kernels.
A task is an independent thread of execution that contains a sequence of independently schedulable instructions.
Semaphores and message queues are not schedulable entities; they are IPC objects used for task synchronization and communication.
Multitasking

Multitasking is the ability of the operating system to handle multiple activities within set deadlines.
The kernel multitasks in such a way that many threads of execution appear to be running concurrently.
As the number of tasks to schedule increases, so do CPU performance requirements.
The Context Switch

Each task has its own context, which is the state of the CPU registers required each time it is scheduled to run.
A context switch occurs when the scheduler switches from one task to another.
Every time a new task is created, the kernel also creates and maintains an associated task control block (TCB). TCBs are system data structures that the kernel uses to maintain task-specific information.
The Context Switch

As shown in the figure, when the kernel's scheduler determines that it needs to stop running task 1 and start running task 2, it takes the following steps:
1. The kernel saves task 1's context information in its TCB.
2. It loads task 2's context information from its TCB, which becomes the current thread of execution.
3. The context of task 1 is frozen while task 2 executes, but if the scheduler needs to run task 1 again, task 1 continues from where it left off just before the context switch.
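The save-and-restore cycle above can be sketched in Python, where a generator's paused frame plays the role of the saved context in a TCB (the task names, step labels, and function names here are illustrative, not from any real kernel API):

```python
def make_task(name, work):
    """A task as a generator: the paused generator frame acts like the
    saved CPU context a kernel would keep in the task's TCB."""
    def task():
        for step in work:
            yield f"{name}:{step}"
    return task()

def dispatch(tasks, switches):
    """Toy dispatcher: resume one task for one step, then context-switch."""
    trace = []
    for i in switches:
        try:
            trace.append(next(tasks[i]))   # restore context, run one step
        except StopIteration:
            pass                           # task has run to completion
    return trace

t1 = make_task("task1", ["a", "b"])
t2 = make_task("task2", ["x", "y"])
# Run task1, switch to task2, then resume task1 exactly where it left off.
print(dispatch({1: t1, 2: t2}, [1, 2, 2, 1]))
```

Note how task1 resumes at its second step after task2 has run, mirroring the frozen-context behavior described above.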
The Dispatcher
The dispatcher is the part of the scheduler that performs context switching and changes the flow of execution.
Flow of control can pass through three areas:
through an application task,
through an ISR (interrupt service routine), and
through the kernel.
When it is time to leave the kernel, the dispatcher is responsible for passing control to one of the tasks in the user's application.
Scheduling Algorithms

The two most common scheduling algorithms are:
preemptive priority-based scheduling, and
round-robin scheduling.
Preemptive priority-based scheduling
Real-time kernels generally support 256 priority levels, in which 0 is the highest and 255 the lowest. Some kernels assign the priorities in reverse order, where 255 is the highest and 0 the lowest.

In a preemptive priority-based scheduler, each task has a priority, and the highest-priority task runs first.
If a task with a priority higher than the current task becomes ready to run, the kernel immediately saves the current task's context in its TCB and switches to the higher-priority task. As shown in the figure above, task 1 is preempted by higher-priority task 2, which is then preempted by task 3.
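The preemption rule can be sketched with a small simulation: whenever a task arrives whose priority number is lower (i.e., higher priority, with 0 highest as in the text), it preempts the current task, which goes back to the ready list. This is an illustrative model, not any kernel's actual API:

```python
import heapq

def schedule(events):
    """Simulate preemptive priority-based dispatch.

    events: list of (time, task_name, priority) arrivals; a lower number
    means a higher priority. Returns which task is running after each arrival.
    """
    ready = []            # min-heap ordered by priority: the ready list
    running = None        # (priority, name) of the currently running task
    trace = []
    for _, name, prio in sorted(events):
        if running is None or prio < running[0]:
            if running is not None:
                heapq.heappush(ready, running)  # preempted: back to ready list
            running = (prio, name)              # higher-priority task runs
        else:
            heapq.heappush(ready, (prio, name)) # waits on the ready list
        trace.append(running[1])
    return trace

# Task 1 is preempted by higher-priority task 2, which is in turn
# preempted by task 3, mirroring the figure described in the text.
print(schedule([(0, "task1", 200), (1, "task2", 100), (2, "task3", 50)]))
```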
Round-Robin Scheduling
Round-robin scheduling provides each task an equal share of the CPU execution time.
With time slicing, each task executes for a defined interval, or time slice, in an ongoing cycle, which is the round robin. A run-time counter tracks the time slice for each task, incrementing on every clock tick. When one task's time slice completes, the counter is cleared, and the task is placed at the end of the cycle. Newly added tasks of the same priority are placed at the end of the cycle, with their run-time counters initialized to 0.
If a task in a round-robin cycle is preempted by a higher-priority task, its run-time count is saved and then restored when the interrupted task is again eligible for execution. This idea is illustrated in Figure 4.5, in which task 1 is preempted by higher-priority task 4 but resumes where it left off when task 4 completes.
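The time-slice rotation above can be sketched as a small simulation over clock ticks (the task names and tick counts are illustrative assumptions):

```python
from collections import deque

def round_robin(tasks, slice_ticks, total_ticks):
    """Time-slice a set of equal-priority tasks.

    Each task runs for slice_ticks clock ticks; when its slice completes,
    its run-time counter is cleared and it moves to the end of the cycle.
    Returns the task that owns each tick.
    """
    cycle = deque((name, 0) for name in tasks)  # (name, run-time counter)
    timeline = []
    for _ in range(total_ticks):
        name, counter = cycle[0]
        timeline.append(name)
        counter += 1                   # counter increments on every clock tick
        if counter == slice_ticks:     # slice used up: clear the counter and
            cycle.popleft()            # place the task at the end of the cycle
            cycle.append((name, 0))
        else:
            cycle[0] = (name, counter)
    return timeline

print(round_robin(["t1", "t2", "t3"], slice_ticks=2, total_ticks=8))
```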
Objects
The most common RTOS kernel objects are:
Tasks: concurrent and independent threads of execution that can compete for CPU execution time.
Semaphores: token-like objects that can be incremented or decremented by tasks for synchronization or mutual exclusion.
Message queues: buffer-like data structures that can be used for synchronization, mutual exclusion, and data exchange by passing messages between tasks.
Developers creating real-time embedded applications can combine these basic kernel objects (as well as others not mentioned here) to solve common real-time design problems, such as concurrency, activity synchronization, and data communication.
Services

These services comprise sets of API calls:
timer management,
interrupt handling,
device I/O, and
memory management.
Key Characteristics of an RTOS

reliability,
predictability,
performance,
compactness, and
scalability.
Reliability
Embedded systems must be reliable. Different degrees of reliability may be required, depending on the application.
Predictability
The RTOS used needs to be predictable to a certain degree. The term deterministic describes RTOSes with predictable behavior, in which the completion of operating system calls occurs within known timeframes.
Performance

This requirement dictates that an embedded system must perform fast enough to fulfill its timing requirements.
A processor's performance is expressed in millions of instructions per second (MIPS).
Data transfer throughput is typically measured in multiples of bits per second (bps).
Sometimes developers measure RTOS performance on a call-by-call basis.
Compactness
Application design constraints and cost constraints help determine how compact an embedded system can be.
A cell phone clearly must be small, portable, and low cost. These design requirements limit system memory, which in turn limits the size of the application and operating system.
In such designs, the RTOS clearly must be small and efficient.
Scalability
Because RTOSes can be used in a wide variety of embedded systems, they must be able to scale up or down to meet application-specific requirements.
An RTOS should be capable of adding or deleting modular components, including file systems and protocol stacks.
Tasks

Task definition
Task states and scheduling
Typical task operations
Typical task structure and
Task coordination and concurrency
Defining a Task

A task is schedulable: it is able to compete for execution time on a system, based on a predefined scheduling algorithm.
Upon creation, each task has an associated name, a unique ID, a priority (if part of a preemptive scheduling plan), a task control block (TCB), a stack, and a task routine.
Examples of system tasks include:
Initialization or startup task: initializes the system and creates and starts system tasks.
Idle task: uses up processor idle cycles when no other activity is present.
Logging task: logs system messages.
Exception-handling task: handles exceptions.
Debug agent task: allows debugging with a host debugger.
Task States and Scheduling
Ready state: the task is ready to run but cannot because a higher-priority task is executing.
Blocked state: the task has requested a resource that is not available, has requested to wait until some event occurs, or has delayed itself for some duration.
Running state: the task is the highest-priority task and is running.
Ready State
 Tasks 1, 2, 3, 4, and 5 are ready to run and are
waiting in the task-ready list.
 Because task 1 has the highest priority (70), it
is the first task ready to run. If nothing higher
is running, the kernel removes task 1 from the
ready list and moves it to the running state.
 During execution, task 1 makes a blocking call.
As a result, the kernel moves task 1 to the
blocked state; takes task 2, which is first in the
list of the next-highest priority tasks (80), off
the ready list; and moves task 2 to the running
state.
 Next, task 2 makes a blocking call. The kernel
moves task 2 to the blocked state; takes task 3,
which is next in line of the priority 80 tasks,
off the ready list; and moves task 3 to the
running state.
 As task 3 runs, it frees the resource that task 2 requested. The kernel returns task 2 to the ready state and inserts it at the end of the list of tasks ready to run at priority level 80. Task 3 continues as the currently running task.
Running State

Only one task can run at a time.
When a task moves from the running state to the ready state, it is preempted by a higher-priority task.
Unlike a ready task, a running task can move to the blocked state in any of the following ways:
by making a call that requests an unavailable resource,
by making a call that requests to wait for an event to occur, and
by making a call to delay the task for some duration.
In each of these cases, the task is moved from the running state to the blocked state, as described next.
Blocked State
The possibility of blocked states is extremely important in real-time systems, because without blocked states, lower-priority tasks could not run.
When a task becomes unblocked, it moves from the blocked state to the ready state if it is not the highest-priority task.
If the unblocked task is the highest-priority task, it moves directly to the running state and preempts the currently running task.
Typical Task Operations

A kernel, however, also provides an API that allows developers to manipulate tasks.
I. Creating and deleting tasks
Create: creates a task
Delete: deletes a task
II. Controlling task scheduling
Suspend: suspends a task
Resume: resumes a task
Delay: delays a task
Restart: restarts a task
Get Priority: gets the current task's priority
Set Priority: dynamically sets a task's priority
Preemption lock: locks out higher-priority tasks from preempting the current task
Preemption unlock: unlocks a preemption
III. Obtaining task information
Get ID: gets the current task's ID
Task Structure

1. Run-to-completion task:
RunToCompletionTask()
{
    Initialize the application (services, kernel objects)
    Create endless-loop tasks
    Create kernel objects
    Delete objects
}
2. Endless-loop task:
EndlessLoopTask()
{
    Initialization code
    Loop Forever
    {
        Body of loop
        Make one or more blocking calls
    }
}
Semaphores
Def-A semaphore (sometimes called a
semaphore token) is a kernel object that one or
more threads of execution can acquire or
release for the purposes of synchronization or
mutual exclusion

defining a semaphore,
typical semaphore operations, and
common semaphore use
Defining Semaphores
Binary semaphores
Counting semaphores
Mutual exclusion (mutex) semaphores
The basic operations are down (P, or wait) and up (V, or signal):
wait(S)
{
    while (S <= 0)
        ;        // no-op loop
    S--;
}
signal(S)
{
    S++;
}
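The wait/signal semantics can be tried out with Python's threading.Semaphore as a stand-in for a kernel semaphore object (the function name demo is only for illustration; acquire and release correspond to wait and signal):

```python
import threading

def demo():
    """Exercise wait/signal (acquire/release) on a semaphore with S = 1."""
    S = threading.Semaphore(1)
    r1 = S.acquire(blocking=False)   # wait(S): token taken, S goes 1 -> 0
    r2 = S.acquire(blocking=False)   # S is 0: a blocking wait() would block
    S.release()                      # signal(S): S goes 0 -> 1
    r3 = S.acquire(blocking=False)   # a token is available again
    return [r1, r2, r3]

print(demo())
```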
Binary Semaphores
A binary semaphore can have a value of either 0 or 1.
When a binary semaphore's value is 0, the semaphore is considered unavailable (or empty); when the value is 1, the binary semaphore is considered available (or full).
A blocking semaphore can be built from binary semaphores S1 and S2 (initially S1 = 1, S2 = 0) and a counter C (down = P = wait; up = V = signal):
wait(S)
{
    wait(S1);
    C--;
    if (C < 0)
    {
        signal(S1);
        wait(S2);    // process suspends (sleep mode)
    }
    else
        signal(S1);
}
signal(S)
{
    wait(S1);
    C++;
    if (C <= 0)
        signal(S2);  // wake a blocked process
    else
        signal(S1);
}
Counting Semaphores
A counting semaphore uses a count to allow it to be acquired or released multiple times. When creating a counting semaphore, assign the semaphore a count that denotes the number of semaphore tokens it has initially.
If the initial count is 0, the counting semaphore is created in the unavailable state. If the count is greater than 0, the semaphore is created in the available state, and the number of tokens it has equals its count.
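Token counting can be demonstrated directly with threading.Semaphore, which Python initializes with the token count just as described (the count of 3 here is an arbitrary example):

```python
import threading

# A counting semaphore created with 3 tokens starts in the available state.
pool = threading.Semaphore(3)

# Three acquires each take a token; the fourth finds the count at 0.
taken = [pool.acquire(blocking=False) for _ in range(4)]
print(taken)

pool.release()                       # returning a token: count 0 -> 1
print(pool.acquire(blocking=False))  # available again
```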
Mutual Exclusion (Mutex) Semaphores

A mutual exclusion (mutex) semaphore is a special binary semaphore that supports ownership, recursive access, task deletion safety, and one or more protocols for avoiding problems inherent to mutual exclusion.
Differences
BINARY SEMAPHORE: used for process synchronization; a signaling mechanism operated with wait() and signal(); any task can release it, and the highest-priority waiting task is unblocked first; multiple threads can acquire and release a binary semaphore.
MUTEX: used to lock a critical section; a locking mechanism operated with lock() and unlock(); only one thread can acquire a mutex at a time, and only the owning thread can release it.
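The ownership difference can be observed in Python, using threading.Semaphore for the semaphore and threading.RLock as a rough stand-in for an ownership-tracking mutex (an assumption of this sketch; a real RTOS mutex has more features, such as deletion safety):

```python
import threading

# A semaphore has no owner: any task may signal it, even one that
# never acquired it.
sem = threading.Semaphore(0)
sem.release()                        # "signal" from a non-acquirer is fine
sem_ok = sem.acquire(blocking=False)

# An RLock tracks its owner: releasing from another thread is refused.
mtx = threading.RLock()
mtx.acquire()                        # main thread becomes the owner

outcome = []
def intruder():
    try:
        mtx.release()                # not the owner
        outcome.append("released")
    except RuntimeError:
        outcome.append("refused")

t = threading.Thread(target=intruder)
t.start(); t.join()
mtx.release()                        # the owner releases normally
print(sem_ok, outcome[0])
```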
Typical Semaphore Operations

creating and deleting semaphores,
acquiring and releasing semaphores,
clearing a semaphore's task-waiting list, and
getting semaphore information.
Creating and Deleting semaphores

Operation: Description
Create: creates a semaphore
Delete: deletes a semaphore
Binary: specify the initial semaphore state and the task-waiting order (TWO).
Counting: specify the initial semaphore count and the task-waiting order.
Mutex: specify the task-waiting order and enable task deletion safety and ownership.
Acquiring and Releasing semaphores
Operation: Description
Acquire: acquire a semaphore token
Release: release a semaphore token
Wait forever: the task blocks until it is able to acquire the semaphore.
Wait with a timeout: the task blocks until it is able to acquire the semaphore or until a set interval of time expires.
Do not wait: the task makes a request to acquire a semaphore token but does not block if one is unavailable.
clearing a semaphore’s task-waiting list

Operation: Description
Flush: unblocks all tasks waiting on a semaphore
Flush is useful for broadcast signaling to a group of tasks. The operation frees all tasks waiting in the semaphore's task-waiting list.
Getting Semaphore Information

Operation: Description
Show info: shows general information about a semaphore
Show blocked tasks: gets a list of IDs of tasks that are blocked on a semaphore
Use of Semaphore

wait-and-signal synchronization
multiple-task wait-and-signal synchronization
credit-tracking synchronization
single shared-resource-access synchronization
recursive shared-resource-access synchronization,
and
multiple shared-resource-access synchronization
wait-and-signal synchronization
Two tasks can communicate for the purpose of synchronization without exchanging data. For example, a binary semaphore can be used between two tasks to coordinate the transfer of execution control:
tWaitTask()
{
    Acquire binary semaphore token
}
tSignalTask()
{
    Release binary semaphore token
}
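The pseudocode above maps directly onto Python threads; the names tWaitTask and tSignalTask mirror the pseudocode, and a semaphore created with count 0 makes the wait task block until it is signaled:

```python
import threading

sync = threading.Semaphore(0)   # initially unavailable: wait task must block
order = []

def tWaitTask():
    sync.acquire()              # blocks here until tSignalTask signals
    order.append("wait-task ran")

def tSignalTask():
    order.append("signal-task ran")
    sync.release()              # transfer execution control to the waiter

w = threading.Thread(target=tWaitTask)
w.start()
tSignalTask()
w.join()
print(order)
```

Because the wait task cannot proceed until the signal is released, the signal task's work is always recorded first.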
multiple-task wait-and-signal synchronization

When coordinating the synchronization of more than two tasks, use the flush operation on the task-waiting list of a binary semaphore:
tWaitTask()
{
    Do some processing specific to task
    Acquire binary semaphore token
}
tSignalTask()
{
    Do some processing specific to task
    Flush binary semaphore's task-waiting list
}
credit-tracking synchronization

Sometimes the rate at which the signaling task executes is higher than that of the signaled task. In this case, a mechanism is needed to count each signaling occurrence. The counting semaphore provides just this facility. With a counting semaphore, the signaling task can continue to execute and increment a count at its own pace, while the wait task, when unblocked, executes at its own pace:
tWaitTask()
{
    Acquire counting semaphore token
}
tSignalTask()
{
    Release counting semaphore token
}
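Credit tracking can be seen with a counting semaphore initialized to 0: a fast signaler banks credits, and the slower wait task later drains exactly that many (the count of 5 is an arbitrary example):

```python
import threading

credits = threading.Semaphore(0)

# The fast signaling task posts 5 events before the wait task runs at all;
# each release banks one credit in the semaphore's count.
for _ in range(5):
    credits.release()

# The wait task later consumes the banked credits at its own pace.
consumed = 0
while credits.acquire(blocking=False):
    consumed += 1
print(consumed)
```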
single shared-resource-access synchronization

One of the more common uses of semaphores is to provide mutually exclusive access to a shared resource. A shared resource might be a memory location, a data structure, or an I/O device: essentially anything that might have to be shared between two or more concurrent threads of execution. A semaphore can be used to serialize access to a shared resource:
tAccessTask()
{
    Acquire binary semaphore token
    Read or write to shared resource
    Release binary semaphore token
}
recursive shared-resource-access
synchronization
Sometimes a developer might want a task to access a shared resource recursively. This situation might exist if tAccessTask calls RoutineA, which calls RoutineB, and all three need access to the same shared resource:
tAccessTask()
{
    Acquire mutex
    Access shared resource
    Call RoutineA
    Release mutex
}
RoutineA()
{
    Acquire mutex
    Access shared resource
    Call RoutineB
    Release mutex
}
RoutineB()
{
    Acquire mutex
    Access shared resource
    Release mutex
}
multiple shared-resource-access synchronization

For cases in which multiple equivalent shared resources are used, a counting semaphore comes in handy:
tAccessTask()
{
    Acquire counting semaphore token
    Read or write to shared resource
    Release counting semaphore token
}
Message Queues
Defining message queues,
Message queue states,
Message queue content,
Typical message queue operations, and
Typical message queue use
Defining Message Queues
A message queue is a buffer-like object through which tasks and ISRs (interrupt service routines) send and receive messages to communicate and synchronize with data.
A message queue is like a pipeline: this temporary buffering decouples a sending and a receiving task.
A message queue has several associated components that the kernel uses to manage the queue. When a message queue is first created, it is assigned an associated queue control block (QCB), a message queue name, a unique ID, memory buffers, a queue length, a maximum message length, and one or more task-waiting lists.
A message queue, its associated parameters, and supporting data structures.
It is the kernel's job to assign a unique ID to a message queue and to create its QCB and task-waiting list.
Message Queue States
When a message queue is first created, its FSM is in the empty state. If a task attempts to receive messages from this message queue while the queue is empty, the task blocks and, if it chooses to, is held on the message queue's task-waiting list, in either FIFO or priority-based order.
In some kernel implementations, when a task attempts to send a message to a full message queue, the sending function returns an error code to that task. Other kernel implementations allow such a task to block, moving the blocked task into the sending task-waiting list, which is separate from the receiving task-waiting list.
Message Queue Content

Message queues can be used to send and receive a variety of data. Some examples include:
a temperature value from a sensor,
a bitmap to draw on a display,
a text message to print to an LCD,
a keyboard event, and
a data packet to send over the network.
Message Queue Operations
creating and deleting message queues
sending and receiving messages, and
obtaining message queue information
Creating and Deleting Message Queues

Create: creates a message queue
Delete: deletes a message queue
Sending and Receiving Messages
Send: sends a message to a message queue
Receive: receives a message from a message queue
Broadcast: broadcasts messages
Sending Messages
Messages can be held in FIFO or priority-based task-waiting lists.
Obtaining Message Queue Information
Show queue info: gets information on a message queue
Show queue's task-waiting list: gets a list of tasks in the queue's task-waiting list
Message Queue Use

non-interlocked, one-way data communication,
interlocked, one-way data communication,
interlocked, two-way data communication, and
broadcast communication.
Non-Interlocked, One-Way Data Communication

One of the simplest scenarios for message-based communications requires a sending task (also called the message source), a message queue, and a receiving task (also called a message sink):
tSourceTask ()
{
Send message to message queue
:
}

tSinkTask ()
{
:
Receive message from message queue
:
}
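Python's queue.Queue provides this decoupling between a source and a sink thread; the task names mirror the pseudocode, and the temperature values are an arbitrary example payload:

```python
import queue
import threading

mq = queue.Queue(maxsize=10)     # the message queue decouples the two tasks

def tSourceTask():
    for temp in [20.5, 21.0, 21.5]:
        mq.put(temp)             # send message to message queue

received = []
def tSinkTask():
    for _ in range(3):
        received.append(mq.get())   # receive (blocks while queue is empty)

snk = threading.Thread(target=tSinkTask)
src = threading.Thread(target=tSourceTask)
snk.start()
src.start()
src.join()
snk.join()
print(received)                  # FIFO order is preserved
```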
Interlocked, One-Way Data Communication

In some designs, a sending task might require a handshake confirming that the receiving task has been successful in receiving the message. This process is called interlocked communication, in which the sending task sends a message and waits to see if the message is received:

tSourceTask ()
{
    :
    Send message to message queue
    Acquire binary semaphore
    :
}

tSinkTask ()
{
    :
    Receive message from message queue
    Give binary semaphore
    :
}
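The interlock can be built in Python from a queue plus a semaphore used as the acknowledgment: the source blocks until the sink confirms receipt (names and the "ping" payload are illustrative):

```python
import queue
import threading

mq = queue.Queue()
ack = threading.Semaphore(0)     # the handshake token, initially unavailable
log = []

def tSourceTask():
    mq.put("ping")               # send message to message queue
    ack.acquire()                # block until the sink confirms receipt
    log.append("source: delivery confirmed")

def tSinkTask():
    msg = mq.get()               # receive message from message queue
    log.append(f"sink: got {msg}")
    ack.release()                # give binary semaphore: handshake back

src = threading.Thread(target=tSourceTask)
snk = threading.Thread(target=tSinkTask)
src.start()
snk.start()
src.join()
snk.join()
print(log)
```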
Interlocked, Two-Way Data Communication
Sometimes data must flow bidirectionally between tasks, which is called interlocked, two-way data communication. This form of communication can be useful when designing a client/server-based system:

tClientTask ()
{
    :
    Send a message to the requests queue
    Wait for message from the server queue
    :
}

tServerTask ()
{
    :
    Receive a message from the requests queue
    Send a message to the client queue
    :
}
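A minimal client/server round trip over two queues can be sketched as follows; the request text and the server's uppercase "processing" are invented stand-ins for real work:

```python
import queue
import threading

requests, replies = queue.Queue(), queue.Queue()

def tServerTask():
    msg = requests.get()         # receive a message from the requests queue
    replies.put(msg.upper())     # send the response to the client queue

server = threading.Thread(target=tServerTask)
server.start()

requests.put("status?")          # client sends its request...
answer = replies.get()           # ...and blocks waiting for the reply
server.join()
print(answer)
```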
Broadcast Communication

Some message-queue implementations allow developers to broadcast a copy of the same message to multiple tasks:
tBroadcastTask ()
{
:
Send broadcast message to queue
:
}
Note: similar code for tSignalTasks 1, 2, and 3.
tSignalTask ()
{
:
Receive message on queue
:
}
