Embedded System: Real-Time Operating Systems
Chapter 4
Real-Time Operating Systems
Abebaw.Z
Outline
Introduction
Context switching mechanisms
Scheduling policies
Message passing and shared memory communications
Inter-process communication
Introduction
An operating system is a program that
Provides an “abstraction” of the physical machine
Provides a simple interface to the machine
An OS is also a resource manager
Provides access to the physical resources of a machine
Manages computer hardware and software resources, and provides common services for computer programs
Introduction
OS tasks
1. Process Management
In computing, a process is an instance of a computer program that is being executed by the CPU.
Process creation
Process loading
Process execution control
Interaction of the process with signal events
Process monitoring
CPU allocation
Process termination
Introduction
2. Inter-process Communication
Synchronization and coordination
Deadlock detection
Process Protection
Data Exchange Mechanisms
3. Memory Management
Services for allocating and deallocating memory to processes, and for protecting each process's memory space
4. Input / Output Management
Handles request and release of a variety of peripheral devices, and read, write, and reposition operations
Introduction
What is a real-time operating system (RTOS)?
An RTOS is a system in which correctness depends not only on the logical result of the computation but also on the time at which the results are produced.
It is intended to serve real-time applications that process data as it comes in, typically without buffering delays.
Processing time requirements (including any OS delay) are measured in tenths of seconds or shorter increments of time.
RTOSs are time-bounded systems: if the timing constraints of the system are not met, system failure is said to have occurred.
Introduction
Classifications of RTOS
1. Hard Real Time System
Failure to meet deadlines is fatal
Example: Flight Control System
2. Soft Real Time System
Late completion of jobs is undesirable but not fatal
System performance degrades as more and more jobs miss deadlines
Example: Online Databases
RTOS Architecture
Usually it is just a kernel (the essential core of the operating system)
For complex systems it also includes modules such as:
networking protocol stacks,
debugging facilities,
device I/O.
The kernel acts as an abstraction layer between the hardware and the applications
The kernel provides
an interrupt handler
a task scheduler
resource-sharing flags and
memory management
Context switching mechanisms
A context switch
also called a process switch
is the switching of the CPU from one process to another
It is an essential feature of multitasking operating systems
How does multitasking work with a single CPU?
In a multitasking operating system
the CPU seemingly executes multiple tasks/processes simultaneously
this is an illusion of concurrency
achieved by means of context switching
Context switching
Task
A small, schedulable, and sequential program unit into which an application is decomposed
Governed by three time-critical properties:
Release time refers to the point in time from which the task can be executed.
Deadline is the point in time by which the task must complete.
Execution time denotes the time the task takes to execute.
Context switching
Each task may exist in one of the following states
Dormant: task does not require CPU time
Ready: task is ready to go to the active state, waiting for CPU time
Active: task is running
Suspended: task is put on hold temporarily
Pending: task is waiting for a resource.
Context switching
What happens during switching?
The context of the to-be-suspended task is saved
The context of the to-be-executed task is restored
Task Control Block (TCB): a data structure holding the information the RTOS uses to control the task's state.
A task uses its TCB to remember its context.
Accessible only by the RTOS
Stored in a protected memory area of the kernel
Information in a TCB may include (a minimal C sketch follows below)
Task_ID
Task_State
Task_Priority
Task_Stack_Pointer
Task_Prog_Counter
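The TCB fields listed above can be pictured as a C structure. The sketch below is illustrative only, assuming a simple kernel: the field names follow the slide, while the types and the state enumeration are assumptions, not any particular RTOS's layout.

```c
/* Minimal sketch of a task control block (TCB), assuming a simple kernel.
 * Field names follow the slide; types and sizes are illustrative. */
#include <stdint.h>

typedef enum {
    TASK_DORMANT,    /* does not require CPU time          */
    TASK_READY,      /* ready to run, waiting for the CPU  */
    TASK_ACTIVE,     /* currently running                  */
    TASK_SUSPENDED,  /* put on hold temporarily            */
    TASK_PENDING     /* waiting for a resource             */
} task_state_t;

typedef struct {
    uint32_t      task_id;                   /* Task_ID            */
    task_state_t  task_state;                /* Task_State         */
    uint8_t       task_priority;             /* Task_Priority      */
    uint32_t     *task_stack_pointer;        /* Task_Stack_Pointer */
    void        (*task_prog_counter)(void);  /* Task_Prog_Counter (saved PC) */
} tcb_t;
```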
Scheduling policies
The scheduler keeps a record of the state of each task and allocates the CPU to one of them.
More information about the tasks is required
Number of tasks
Resource Requirements
Execution time
Deadlines
Scheduling algorithms
Clock Driven Scheduling
Weighted Round Robin Scheduling
Priority Scheduling
Scheduling Algorithms
Clock Driven
All parameters about jobs (execution time/deadline) are known in advance
Schedule can be computed at some regular time instances
Minimal runtime overhead
Not suitable for many applications
Weighted Round Robin
Jobs scheduled in FIFO manner
Time quantum given to a job is proportional to its weight (see the sketch below)
Example use: high-speed switching networks
Not suitable for precedence-constrained jobs,
e.g., Job A can run only after Job B.
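A minimal sketch of the weighted round-robin idea: the time slice handed to each job is its weight times a base quantum. The base quantum and the job set below are hypothetical.

```c
/* Sketch of weighted round-robin quantum assignment: each job's slice
 * is its weight multiplied by a base quantum. Values are hypothetical. */
#include <stdio.h>

#define BASE_QUANTUM_MS 10

struct job { const char *name; int weight; };

int main(void) {
    struct job jobs[] = { {"flowA", 3}, {"flowB", 1}, {"flowC", 2} };
    for (int round = 0; round < 2; round++) {       /* jobs served in FIFO order */
        for (size_t i = 0; i < sizeof jobs / sizeof jobs[0]; i++) {
            int quantum = jobs[i].weight * BASE_QUANTUM_MS;
            printf("round %d: %s runs for %d ms\n", round, jobs[i].name, quantum);
        }
    }
    return 0;
}
```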
Scheduling Algorithms
Priority Scheduling
Processor never left idle when there are ready tasks
Processor allocated to processes according to priorities
Priorities
Static - at design time
Dynamic - at runtime
Earliest Deadline First (EDF)
Process with the earliest deadline is given the highest priority (a selection sketch follows after this list)
Least Slack Time First (LSF)
slack = relative deadline − remaining execution time
Rate Monotonic Scheduling (RMS)
A task's priority is inversely proportional to its period
For periodic tasks
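A minimal sketch of EDF selection, assuming each task record carries an absolute deadline and a ready flag (both names are illustrative): the scheduler simply picks the ready task with the earliest deadline.

```c
/* Minimal sketch of Earliest Deadline First (EDF) selection: among the
 * ready tasks, pick the one with the earliest absolute deadline. */
#include <stddef.h>
#include <stdint.h>

struct task {
    const char *name;
    uint32_t    deadline;  /* absolute deadline, e.g. in ms ticks   */
    int         ready;     /* non-zero if the task is ready to run  */
};

/* Return the ready task with the earliest deadline, or NULL if none. */
struct task *edf_pick(struct task *tasks, size_t n) {
    struct task *best = NULL;
    for (size_t i = 0; i < n; i++) {
        if (tasks[i].ready && (best == NULL || tasks[i].deadline < best->deadline))
            best = &tasks[i];
    }
    return best;
}
```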
Schedulers(Dispatchers)
The scheduler (dispatcher) is the part of the kernel responsible for determining which task runs next.
Most real-time kernels use priority-based scheduling
Each task is assigned a priority based on its importance
The priority is application-specific
Priority-Based Kernels
Non-preemptive
Preemptive
Non-Preemptive Kernels
Perform “cooperative multitasking”
Each task must explicitly give up control of the CPU (see the dispatch-loop sketch below)
This must be done frequently to maintain the illusion of concurrency
Asynchronous events are still handled by Interrupt Service Routines (ISRs)
ISRs can make a higher-priority task ready to run
But ISRs always return to the interrupted task
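A minimal sketch of cooperative multitasking with hypothetical task functions: each task does a small amount of work and returns, and a simple loop dispatches them in turn, which is what creates the illusion of concurrency.

```c
/* Minimal sketch of cooperative ("non-preemptive") multitasking: each task
 * runs briefly and returns control, and a loop dispatches the tasks in turn.
 * The task bodies are hypothetical placeholders. */
static void task_sensor(void)  { /* read a sensor, then give up the CPU */ }
static void task_logger(void)  { /* write one log record, then return   */ }
static void task_display(void) { /* refresh the display, then return    */ }

typedef void (*task_fn)(void);

int main(void) {
    task_fn tasks[] = { task_sensor, task_logger, task_display };
    for (;;) {                                   /* simple cooperative dispatcher  */
        for (unsigned i = 0; i < sizeof tasks / sizeof tasks[0]; i++)
            tasks[i]();                          /* each task must return promptly */
    }
}
```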
Non-Preemptive Kernels
Advantages
Interrupt latency is typically low
Task-level response time is determined by the duration of the longest task
Less need to guard shared data
Disadvantage
Responsiveness
A higher priority task might have to wait for a long time
Response time is nondeterministic
Preemptive Kernels
The highest-priority task ready to run is always given control of the CPU
If an ISR makes a higher-priority task ready, the higher-priority task is resumed (instead of the interrupted task)
Execution of the highest-priority task is deterministic
Task-level response time is minimized
Message passing and shared memory communications
Message Passing
Is a form of communication used in inter-process communication.
The producer typically uses the send() system call to send messages, and the consumer uses the receive() system call to retrieve them.
Message passing and shared memory communications
Message Queue
kernels provide an object called a message queue
used to hold messages sent and received by tasks
a buffer-like object through which tasks and ISRs send and receive messages to communicate and synchronize with each other (a usage sketch follows below)
Message queues have
an associated message queue control block (QCB),
a name,
a unique ID,
memory buffers,
a message queue length, and
a maximum message length.
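A minimal sketch of message-queue use, with the POSIX message queue API standing in for an RTOS kernel's own calls (on Linux, link with -lrt). The queue name, queue length, and message size are hypothetical.

```c
/* Minimal sketch of message-queue communication using POSIX message queues. */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 }; /* queue length, max message length */
    mqd_t q = mq_open("/demo_q", O_CREAT | O_RDWR, 0600, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    const char *msg = "sensor reading: 42";
    mq_send(q, msg, strlen(msg) + 1, 0);             /* producer side: send()    */

    char buf[64];
    if (mq_receive(q, buf, sizeof buf, NULL) != -1)  /* consumer side: receive() */
        printf("got: %s\n", buf);

    mq_close(q);
    mq_unlink("/demo_q");
    return 0;
}
```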
Message passing and shared memory communications
Shared Memory communication
Is an OS-provided abstraction that
allows a memory region to be simultaneously accessed by multiple programs
One process creates an area in RAM which other processes can access
Since both processes can access the shared memory area like regular working memory, this is a very fast way of communication (a POSIX sketch follows below)
It is less powerful than other mechanisms: for example, the communicating processes must be running on the same machine
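A minimal sketch of shared-memory communication using POSIX shm_open() and mmap(); a second process mapping the same name would see the data immediately. The region name and size are hypothetical, and error handling is abbreviated.

```c
/* Minimal sketch of shared-memory communication with POSIX shared memory. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_NAME "/demo_shm"
#define SHM_SIZE 4096

int main(void) {
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);  /* create/open the region */
    if (fd == -1) { perror("shm_open"); return 1; }
    ftruncate(fd, SHM_SIZE);                               /* set the region size    */

    char *mem = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    /* Writer side: another process mapping SHM_NAME sees this immediately. */
    strcpy(mem, "hello from the writer");
    printf("wrote: %s\n", mem);

    munmap(mem, SHM_SIZE);
    close(fd);
    /* shm_unlink(SHM_NAME) would remove the region when it is no longer needed. */
    return 0;
}
```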
Inter-process communication
Tasks usually need to communicate and synchronize with each other for different reasons, such as:
Accessing a shared resource
To signal the occurrence of events to each other
The RTOS provides built-in inter-task primitives, which are kernel objects that facilitate this synchronization and communication
Examples of such objects include
Semaphores
Message queues
Signals, pipes, and so on
Semaphores
It is a variable or abstract data type used to control access to a common resource by multiple processes and to avoid critical-section problems in a concurrent system such as a multitasking operating system.
Used for:
Mutual exclusion
Signaling the occurrence of an event
Synchronizing activities among tasks
semaphores have
An associated semaphore control block (SCB),
A unique ID,
A user-assigned value (binary or a count), and
A task-waiting list.
Semaphore Operations
There are two types
Binary semaphores
Value is 0 or 1
If value = 0, the semaphore is not available
If value = 1, the semaphore is available
Counting semaphores
Value >= 0
Uses a count to allow a resource that has multiple instances to be acquired or released
Initialize (or create)
Value must be provided
Waiting list is initially empty
Semaphore Operations (cont.)
Wait (or pend)
Used for acquiring the semaphore
If the semaphore is available (the semaphore value is positive), the value is decremented, and the task is not blocked
Otherwise, the task is blocked and placed in the waiting list
Most kernels allow you to specify a timeout
If the timeout occurs, the task is unblocked and an error code is returned to the task
Signal (or post)
Used for releasing the semaphore (a wait/signal sketch follows below)
If no task is waiting, the semaphore value is incremented
Otherwise, one of the waiting tasks is made ready to run, but the value is not incremented
Which waiting task should receive the semaphore?
The highest-priority waiting task, or
The first waiting task (FIFO order)
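A minimal sketch of the wait (pend) and signal (post) operations, using POSIX semaphores as a stand-in for an RTOS kernel's API; the initial value and the 100 ms timeout are hypothetical.

```c
/* Minimal sketch of semaphore pend/post, including a pend with timeout. */
#include <semaphore.h>
#include <stdio.h>
#include <time.h>

int main(void) {
    sem_t sem;
    sem_init(&sem, 0, 1);          /* create: binary semaphore, initial value 1    */

    sem_wait(&sem);                /* pend: value was 1, so decrement and continue */
    /* ... access the shared resource here ... */
    sem_post(&sem);                /* post: release; value becomes 1 again         */

    /* Pend with a timeout: block at most 100 ms waiting for the semaphore. */
    struct timespec abs_timeout;
    clock_gettime(CLOCK_REALTIME, &abs_timeout);
    abs_timeout.tv_nsec += 100 * 1000000L;
    if (abs_timeout.tv_nsec >= 1000000000L) {      /* normalize the timespec */
        abs_timeout.tv_sec += 1;
        abs_timeout.tv_nsec -= 1000000000L;
    }
    if (sem_timedwait(&sem, &abs_timeout) == -1)
        printf("timeout: semaphore not acquired, error returned to the task\n");
    else
        sem_post(&sem);

    sem_destroy(&sem);
    return 0;
}
```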
Sharing I/O Devices
Sharing I/O Device (cont.)
In the example, each task must know about the semaphore in order to access the device
A better solution:
Encapsulate the semaphore (a sketch follows below)
Encapsulating a Semaphore
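A minimal sketch of encapsulating a semaphore inside a driver routine: tasks call display_write() and never touch the semaphore directly. The device, the function names, and the use of POSIX semaphore calls are all assumptions for illustration.

```c
/* Minimal sketch of hiding a semaphore inside a driver module. */
#include <semaphore.h>
#include <stdio.h>

static sem_t display_sem;                 /* hidden inside the driver module        */

void display_init(void) {
    sem_init(&display_sem, 0, 1);         /* one task may use the display at a time */
}

void display_write(const char *text) {
    sem_wait(&display_sem);               /* acquire exclusive access               */
    printf("%s\n", text);                 /* ... talk to the real device here ...   */
    sem_post(&display_sem);               /* release for the next task              */
}

/* A task simply calls display_write("hello"); the mutual exclusion is
 * handled by the driver, not by every caller. */
```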
Applications of Counting Semaphores
A counting semaphore is used when a resource can be used by more than one task at the same time
Example:
Managing a buffer pool of 10 buffers (see the sketch below)
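A minimal sketch of the buffer-pool example: a counting semaphore initialized to 10 lets up to 10 tasks hold a buffer at once. Pool size, buffer size, and function names are hypothetical, and a real implementation would also protect the bookkeeping with a mutex.

```c
/* Minimal sketch of a buffer pool guarded by a counting semaphore. */
#include <semaphore.h>
#include <stddef.h>

#define POOL_SIZE 10

static char  pool[POOL_SIZE][128];        /* the 10 buffers                          */
static int   in_use[POOL_SIZE];           /* which buffers are taken                 */
static sem_t free_buffers;                /* counting semaphore, initial value 10    */

void pool_init(void) {
    sem_init(&free_buffers, 0, POOL_SIZE);
}

char *buffer_get(void) {
    sem_wait(&free_buffers);              /* blocks only when all 10 are in use      */
    /* NOTE: the in_use[] bookkeeping below would also need a mutex in real code. */
    for (size_t i = 0; i < POOL_SIZE; i++)
        if (!in_use[i]) { in_use[i] = 1; return pool[i]; }
    return NULL;                          /* not reached if bookkeeping is consistent */
}

void buffer_release(char *buf) {
    for (size_t i = 0; i < POOL_SIZE; i++)
        if (pool[i] == buf) { in_use[i] = 0; break; }
    sem_post(&free_buffers);              /* one more buffer available               */
}
```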
An RTOS needs to provide
Multitasking Capabilities:
An RT application is divided into multiple tasks.
The separation into tasks helps to keep the CPU busy.
Short Interrupt Latency:
Interrupt latency = hardware delay to get the interrupt signal to the processor + time to complete the current instruction + time spent executing system code in preparation for transferring execution to the device's interrupt handler.
An RTOS needs to provide
Fast Context Switch:
The time between the OS recognizing that the awaited event has arrived and the start of the waiting task is called the context switch time (dispatch latency).
This switching time should be minimal.
Control of Memory Management:
The OS should provide a way for a task to lock its code and data into real memory,
so that it can guarantee a predictable response to an interrupt (a POSIX sketch follows below).
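A minimal sketch of locking a task's code and data into real memory using POSIX mlockall(); on a deeply embedded RTOS without virtual memory this is often implicit, so the call here is illustrative.

```c
/* Minimal sketch of locking code and data into physical memory so that
 * page faults cannot delay the task's response to an interrupt. */
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    /* Lock everything currently mapped and anything mapped in the future. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1) {
        perror("mlockall");            /* typically requires appropriate privileges */
        return 1;
    }
    /* ... real-time work runs here with predictable memory access times ... */
    munlockall();
    return 0;
}
```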
An RTOS needs to provide
Proper Scheduling:
The OS must provide facilities to properly schedule time-constrained tasks.
Fine-granularity Timer Services:
Millisecond resolution is the bare minimum.
Microsecond resolution is required in some cases.
Rich set of Inter-Task Communication Mechanisms:
Message queues,
shared memory,
Synchronization: semaphores, event flags