Module 5 RTOS and IDE For Embedded System Design
(18EC62)
MODULE – 5
Dept of ECE,GAT 3
Operating System Basics
• The operating system acts as a bridge between the user
applications/tasks and the underlying system resources
through a set of system functionalities and services.
• The OS manages the system resources and makes them
available to the user applications/tasks on a need basis.
• The primary functions of an operating system are:
• Make the system convenient to use
• Organise and manage the system resources efficiently and
correctly
Operating System Architecture
• The figure gives an insight into the basic components of an
operating system and their interfaces with the rest of the world.
Fig: Operating System Architecture — User Applications; Application
Programming Interface (API); Kernel Services (Memory Management,
Process Management, Time Management); Device Driver Interface;
Underlying Hardware
The Kernel
• The kernel is the core of the operating system and is responsible for
managing the system resources and the communication among the
hardware and other system services.
• Kernel acts as the abstraction layer between system resources and
user applications.
• Kernel contains a set of system libraries and services.
The Kernel (continued)
• For a general purpose OS, the kernel contains different services for
handling the following:
• Process Management
• Primary Memory Management
• File System Management
• I/O System (Device) Management
• Secondary Storage Management
• Protection Systems
• Interrupt Handler
Process Management
• Process management deals with managing the
processes/tasks.
• Process management includes
• Setting up the memory space for the process
• Loading the process's code into the memory
space
• Allocating system resources
• Scheduling and managing the execution of the
process
• Setting up and managing the Process Control
Block (PCB)
• Inter Process Communication
Primary Memory Management
• The term primary memory refers to the volatile memory (RAM)
where processes are loaded and variables and shared data
associated with each process are stored.
• The Memory Management Unit (MMU) of the kernel is responsible
for
• Keeping track of which part of the memory area is currently used
by which process
• Allocating and De-allocating memory space on a need basis
(Dynamic memory allocation)
File System Management
• File is a collection of related information.
• A file could be a program (source code or executable), text files, image files, word
documents, audio/video files, etc.
• The file system management service of Kernel is responsible for
• The creation, deletion and alteration of files
• Creation, deletion and alteration of directories
• Saving of files in the secondary storage memory (e.g. Hard disk storage)
• Providing automatic allocation of file space based on the amount of free space available
• Providing a flexible naming convention for the files
• The various file system management operations are OS dependent.
• For example, the kernel of Microsoft DOS OS supports a specific set of file system
management operations and they are not the same as the file system operations supported
by UNIX Kernel.
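These services are reached from applications through APIs. Below is a minimal, hedged C sketch of the create/alter/delete operations using only standard library calls (the file name "demo.txt" and the function name are made up for the example); the kernel's file system management service performs the actual work behind these calls:

```c
#include <stdio.h>

/* Create a file, write to it, then delete it. Returns 0 on success.
 * fopen() creates the file, fprintf() alters it, remove() deletes it;
 * each call is serviced by the kernel's file system management service. */
int create_and_delete(const char *name)
{
    FILE *fp = fopen(name, "w");   /* creation */
    if (fp == NULL)
        return -1;
    fprintf(fp, "hello\n");        /* alteration */
    fclose(fp);
    return remove(name);           /* deletion; returns 0 on success */
}
```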
I/O System (Device) Management
• Kernel is responsible for routing the I/O requests coming from different user
applications to the appropriate I/O devices of the system.
• In a well-structured OS, direct accessing of I/O devices is not allowed; access
to them is provided through a set of Application Programming Interfaces
(APIs) exposed by the kernel.
• The kernel maintains a list of all the I/O devices of the system.
• May be available in advance or updated dynamically as and when a new device is
installed.
• The service Device Manager of the kernel is responsible for handling all I/O device
related operations.
• The kernel talks to the I/O device through a set of low-level system calls,
which are implemented in a service called device drivers.
• Device Manager is responsible for
• Loading and unloading of device drivers
• Exchanging information and the system specific control signals to and from the device
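One common way a Device Manager achieves this routing is a per-driver table of function pointers. The sketch below is illustrative only — `dev_driver_t`, `serial_driver` and `kernel_write` are made-up names, not a real kernel API, and the 'serial' driver is faked with stdio:

```c
#include <stdio.h>

/* A uniform interface every driver exposes to the Device Manager */
typedef struct {
    const char *name;
    int (*open)(void);
    int (*write)(const char *buf, int len);
    int (*close)(void);
} dev_driver_t;

/* A trivial 'serial' driver backed by stdout, for illustration only */
static int serial_open(void)  { return 0; }
static int serial_close(void) { return 0; }
static int serial_write(const char *buf, int len)
{
    /* A real driver would program the UART hardware here */
    return (int)fwrite(buf, 1, (size_t)len, stdout);
}

dev_driver_t serial_driver = { "serial0", serial_open, serial_write, serial_close };

/* The kernel routes an application's I/O request to the right driver */
int kernel_write(dev_driver_t *dev, const char *buf, int len)
{
    if (dev->open() != 0)
        return -1;
    int n = dev->write(buf, len);
    dev->close();
    return n;
}
```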
Secondary Storage Management
• The secondary storage management deals with managing the secondary
storage memory devices, if any, connected to the system.
• Secondary memory is used as a backup medium for programs and data
since the main memory is volatile.
• In most systems, the secondary storage is kept on disks (Hard
Disk).
• The secondary storage management service of kernel deals with
• Disk storage allocation
• Disk scheduling (Time interval at which the disk is activated to backup
data)
• Free Disk space management
Protection Systems
• Most of the modern operating systems are designed in such a way to support
multiple users with different levels of access permissions.
• E.g. ‘Administrator’, ‘Standard’, ‘Restricted’ permissions in Windows XP.
• Protection deals with implementing the security policies to restrict the access
to both user and system resources by different applications or processes or
users.
• In multiuser supported operating systems, one user may not be allowed to
view or modify the whole or portions of another user's data or profile details.
• In addition, some applications may not be granted permission to make use
of some of the system resources.
• This kind of protection is provided by the protection services running within the
kernel.
Interrupt Handler
• Kernel provides handler mechanism for all external/internal
interrupts generated by the system.
Kernel Space and User Space
• The applications/services are classified into two categories:
• User applications
• Kernel applications
• Kernel Space is the memory space at which the kernel code is
located
• Kernel applications/services are kept in this contiguous area of primary
(working) memory.
• It is protected from the unauthorised access by user
programs/applications.
• User Space is the memory area where user applications are loaded
and executed.
Kernel Space and User Space
(continued)
• The partitioning of memory into kernel and user space is purely OS dependent.
• Some OS implement this kind of partitioning and protection whereas some OS do
not segregate the kernel and user application code storage into two separate areas.
• In an operating system with virtual memory support, the user applications are
loaded into its corresponding virtual memory space with demand paging
technique.
• The entire code for the user application need not be loaded to the main (primary)
memory at once.
• The user application code is split into different pages and these pages are loaded
into and out of the main memory area on a need basis.
• The act of loading the code into and out of the main memory is termed as
'Swapping’.
• Swapping happens between the main (primary) memory and secondary storage
memory.
Monolithic Kernel and Microkernel
• The kernel forms the heart of an operating system.
• Different approaches are adopted for building an Operating System
kernel.
• Based on the kernel design, kernels can be classified into
• Monolithic Kernel
• Microkernel
Monolithic Kernel
• In monolithic kernel architecture, all kernel services run in the kernel
space.
• Here all kernel modules run within the same memory space under a
single kernel thread.
• The tight internal integration of kernel modules in monolithic kernel
architecture allows the effective utilisation of the low-level features of
the underlying system.
• The major drawback of monolithic kernel is that any error or failure in
any one of the kernel modules leads to the crashing of the entire kernel
application.
• LINUX, SOLARIS and MS-DOS kernels are examples of monolithic kernels.
Monolithic Kernel (continued)
• The architecture representation of a monolithic kernel is given in
the figure.
Fig: The Monolithic Kernel Model — Applications running above a single
block of kernel code containing all kernel services
Microkernel
• The microkernel design incorporates only the essential set of Operating
System services into the kernel.
• The rest of the Operating System services are implemented in programs
known as 'Servers', which run in user space.
• This provides a highly modular design and an OS-neutral abstraction to the
kernel.
• Memory management, process management, timer systems and
interrupt handlers are the essential services which form part of the
microkernel.
• Mach, QNX, Minix 3 kernels are examples for microkernel.
Microkernel (continued)
• The architecture representation of a microkernel is shown in the
figure.
Fig: The Microkernel Model — Applications and Services (kernel services
running in user space) above the microkernel
Microkernel (continued)
• Microkernel based design approach offers the following benefits:
• Robustness
• If a problem is encountered in any of the services which run as 'Server'
applications, the same can be reconfigured and re-started without the need for
re-starting the entire OS.
• Thus, this approach is highly useful for systems which demand high 'availability'.
• Since the services which run as 'Servers' are running in a different memory space,
the chances of corruption of kernel services are ideally zero.
• Configurability
• Any service which runs as 'Server' application can be changed without the need to
restart the whole system.
• This makes the system dynamically configurable.
Types of Operating Systems
• Depending on the type of kernel and kernel services, purpose and
type of computing systems where the OS is deployed and the
responsiveness to applications, Operating Systems are classified
into different types.
• General Purpose Operating System (GPOS)
• Real-Time Operating System (RTOS)
General Purpose Operating System
(GPOS)
• The operating systems which are deployed in general computing systems are referred
as General Purpose Operating Systems (GPOS).
• The kernel of such a GPOS is more generalised and it contains all kinds of services
required for executing generic applications.
• General purpose operating systems are often quite non-deterministic in
behaviour.
• Their services can inject random delays into application software and may cause slow
responsiveness of an application at unexpected times.
• GPOS are usually deployed in computing systems where deterministic behaviour is not
an important criterion.
• Personal Computer/Desktop system is a typical example for a system where GPOSs are
deployed.
• Windows XP/MS-DOS etc. are examples for General Purpose Operating Systems.
Real-Time Operating System
(RTOS)
• 'Real-Time' implies deterministic timing behaviour.
• Deterministic timing behaviour in the RTOS context means that the OS services consume
only known and expected amounts of time, regardless of the number of services.
• A Real-Time Operating System or RTOS implements policies and rules concerning
time-critical allocation of a system's resources.
• The RTOS decides which applications should run in which order and how much time needs
to be allocated for each application.
• Predictable performance is the hallmark of a well-designed RTOS.
• This is best achieved by the consistent application of policies and rules.
• Policies guide the design of an RTOS.
• Rules implement those policies and resolve policy conflicts.
• Windows CE, QNX, VxWorks, MicroC/OS-II, etc. are examples of Real-Time Operating
Systems (RTOS).
The Real-Time Kernel
• The kernel of a Real-Time Operating System is referred as the Real-Time kernel.
• The Real-Time kernel is highly specialised and it contains only the minimal set
of services required for running the user applications/tasks.
• The basic functions of a Real-Time kernel are:
• Task/Process Management
• Task/Process Scheduling
• Task/Process Synchronisation
• Error/Exception Handling
• Memory Management
• Interrupt Handling
• Time Management
The Real-Time Kernel
(continued)
• Task/Process Management
• Deals with
• setting up the memory space for the tasks
• loading the task's code into the memory space
• allocating system resources
• setting up a Task Control Block (TCB) for the task
• task/process termination/deletion
• A Task Control Block (TCB) is used for holding the information
corresponding to a task.
The Real-Time Kernel
(continued)
• TCB usually contains the following set of information:
• Task ID: Task Identification Number
• Task State: The current state of the task (e.g. State = 'Ready' for a task which is ready to execute)
• Task Type: Indicates the type of the task. The task can be a hard real-time, soft real-time
or background task.
• Task Priority: Task priority (e.g. Task priority = 1 for task with priority = 1)
• Task Context Pointer: Pointer for context saving
• Task Memory Pointers: Pointers to the code memory, data memory and stack memory for the
task
• Task System Resource Pointers: Pointers to system resources (semaphores, mutex, etc.) used by
the task
• Task Pointers: Pointers to other TCBs (TCBs for preceding, next and waiting tasks)
• Other Parameters: Other relevant task parameters
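The fields listed above can be pictured as a C structure. This is an illustrative layout only, since the actual TCB contents are kernel dependent; `tcb_t` and `tcb_create` are made-up names for the sketch:

```c
typedef enum { TASK_CREATED, TASK_READY, TASK_RUNNING, TASK_BLOCKED } task_state_t;
typedef enum { TASK_HARD_RT, TASK_SOFT_RT, TASK_BACKGROUND } task_type_t;

/* An illustrative Task Control Block layout */
typedef struct tcb {
    int           task_id;      /* Task Identification Number          */
    task_state_t  state;        /* Current state of the task           */
    task_type_t   type;         /* Hard/soft real-time or background   */
    int           priority;     /* Task priority                       */
    void         *context_ptr;  /* Pointer for context saving          */
    void         *code_mem;     /* Pointer to code memory              */
    void         *data_mem;     /* Pointer to data memory              */
    void         *stack_mem;    /* Pointer to stack memory             */
    void         *resources;    /* System resource pointers            */
    struct tcb   *prev, *next;  /* Pointers to other TCBs              */
} tcb_t;

/* Task management creates a TCB when a task is created */
tcb_t tcb_create(int id, int priority)
{
    tcb_t t = {0};
    t.task_id  = id;
    t.priority = priority;
    t.state    = TASK_READY;    /* incepted into memory, awaiting CPU  */
    t.type     = TASK_BACKGROUND;
    return t;
}
```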
The Real-Time Kernel
(continued)
• The parameters and implementation of the TCB are kernel dependent.
• The TCB parameters vary across different kernels, based on the task
management implementation.
• Task management service utilises the TCB of a task in the following
way:
• Creates a TCB for a task on creating the task
• Deletes/removes the TCB of a task when the task is terminated or deleted
• Reads the TCB to get the state of a task
• Updates the TCB with updated parameters on a need basis (e.g. on a context
switch)
• Modifies the TCB to change the priority of the task dynamically
The Real-Time Kernel
(continued)
• Task/Process Scheduling
• Deals with sharing the CPU among various tasks/processes.
• A kernel application called 'Scheduler' handles the task scheduling.
• Scheduler is nothing but an algorithm implementation, which performs the
efficient and optimum scheduling of tasks to provide a deterministic
behaviour.
• Task/Process Synchronisation
• Deals with synchronising the concurrent access of a resource, which is
shared across multiple tasks and the communication between various
tasks.
The Real-Time Kernel
(continued)
• Error/Exception Handling
• Deals with registering and handling the errors that occur and the exceptions
raised during the execution of tasks.
• Insufficient memory, timeouts, deadlocks, deadline missing, bus error,
divide by zero, unknown instruction execution, etc. are examples of
errors/exceptions.
• Errors/Exceptions can happen at the kernel level services or at task
level.
• Deadlock is an example for kernel level exception, whereas timeout is an
example for a task level exception.
• The OS kernel gives the information about the error in the form of a system call (API).
• Watchdog timer is a mechanism for handling the timeouts for tasks.
The Real-Time Kernel
(continued)
• Memory Management
• RTOS makes use of 'block' based memory allocation technique, instead of
the usual dynamic memory allocation techniques used by the GPOS.
• The RTOS kernel uses fixed-size blocks of dynamic memory, and a block is
allocated for a task on a need basis.
• The blocks are stored in a 'Free Buffer Queue'.
• To achieve predictable timing and avoid the timing overheads, most of the
RTOS kernels allow tasks to access any of the memory blocks without any
memory protection.
• RTOS kernels assume that the whole design is proven correct and protection is
unnecessary.
• Some commercial RTOS kernels allow memory protection as optional.
The Real-Time Kernel
(continued)
• A few RTOS kernels implement Virtual Memory concept for memory
allocation if the system supports secondary memory storage (like HDD and
FLASH memory).
• In the 'block' based memory allocation, a block of fixed memory is always
allocated for tasks on need basis and it is taken as a unit.
• Hence, there will not be any memory fragmentation issues.
• The 'block' based memory allocation achieves deterministic behaviour with
the trade-off of a limited choice of memory chunk sizes and suboptimal
memory usage.
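A minimal sketch of the 'block' based scheme with a 'Free Buffer Queue' is given below. The pool size, block size and all names are illustrative, not from any particular RTOS, and the free queue is kept as a simple LIFO stack so that both allocation and de-allocation are O(1) — the property that makes the scheme deterministic:

```c
#include <stddef.h>

#define BLOCK_SIZE 32
#define NUM_BLOCKS 8

static unsigned char pool[NUM_BLOCKS][BLOCK_SIZE]; /* fixed-size blocks    */
static void *free_queue[NUM_BLOCKS];               /* the Free Buffer Queue */
static int free_top = -1;

/* Put every block of the pool on the free queue */
void pool_init(void)
{
    free_top = -1;
    for (int i = 0; i < NUM_BLOCKS; i++)
        free_queue[++free_top] = pool[i];
}

/* O(1) allocation: pop a block; NULL when the pool is exhausted */
void *block_alloc(void)
{
    return (free_top < 0) ? NULL : free_queue[free_top--];
}

/* O(1) de-allocation: push the block back on the free queue */
void block_free(void *blk)
{
    free_queue[++free_top] = blk;
}
```

Because every allocation returns a whole fixed-size block, no external fragmentation can arise, matching the claim above.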
The Real-Time Kernel
(continued)
• Interrupt Handling
• Deals with the handling of various types of interrupts.
• Interrupts provide Real-Time behaviour to systems.
• Interrupts inform the processor that an external device or an associated
task requires immediate attention of the CPU.
• Interrupts can be either Synchronous or Asynchronous.
• Synchronous interrupts:
• Occur in sync with the currently executing task.
• Usually the software interrupts fall under this category.
• Divide by zero, memory segmentation error, etc. are examples of synchronous
interrupts.
• For synchronous interrupts, the interrupt handler runs in the same context of
the interrupting task.
The Real-Time Kernel
(continued)
• Asynchronous interrupts:
• Occur at any point of execution of any task, and are not in sync with the currently
executing task.
• The interrupts generated by external devices (by asserting the interrupt line of the
processor/controller to which the interrupt line of the device is connected) connected to
the processor/controller, timer overflow interrupts, serial data reception/ transmission
interrupts, etc. are examples for asynchronous interrupts.
• For asynchronous interrupts, the interrupt handler is usually written as a separate task
and it runs in a different context.
• Hence, a context switch happens while handling the asynchronous interrupts.
• Priority levels can be assigned to the interrupts and each interrupt can be
enabled or disabled individually.
• Most of the RTOS kernels implement a 'Nested Interrupts' architecture.
• Interrupt nesting allows the pre-emption (interruption) of an Interrupt Service
Routine (ISR), servicing an interrupt, by a high priority interrupt.
The Real-Time Kernel
(continued)
• Time Management
• Accurate time management is essential for providing precise time
reference for all applications.
• The time reference to kernel is provided by a high-resolution Real-Time
Clock (RTC) hardware chip (hardware timer).
• The hardware timer is programmed to interrupt the processor/controller at
a fixed rate.
• This timer interrupt is referred as ‘Timer tick’ and is taken as the timing
reference by the kernel.
• The 'Timer tick' interval may vary depending on the hardware timer.
• Usually the 'Timer tick' varies in the microseconds range.
• The time parameters for tasks are expressed as the multiples of the ‘Timer
tick'.
The Real-Time Kernel
(continued)
• The System time is updated based on the 'Timer tick'.
• If the System time register is 32 bits wide and the 'Timer tick' interval
is 1 microsecond, the System time register will reset in

  2^32 × 10^-6 seconds = (2^32 × 10^-6) / (24 × 60 × 60) Days ≈ 0.0497 Days ≈ 1.19 Hours

• If the 'Timer tick' interval is 1 millisecond, the System time register
will reset in

  2^32 × 10^-3 seconds = (2^32 × 10^-3) / (24 × 60 × 60) Days ≈ 49.7 Days ≈ 50 Days
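These roll-over figures can be checked with a few lines of C (2^32 is written out as 4294967296; the function name is made up for the sketch):

```c
/* Time until a 32-bit System time register overflows, i.e. after
 * 2^32 timer ticks, expressed in days for a given tick interval. */
double rollover_days(double tick_seconds)
{
    const double ticks = 4294967296.0;            /* 2^32 */
    return ticks * tick_seconds / (24.0 * 60.0 * 60.0);
}
/* rollover_days(1e-6) ≈ 0.0497 days (≈ 1.19 hours)
 * rollover_days(1e-3) ≈ 49.7 days */
```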
The Real-Time Kernel
(continued)
• The 'Timer tick' interrupt is handled by the 'Timer Interrupt' handler of kernel.
• The 'Timer tick' interrupt can be utilised for implementing the following actions:
• Save the current context (Context of the currently executing task).
• Increment the System time register by one. Generate timing error and reset the System time register if
the timer tick count is greater than the maximum range available for System time register.
• Update the timers implemented in kernel (Increment or decrement the timer registers for each timer
depending on the count direction setting for each register. Increment registers with count direction
setting = 'count up' and decrement registers with count direction setting = 'count down').
• Activate the periodic tasks, which are in the idle state.
• Invoke the scheduler and schedule the tasks again based on the scheduling algorithm.
• Delete all the terminated tasks and their associated data structures (TCBs).
• Load the context for the first task in the ready queue. Due to the re-scheduling, the task
selected to run next may be different from the task that was preempted by the 'Timer tick' interrupt.
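The tick actions listed above can be sketched in C. Only the System time update is shown concretely; the context-save, timer-update and scheduler steps are left as comments since they are hardware and kernel specific, and all names here are illustrative:

```c
#define MAX_SYSTEM_TIME 0xFFFFFFFFu   /* 32-bit System time register range */

unsigned int system_time = 0;
int timing_error = 0;

/* Sketch of a 'Timer tick' interrupt handler */
void timer_tick_handler(void)
{
    /* 1. Save the context of the currently executing task (not shown) */

    /* 2. Increment the System time register by one; generate a timing
     *    error and reset the register on overflow of its 32-bit range */
    if (system_time == MAX_SYSTEM_TIME) {
        timing_error = 1;
        system_time = 0;
    } else {
        system_time++;
    }

    /* 3. Update kernel timers, activate idle periodic tasks, invoke the
     *    scheduler, delete terminated tasks and their TCBs, and load the
     *    context of the task at the head of the ready queue (not shown) */
}
```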
Hard Real-Time
• Real-Time Operating Systems that strictly adhere to the timing
constraints for a task are referred as 'Hard Real-Time' systems.
• They must meet the deadlines for a task without any slippage.
• Missing any deadline may produce catastrophic results for Hard Real-Time
Systems, including permanent data loss and irrecoverable damage to the
system/users.
• Hard Real-Time systems emphasise the principle ‘A late answer is a
wrong answer’.
• Air bag control systems and Anti-lock Brake Systems (ABS) of vehicles are
typical examples for Hard Real-Time Systems.
• Any delay in the deployment of the air bags makes the life of the passengers under
threat.
Hard Real-Time (continued)
• Hard Real-Time Systems do not implement the virtual memory
model for handling the memory.
• This eliminates the delay in swapping in and out the code
corresponding to the task to and from the primary memory.
• Most of the Hard Real-Time Systems are automatic and do not
contain a Human in the Loop (HITL).
• The presence of human in the loop for tasks introduces unexpected
delays in the task execution.
Soft Real-Time
• Real-Time Operating Systems that do not guarantee meeting deadlines, but offer the best
effort to meet the deadline are referred as 'Soft Real-Time' systems.
• Missing deadlines for tasks are acceptable for a Soft Real-time system if the frequency of
deadline missing is within the compliance limit of the Quality of Service (QoS).
• A Soft Real-Time system emphasises the principle 'A late answer is an acceptable answer, but it
could have been done a bit faster'.
• Soft Real-Time systems most often have a human in the loop (HITL).
• Automated Teller Machine (ATM) is a typical example for Soft-Real-Time System.
• If the ATM takes a few seconds more than the ideal operation time, nothing fatal happens.
Tasks, Process and Threads
Task
• The term 'task' refers to something that needs to be done.
• In the operating system context, a task is defined as the program in
execution and the related information maintained by the operating
system for the program.
• Task is also known as 'Job' in the operating system context.
• A program or part of it in execution is also called a 'Process’.
• The terms 'Task', 'Job' and 'Process' refer to the same entity in the
operating system context and most often they are used interchangeably.
Process
• A 'Process' is a program, or part of it, in execution.
• Process is also known as an instance of a program in execution.
• Multiple instances of the same program can execute
simultaneously.
• A process requires various system resources like CPU for executing
the process; memory for storing the code corresponding to the
process and associated variables, I/O devices for information
exchange, etc.
• A process is sequential in execution.
The Structure of a Process
• The concept of 'Process' leads to concurrent execution (pseudo
parallelism) of tasks and thereby the efficient utilisation of the CPU and
other system resources.
• Concurrent execution is achieved through the sharing of CPU among the
processes.
• A process mimics a processor in properties and holds a set of registers,
process status, a Program Counter (PC) to point to the next executable
instruction of the process, a stack for holding the local variables
associated with the process and the code corresponding to the process.
• This can be visualised as shown in the figure.
The Structure of a Process (continued)
Fig: Structure of a Process — Stack (Stack Pointer), Working Registers,
Status Registers, Program Counter (PC), and the Code memory
corresponding to the Process
The Structure of a Process (continued)
• A process which inherits all the properties of the CPU can be
considered as a virtual processor, awaiting its turn to have its
properties switched into the physical processor.
• When the process gets its turn, its registers and the program
counter register becomes mapped to the physical registers of the
CPU.
The Structure of a Process (continued)
• From a memory perspective, the memory occupied by the process
is segregated into three regions as shown in the figure:
• Stack memory - holds all temporary data such as variables local to
the process
• Data memory - holds all global data for the process
• Code memory - contains the program code (instructions)
corresponding to the process
Fig: Memory Organisation of a Process
Process States and State Transition
• The process traverses through a series of states during its transition from
the newly created state to the terminated state.
• The cycle through which a process changes its state from 'newly created'
to 'execution completed' is known as 'Process Life Cycle’.
• The various states through which a process traverses through during a
Process Life Cycle indicates the current status of the process with respect
to time and also provides information on what it is allowed to do next.
• The transition of a process from one state to another is known as 'State
transition’.
• Figure represents the various states and state transitions associated with
a process.
Fig: Process States and State Transition Representation — Created,
Ready, Running, Blocked, Completed
Process States and State Transition
(continued)
• The state at which a process is being created is referred as 'Created
State’.
• The Operating System recognises a process in the 'Created State' but no resources
are allocated to the process.
• The state, where a process is incepted into the memory and awaiting the
processor time for execution, is known as 'Ready State’.
• At this stage, the process is placed in the 'Ready list' queue maintained by the OS.
• The state wherein the source code instructions corresponding to the
process are being executed is called 'Running State'.
• Running State is the state at which the process execution happens.
Process States and State Transition
(continued)
• 'Blocked State/Wait State' refers to a state where a running process
is temporarily suspended from execution and does not have
immediate access to resources.
• The blocked state might be invoked by various conditions like:
• the process enters a wait state for an event to occur (e.g. Waiting for user inputs
such as keyboard input) or
• waiting for getting access to a shared resource
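The state transitions described above can be encoded as a small validity check. This is a sketch assuming the conventional transitions of the state diagram (Created to Ready, Ready to Running, Running to Ready/Blocked/Completed, Blocked to Ready); the enum and function names are made up:

```c
typedef enum { CREATED, READY, RUNNING, BLOCKED, COMPLETED } pstate_t;

/* Returns 1 if moving from 'from' to 'to' is a legal state transition */
int can_transition(pstate_t from, pstate_t to)
{
    switch (from) {
    case CREATED: return to == READY;                    /* incepted into memory   */
    case READY:   return to == RUNNING;                  /* scheduled on the CPU   */
    case RUNNING: return to == READY || to == BLOCKED    /* preempted / waiting    */
                      || to == COMPLETED;                /* execution finished     */
    case BLOCKED: return to == READY;                    /* awaited event occurred */
    default:      return 0;                              /* COMPLETED is terminal  */
    }
}
```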
Process Management
• Process management deals with
• creation of a process
• setting up the memory space for the process
• loading the process's code into the memory space
• allocating system resources
• setting up a Process Control Block (PCB) for the process
• process termination/deletion
Threads
• A thread is the primitive that can execute code.
• A thread is a single sequential flow of control within a process.
• 'Thread' is also known as light-weight process.
• A process can have many threads of execution.
• Different threads, which are part of a process, share the same address space;
meaning they share the data memory, code memory and heap memory area.
• Threads maintain their own thread status (CPU register values), Program
Counter (PC) and stack.
• The memory model for a process and its associated threads are given in the
figure.
Threads (continued)
Fig: Memory model of a process and its associated threads
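The shared-address-space property can be demonstrated with a small POSIX threads sketch (names are illustrative): both threads update one global counter that lives in the process's shared data memory, while each thread's loop index lives on its own private stack. A mutex guards the shared counter.

```c
#include <pthread.h>

int shared_counter = 0;                       /* in the shared data memory */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000; i++) {          /* 'i' is on this thread's own stack */
        pthread_mutex_lock(&lock);
        shared_counter++;                     /* both threads see the same variable */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Run two threads of the same process and return the shared total */
int run_two_workers(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return shared_counter;
}
```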
The Concept of Multithreading
• A process/task in embedded application may be a complex or
lengthy one and it may contain various suboperations like getting
input from I/O devices connected to the processor, performing
some internal calculations/operations, updating some I/O devices
etc.
• If all the subfunctions of a task are executed in sequence, the CPU
utilisation may not be efficient.
• For example, if the process is waiting for a user input, the CPU enters
the wait state for the event, and the process execution also enters a
wait state.
The Concept of Multithreading
(continued)
• Instead of this single sequential execution of the whole process, if the
task/process is split into different threads carrying out the different
subfunctionalities of the process, the CPU can be effectively utilised and when
the thread corresponding to the I/O operation enters the wait state, other
threads which do not require the I/O event for their operation can be switched
into execution.
• This leads to more speedy execution of the process and the efficient utilisation of
the processor time and resources.
• If the process is split into multiple threads, each of which executes a portion
of the process, there will be a main thread and the rest of the threads will be
created within the main thread.
• The multithreaded architecture of a process can be better visualised with the
thread-process diagram, shown in the figure.
Fig: Thread-Process diagram — a Task/Process comprising shared Code
Memory and Data Memory, with a separate Stack and Registers for each of
Thread 1, Thread 2 and Thread 3; the figure also shows a child thread being
created from the main thread with the Win32 CreateThread() API:
//create child thread 2
CreateThread(NULL, 1000, (LPTHREAD_START_ROUTINE)ChildThread2, NULL, 0, &dwThreadID);
The Concept of Multithreading
(continued)
• Use of multiple threads to execute a process brings the following
advantages:
• Better memory utilisation
• Multiple threads of the same process share the address space for data memory.
• This also reduces the complexity of inter thread communication since variables can be
shared across the threads.
• Speedy execution of the process
• Since the process is split into different threads, when one thread enters a wait state, the
CPU can be utilised by other threads of the process that do not require the event for
which the waiting thread is blocked.
• Efficient CPU utilisation
• The CPU is engaged all the time.
Thread Standards
• Thread standards deal with the different standards available for
thread creation and management.
• These standards are utilised by the operating systems for thread
creation and thread management.
• It is a set of thread class libraries.
• The commonly available thread class libraries are:
• POSIX Threads
• Win32 Threads
• Java Threads
POSIX Threads
• POSIX stands for Portable Operating System Interface.
• The POSIX.4 standard deals with the Real-Time extensions and
POSIX.4a standard deals with thread extensions.
• The POSIX standard library for thread creation and management is
'Pthreads’.
• 'Pthreads' library defines the set of POSIX thread creation and
management functions in 'C' language.
POSIX Threads
(continued)
int pthread_create(pthread_t *new_thread_ID, const pthread_attr_t *attribute,
void *(*start_function)(void *), void *arguments);
• This primitive creates a new thread for running the function start_function.
• Here pthread_t is the handle to the newly created thread and pthread_attr_t is
the data type for holding the thread attributes.
• 'start_function' is the function the thread is going to execute and arguments is
the arguments for 'start_function’.
• On successful creation of a Pthread, pthread_create() associates the Thread
Control Block (TCB) corresponding to the newly created thread to the variable
of type pthread_t (new_thread_ID in our example).
POSIX Threads
(continued)
int pthread_join(pthread_t new_thread, void **thread_status);
• This primitive blocks the current thread and waits until the
completion of the thread pointed by it (new_thread in this example).
• All the POSIX 'thread calls' returns an integer.
• A return value of zero indicates the success of the call.
POSIX Threads - Example
• Write a multithreaded application to print "Hello I'm in main thread" from the
main thread and "Hello I'm in new thread" from the new thread, 5 times each,
using the pthread_create() and pthread_join() POSIX primitives.
//Assumes the application is running on an OS where POSIX library is available
#include <pthread.h>
#include <stdlib.h>
#include <stdio.h>
//******************************************************************************
//New thread function for printing "Hello I'm in new thread"
void *new_thread(void *thread_args)
{
    int i, j;
    for (j = 0; j < 5; j++)
    {
        printf("Hello I'm in new thread\n");
        for (i = 0; i < 10000; i++); //Wait for some time. Do nothing.
    }
    return NULL;
}
//******************************************************************************
//Start of main thread
int main(void)
{
    int i, j;
    pthread_t tcb;
    //Create the new thread for executing the new_thread function
    if (pthread_create(&tcb, NULL, new_thread, NULL))
    {
        //New thread creation failed
        printf("Error in creating new thread\n");
        return -1;
    }
    for (j = 0; j < 5; j++)
    {
        printf("Hello I'm in main thread\n");
        for (i = 0; i < 10000; i++); //Wait for some time. Do nothing.
    }
    if (pthread_join(tcb, NULL))
    {
        //Thread join failed
        printf("Error in joining the new thread\n");
        return -1;
    }
    return 0;
}
POSIX Threads (continued)
• The termination of a thread can happen in different ways:
• Natural termination:
• The thread completes its execution and returns to the main thread through a
simple return or by executing the pthread_exit() call.
• Forced termination:
• This can be achieved by the call pthread_cancel() or through the termination of
the main thread with exit or exec functions.
• pthread_cancel() call is used by a thread to terminate another thread.
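The two termination paths can be illustrated with a short sketch; it assumes a POSIX-compliant system, and the helper names (worker, run_termination_demo) are ours for illustration, not part of the standard.

```c
#include <pthread.h>

/* Worker terminates naturally via pthread_exit(); the exit value is
   collected by the thread that joins it. */
static void *worker(void *arg)
{
    (void)arg;
    pthread_exit((void *)42);   /* equivalent to: return (void *)42; */
}

/* Creates the worker, waits for its natural termination and returns
   the exit status it passed to pthread_exit(). */
static int run_termination_demo(void)
{
    pthread_t t;
    void *status;
    if (pthread_create(&t, NULL, worker, NULL) != 0)
        return -1;                /* thread creation failed          */
    if (pthread_join(t, &status) != 0)
        return -1;                /* join failed                     */
    return (int)(long)status;     /* value the worker exited with    */
}
```

Forced termination would instead have another thread call pthread_cancel(t) before the join.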
Thread Pre-emption
• Thread pre-emption is the act of pre-empting the currently running
thread.
• It means, stopping the currently running thread temporarily.
• Thread pre-emption is performed for sharing the CPU time among
all the threads.
• The execution switching among threads is known as 'Thread context switching'.
• Thread context switching is dependent on the Operating System's scheduler
and the type of the thread.
Types of Threads
• User Level Threads
• User level threads do not have kernel/Operating System support and they exist
solely in the running process.
• Even if a process contains multiple user level threads, the OS treats it as single
thread and will not switch the execution among the different threads of it.
• It is the responsibility of the process to schedule each thread as and when required.
• In summary, user level threads of a process are non-preemptive at thread level
from OS perspective.
• The execution switching (thread context switching) happens only when the
currently executing user level thread is voluntarily blocked.
• Hence, no OS intervention and system calls are involved in the context switching of user level
threads.
• This makes context switching of user level threads very fast.
Types of Threads (continued)
• Kernel Level Threads
• Kernel level threads are individual units of execution, which the OS treats as
separate threads.
• The OS interrupts the execution of the currently running kernel thread and
switches the execution to another kernel thread based on the scheduling
policies implemented by the OS.
• In summary, kernel level threads are pre-emptive.
• Kernel level threads involve lots of kernel overhead and involve system calls
for context switching.
• However, kernel threads maintain a clear layer of abstraction and allow
threads to use system calls independently.
Thread Binding Models
• There are many ways for binding user level threads with system/kernel
level threads.
• Many-to-One Model
• Here, many user level threads are mapped to a single kernel thread.
• In this model, the kernel treats all user level threads as single thread and the
execution switching among the user level threads happens when a currently
executing user level thread voluntarily blocks itself or relinquishes the CPU.
• Solaris Green threads and GNU Portable Threads are examples for this.
• The 'PThread’ example is an illustrative example for application with Many-
to-One thread model.
Thread Binding Models (continued)
• One-to-One Model
• Here, each user level thread is bound to a kernel/system level thread.
• Windows XP/NT/2000 and Linux threads are examples for One-to-One
thread models.
• The modified 'PThread' example is an illustrative example for application
with One-to-One thread model.
• Many-to-Many Model
• In this model, many user level threads are allowed to be mapped to many
kernel threads.
• Windows NT/2000 with the ThreadFiber package is an example of this.
Thread vs. Process

Thread: A thread is a single unit of execution and is part of a process.
Process: A process is a program in execution and contains one or more threads.

Thread: A thread does not have its own data memory and heap memory; it shares the data
memory and heap memory with other threads of the same process.
Process: A process has its own code memory, data memory and stack memory.

Thread: A thread cannot live independently; it lives within the process.
Process: A process contains at least one thread.

Thread: There can be multiple threads in a process; the first thread (main thread) calls
the main function and occupies the start of the stack memory of the process.
Process: Threads within a process share the code, data and heap memory; each thread holds
a separate memory area for its stack (the threads share the total stack memory of the
process).

Thread: Threads are very inexpensive to create.
Process: Processes are very expensive to create and involve a lot of OS overhead.

Thread: Context switching is inexpensive and fast.
Process: Context switching is complex, involves a lot of OS overhead and is comparatively
slower.

Thread: If a thread expires, its stack is reclaimed by the process.
Process: If a process dies, the resources allocated to it are reclaimed by the OS and all
the associated threads of the process also die.
Task Scheduling
• Multitasking involves the execution switching among the different tasks.
• There should be some mechanism in place to share the CPU among the different tasks
and to decide which process/task is to be executed at a given point of time.
• Determining which task/process is to be executed at a given point of time is known as
task/process scheduling.
• Scheduling policies form the guidelines for determining which task is to be executed
when.
• The scheduling policies are implemented in an algorithm and it is run by the kernel as
a service.
• The kernel service/application, which implements the scheduling algorithm, is known
as 'Scheduler'.
Task Scheduling
• Based on the scheduling algorithm used, scheduling can be classified into:
• Non-preemptive Scheduling (co-operative multitasking)
• The currently executing task/process is allowed to run until it terminates or
enters the 'Wait' state waiting for an I/O or system resource.
• Preemptive Scheduling
• The currently executing task/process is preempted (stopped temporarily)
and another task from the 'Ready' queue is selected for execution.
Task Scheduling (continued)
• The process scheduling decision may take place when a process switches its state to:
1. 'Ready' state from 'Running' state
2. 'Blocked/Wait' state from 'Running' state
3. 'Ready' state from 'Blocked/Wait' state
4. 'Completed' state
• A process switches to 'Ready' state from the 'Running' state when it
is preempted.
• Hence, the type of scheduling in scenario 1 is pre-emptive.
Task Scheduling (continued)
• When a high priority process in the 'Blocked/Wait' state completes its I/O and
switches to the 'Ready' state, the scheduler picks it for execution if the
scheduling policy used is priority based preemptive.
• This is indicated by scenario 3.
• In preemptive/non-preemptive multitasking, the process relinquishes the CPU
when it enters the ‘Blocked/Wait' state or the 'Completed' state and switching
of the CPU happens at this stage.
• Scheduling under scenario 2 can be either preemptive or non-preemptive.
• Scheduling under scenario 4 can be preemptive, non-preemptive or co-
operative.
Task Scheduling
(continued)
• The selection of a scheduling criterion/algorithm should consider the
following factors:
• CPU Utilisation:
• The scheduling algorithm should always make the CPU utilisation high.
• CPU utilisation is a direct measure of how much percentage of the CPU is being utilised.
• Throughput:
• This gives an indication of the number of processes executed per unit of time.
• The throughput for a good scheduler should always be as high as possible.
• Turnaround Time (TAT):
• It is the amount of time taken by a process for completing its execution.
• It includes the time spent by the process for waiting for the main memory, time spent in the
ready queue, time spent on completing the I/O operations, and the time spent in execution.
• The turnaround time should be minimal for a good scheduling algorithm.
Task Scheduling
(continued)
• Waiting Time:
• It is the amount of time spent by a process in the 'Ready' queue waiting to get the CPU
time for execution.
• The waiting time should be minimal for a good scheduling algorithm.
• Response Time:
• It is the time elapsed between the submission of a process and the first response.
• For a good scheduling algorithm, the response time should be as low as possible.
To summarise, a good scheduling algorithm has high CPU utilisation, minimum Turn Around
Time (TAT), maximum throughput and least response time.
Task Scheduling (continued)
• The various queues maintained by the OS in association with CPU
scheduling are:
• Job Queue:
• Contains all the processes in the system.
• Ready Queue:
• Contains all the processes, which are ready for execution and waiting for CPU to
get their turn for execution.
• The Ready queue is empty when there is no process ready for running.
• Device Queue:
• Contains the set of processes, which are waiting for an I/O device.
Preemptive Scheduling
• In preemptive scheduling, the scheduler can preempt (stop temporarily) the
currently executing task/process and select another task from the 'Ready'
queue for execution.
• Every task in the 'Ready' queue gets a chance to execute.
• When to pre-empt a task and which task is to be picked up from the 'Ready'
queue for execution after preempting the current task is purely dependent on
the scheduling algorithm.
• A task which is preempted by the scheduler is moved to the 'Ready' queue.
• The act of moving a 'Running' process/task into the 'Ready' queue by the
scheduler, without the process requesting it, is known as 'Preemption'.
Preemptive Scheduling Techniques
• Preemptive scheduling can be implemented using different approaches:
• Time-based preemption
• Priority-based preemption
• The various types of preemptive scheduling adopted in
task/process scheduling are:
• Preemptive Shortest Job First (SJF)/Shortest Remaining Time (SRT) Scheduling
• Round Robin (RR) Scheduling
• Priority Based Scheduling
Preemptive Shortest Job First (SJF)/Shortest
Remaining Time (SRT) Scheduling
• In SJF, the process with the shortest estimated run time is scheduled first, followed by
the next shortest process, and so on.
• The preemptive SJF scheduling algorithm sorts the 'Ready' queue when a new process
enters the 'Ready' queue and checks whether the execution time of the new process
is shorter than the remaining of the total estimated time for the currently executing
process.
• If the execution time of the new process is less, the currently executing process is
preempted and the new process is scheduled for execution.
• Thus preemptive SJF scheduling always compares the execution completion time
(i.e. the remaining time, which for a newly entered process is its full execution
time) of a new process entering the 'Ready' queue with the remaining time for
completion of the currently executing process, and schedules the process with the
shortest remaining time for execution.
• Preemptive SJF scheduling is also known as Shortest Remaining Time (SRT) scheduling .
Preemptive SJF/SRT Scheduling - Example
• Three processes with process IDs P1, P2, P3 with estimated completion time
10, 5, 7 milliseconds respectively enter the ready queue together. A new
process P4 with estimated completion time 2 ms enters the 'Ready' queue
after 2 ms. Assume all the processes contain only CPU operation and no I/O
operations are involved. Calculate the waiting time and Turn Around Time (TAT)
for each process and the average waiting time and Turn Around Time in the
SRT scheduling.
Preemptive SJF/SRT Scheduling – Example (continued)

'Ready' queue at 0 ms: P1 (10 ms), P2 (5 ms), P3 (7 ms)
→ P2 is scheduled
'Ready' queue at 2 ms (P4 enters): P1 (10 ms), P2 (3 ms), P3 (7 ms), P4 (2 ms)
→ P2 is preempted, P4 is scheduled
'Ready' queue at 4 ms (P4 is completed): P1 (10 ms), P2 (3 ms), P3 (7 ms)
→ P2 is scheduled
'Ready' queue at 7 ms (P2 is completed): P1 (10 ms), P3 (7 ms)
→ P3 is scheduled
'Ready' queue at 14 ms (P3 is completed): P1 (10 ms)
→ P1 is scheduled
Preemptive SJF/SRT Scheduling – Example (continued)
• The execution sequence can be written as below:
P2 (0 to 2 ms), P4 (2 to 4 ms), P2 (4 to 7 ms), P3 (7 to 14 ms), P1 (14 to 24 ms)
Preemptive SJF/SRT Scheduling – Example (continued)
• The waiting times for all the processes are given as
• Waiting time for P2 = 0 ms + (4 − 2) ms = 2 ms
• Waiting time for P4 = 0 ms
• Waiting time for P3 = 7 ms
• Waiting time for P1 = 14 ms
• Average waiting time = (2 + 0 + 7 + 14)/4 = 23/4 = 5.75 ms
Preemptive SJF/SRT Scheduling – Example (continued)
• Turn Around Time (TAT) = Time spent in ready queue + Execution time
• Turn Around Time (TAT) for P2 = 2 ms + 5 ms = 7 ms
• Turn Around Time (TAT) for P4 = 0 ms + 2 ms = 2 ms
• Turn Around Time (TAT) for P3 = 7 ms + 7 ms = 14 ms
• Turn Around Time (TAT) for P1 = 14 ms + 10 ms = 24 ms
• Average Turn Around Time = (7 + 2 + 14 + 24)/4 = 47/4 = 11.75 ms
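The schedule above can be cross-checked with a small simulation. This is a hypothetical sketch (the Proc structure and srt() function are ours, not an OS API) that runs SRT one millisecond at a time for the four processes of the example.

```c
/* Sketch: simulates preemptive SJF/SRT scheduling for the worked example
   (P1 = 10 ms, P2 = 5 ms, P3 = 7 ms arriving at 0 ms; P4 = 2 ms at 2 ms). */
typedef struct {
    int arrival;    /* arrival time in ms         */
    int burst;      /* total execution time in ms */
    int remaining;  /* remaining execution time   */
    int finish;     /* completion time            */
} Proc;

/* Runs SRT millisecond by millisecond; fills in the finish times.
   At every tick the ready process with the shortest remaining time runs,
   so a newly arrived shorter process preempts the current one. */
static void srt(Proc p[], int n)
{
    int done = 0, t = 0;
    while (done < n) {
        int sel = -1;
        for (int i = 0; i < n; i++)          /* pick shortest remaining */
            if (p[i].arrival <= t && p[i].remaining > 0 &&
                (sel < 0 || p[i].remaining < p[sel].remaining))
                sel = i;
        p[sel].remaining--;                   /* run selected for 1 ms  */
        t++;
        if (p[sel].remaining == 0) { p[sel].finish = t; done++; }
    }
}
```

With TAT = finish − arrival and waiting time = TAT − burst, the simulation reproduces the waiting times 2, 0, 7 and 14 ms computed above.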
Round Robin (RR)
Scheduling
• In Round Robin scheduling, each process in the 'Ready' queue is executed for a
pre-defined time slot.
• 'Round Robin' conveys the message "Equal chance to all".
• The execution starts with picking up the first process in the 'Ready' queue.
• It is executed for a pre-defined time and when the pre-defined time elapses or
the process completes (before the pre-defined time slice), the next process in
the 'Ready' queue is selected for execution.
• This is repeated for all the processes in the 'Ready' queue.
• Once each process in the 'Ready' queue is executed for the pre-defined time
period, the scheduler comes back and picks the first process in the 'Ready'
queue again for execution.
• The sequence is repeated.
Round Robin (RR) Scheduling (continued)
• The 'Ready' queue can be considered as a circular queue in which the scheduler
picks up the first process for execution and moves to the next process in the
queue on each execution switch.
(Figure: processes arranged in a circular 'Ready' queue with execution switching
between them.)
Round Robin (RR) Scheduling
(continued)
• The time slice is provided by the timer tick feature of the time
management unit of the OS kernel.
• Time slice is kernel dependent and it varies in the order of a few
microseconds to milliseconds.
• Round Robin scheduling ensures that every process gets a fixed amount
of CPU time for execution.
• The order in which each process gets its fixed time slice for execution is
determined by the First Come First Served (FCFS) policy.
• If a process terminates before the elapse of the time slice, the process
releases the CPU voluntarily and the next process in the queue is
scheduled for execution by the scheduler.
Round Robin (RR) Scheduling - Example
• Three processes with process IDs P1, P2, P3 with estimated completion time 6,
4, 2 milliseconds respectively, enter the ready queue together in the order P1,
P2, P3. Calculate the waiting time and Turn Around Time (TAT) for each process
and the Average waiting time and Turn Around Time (Assuming there is no I/O
waiting for the processes) in RR algorithm with Time slice = 2 ms.
Round Robin (RR) Scheduling – Example (continued)

'Ready' queue at 0 ms: P1 (6 ms), P2 (4 ms), P3 (2 ms) → P1 is scheduled
'Ready' queue at 2 ms: P2 (4 ms), P3 (2 ms), P1 (4 ms) → P2 is scheduled
'Ready' queue at 4 ms: P3 (2 ms), P1 (4 ms), P2 (2 ms) → P3 is scheduled
'Ready' queue at 6 ms: P1 (4 ms), P2 (2 ms) → P1 is scheduled
'Ready' queue at 8 ms: P2 (2 ms), P1 (2 ms) → P2 is scheduled
'Ready' queue at 10 ms: P1 (2 ms) → P1 is scheduled

• The execution sequence is: P1 (0 to 2 ms), P2 (2 to 4 ms), P3 (4 to 6 ms),
P1 (6 to 8 ms), P2 (8 to 10 ms), P1 (10 to 12 ms)
• Waiting time for P1 = 0 ms + (6 − 2) ms + (10 − 8) ms = 6 ms
• Waiting time for P2 = 2 ms + (8 − 4) ms = 6 ms
• Waiting time for P3 = 4 ms
• Average waiting time = (6 + 6 + 4)/3 = 16/3 = 5.33 ms
Round Robin (RR) Scheduling – Example (continued)
• Turn Around Time (TAT) = Time spent in ready queue + Execution time
• Turn Around Time (TAT) for P1 = 6 ms + 6 ms = 12 ms
• Turn Around Time (TAT) for P2 = 6 ms + 4 ms = 10 ms
• Turn Around Time (TAT) for P3 = 4 ms + 2 ms = 6 ms
• Average Turn Around Time = (12 + 10 + 6)/3 = 28/3 = 9.33 ms
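The Round Robin figures above can also be verified with a short simulation. This is a sketch under our own naming (round_robin() is not an OS API); it models the circular 'Ready' queue with a 2 ms time slice.

```c
/* Sketch: simulates Round Robin scheduling with a 2 ms time slice for the
   worked example (P1 = 6 ms, P2 = 4 ms, P3 = 2 ms, all arriving at 0 ms). */
#define SLICE 2

/* burst[]: execution times; finish[]: completion time of each process.
   A simple FIFO queue models the circular 'Ready' queue: a process that
   still has work left after its slice is re-queued at the tail. */
static void round_robin(const int burst[], int finish[], int n)
{
    int rem[16], queue[64], head = 0, tail = 0, t = 0;
    for (int i = 0; i < n; i++) { rem[i] = burst[i]; queue[tail++] = i; }
    while (head < tail) {
        int i = queue[head++];                    /* pick front of queue  */
        int run = rem[i] < SLICE ? rem[i] : SLICE;
        t += run;
        rem[i] -= run;
        if (rem[i] > 0) queue[tail++] = i;        /* re-queue if not done */
        else            finish[i] = t;
    }
}
```

Since all processes arrive at 0 ms, TAT equals the finish time and the waiting time is finish − burst, matching the 6, 6 and 4 ms computed above.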
Priority Based
Scheduling
• The Priority Based Preemptive Scheduling ensures that a process with high priority is
serviced at the earliest compared to other low priority processes in the ‘Ready’ queue.
• Any high priority process entering the 'Ready' queue is immediately scheduled for
execution.
• The priority of a task/process can be indicated through various mechanisms.
• While creating the process/task, the priority can be assigned to it.
• The priority number associated with a task/process is the direct indication of its
priority.
• The priority number 0 indicates the highest priority.
• This convention need not be universal and it depends on the kernel level
implementation of the priority structure.
• Whenever a new process enters the ‘Ready’ queue, the scheduler sorts the 'Ready'
queue based on priority and picks the process with the highest level of priority for
execution.
Priority Based Scheduling - Example
• Three processes with process IDs P1, P2, P3 with estimated completion time
10, 5, 7 milliseconds and priorities 1, 3, 2 (0 – highest priority, 3 - lowest
priority) respectively enter the ready queue together. A new process P4 with
estimated completion time 6 ms and priority 0 enters the 'Ready' queue after 5
ms of start of execution of P1. Calculate the waiting time and Turn Around
Time (TAT) for each process and the Average waiting time and Turn Around
Time (Assuming there is no I/O waiting for the processes) in priority based
scheduling algorithm.
'Ready' queue at 0 ms: P1 (10 ms, priority 1), P3 (7 ms, priority 2), P2 (5 ms, priority 3)
→ P1 is scheduled
'Ready' queue at 5 ms (P4 enters): P4 (6 ms, priority 0), P1 (5 ms, priority 1),
P3 (7 ms, priority 2), P2 (5 ms, priority 3)
→ P1 is preempted, P4 is scheduled
'Ready' queue at 11 ms (P4 is completed): P1 (5 ms, priority 1), P3 (7 ms, priority 2),
P2 (5 ms, priority 3)
→ P1 is scheduled
• The execution sequence is: P1 (0 to 5 ms), P4 (5 to 11 ms), P1 (11 to 16 ms),
P3 (16 to 23 ms), P2 (23 to 28 ms)
• Waiting time for P1 = 0 ms + (11 − 5) ms = 6 ms
• Waiting time for P4 = 0 ms
• Waiting time for P3 = 16 ms
• Waiting time for P2 = 23 ms
• Average waiting time = (6 + 0 + 16 + 23)/4 = 45/4 = 11.25 ms
• Turn Around Time (TAT) for P1 = 6 ms + 10 ms = 16 ms
• Turn Around Time (TAT) for P4 = 0 ms + 6 ms = 6 ms
• Turn Around Time (TAT) for P3 = 16 ms + 7 ms = 23 ms
• Turn Around Time (TAT) for P2 = 23 ms + 5 ms = 28 ms
• Average Turn Around Time = (16 + 6 + 23 + 28)/4 = 73/4 = 18.25 ms
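As with the earlier examples, the priority based schedule can be cross-checked in code. This is a sketch (the Task structure and priority_preemptive() function are illustrative, not an OS API) that runs the example tick by tick, with a lower priority number meaning higher priority.

```c
/* Sketch: simulates priority based preemptive scheduling for the worked
   example (P1 = 10 ms/priority 1, P2 = 5 ms/priority 3, P3 = 7 ms/priority 2
   arriving at 0 ms; P4 = 6 ms/priority 0 arriving at 5 ms; 0 = highest). */
typedef struct {
    int arrival, burst, priority, remaining, finish;
} Task;

/* At every millisecond the ready task with the highest priority runs, so a
   newly arrived higher priority task immediately preempts the current one. */
static void priority_preemptive(Task p[], int n)
{
    int done = 0, t = 0;
    while (done < n) {
        int sel = -1;
        for (int i = 0; i < n; i++)      /* pick highest priority ready */
            if (p[i].arrival <= t && p[i].remaining > 0 &&
                (sel < 0 || p[i].priority < p[sel].priority))
                sel = i;
        p[sel].remaining--;               /* run selected for 1 ms      */
        t++;
        if (p[sel].remaining == 0) { p[sel].finish = t; done++; }
    }
}
```

The finish times 11, 16, 23 and 28 ms reproduce the execution sequence and the 11.25 ms / 18.25 ms averages derived above.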
Task Communication
• In message queue based IPC, a process (e.g. Process 1) posts a message to the
message queue and another process (e.g. Process 2) reads it from the queue.
(Fig.: Concept of message queue based indirect messaging for IPC)
• In mailbox based IPC, a task posts a message to the mailbox and the message is
broadcast to the other tasks (e.g. Tasks 2, 3 and 4).
(Fig.: Concept of mailbox based indirect messaging for IPC)
• Both the processes Process A and Process B contain the program statement
counter++;
• The variable 'counter' resides in shared memory (a critical section), and this
single statement compiles into three separate instructions:

Process A                               Process B
mov eax, dword ptr [ebp-4]              mov eax, dword ptr [ebp-4]
add eax, 1                              add eax, 1
mov dword ptr [ebp-4], eax              mov dword ptr [ebp-4], eax
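The danger in this three-instruction sequence can be illustrated with a plain C sketch that interleaves the load/add/store steps of two tasks by hand (no real threads are involved; the function name is ours for illustration).

```c
/* Sketch: simulates the lost-update problem when two tasks both execute
   counter++ as separate load/add/store steps on a shared variable. */
static int lost_update_demo(void)
{
    int counter = 0;

    /* Task A loads counter into its "register" (mov eax, [counter]) */
    int reg_a = counter;

    /* Context switch: Task B performs its complete counter++ */
    int reg_b = counter;   /* load  */
    reg_b = reg_b + 1;     /* add   */
    counter = reg_b;       /* store */

    /* Task A resumes with its stale register value */
    reg_a = reg_a + 1;     /* add   */
    counter = reg_a;       /* store: overwrites Task B's update */

    return counter;        /* 1, although two increments were executed */
}
```

This is why the access to 'counter' must be protected as a critical section (e.g. with a mutex): without mutual exclusion, one of the two increments is silently lost.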
environment.
• A circuit for emulating the target device remains independent of a particular
target system and processor.
• The emulator emulates the target system with extended memory and with code
downloading ability during the edit-test-debug cycles.
• Emulators maintain the original look, feel, and behaviour of the embedded system.
• Even though the cost of developing an emulator is high, it proves to be the more
cost-efficient solution over time.
• Emulators allow software exclusive to one system to be used on another.
• It is more difficult to design emulators and it also requires better hardware than the
original system.
Simulator vs. Emulator
• Simulator is a software application that precisely duplicates (mimics) the
target CPU and simulates the various features and instructions supported by
the target CPU.
• The simulator is a host-based program that imitates the functionality and
instruction set of the target processor.
• In summary, the simulator 'simulates' the target board CPU.
• Emulator is a self-contained hardware device which emulates the target CPU.
• The emulator hardware contains the necessary emulation logic and it is
hooked to the debugging application running on the development PC on one
end and connects to the target board through some interface on the other end.
• In summary, the emulator 'emulates' the target board CPU.