Unit 3 Threads


Threads

What is a Thread?
A thread is a path of execution within a process. A process can contain multiple threads.
A thread is also known as a lightweight process.
The idea is to achieve parallelism by dividing a process into multiple threads.
For example, in a browser, each tab can run as a separate thread. MS Word uses multiple threads: one thread to format the text, another to process input, and so on.
A thread is an execution unit that consists of its own program counter, a stack, and a set of registers. The program counter keeps track of which instruction to execute next, the registers hold its current working variables, and the stack contains its execution history.
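A minimal sketch in Python (not from the slides; the function names are illustrative) showing one process running two threads, each following its own path of execution, mirroring the MS Word example above:

```python
import threading

# Shared list living in the process's common address space.
results = []

def format_text():
    # One thread "formats the text".
    results.append("formatted")

def process_input():
    # Another thread "processes input".
    results.append("input handled")

t1 = threading.Thread(target=format_text)
t2 = threading.Thread(target=process_input)
t1.start(); t2.start()
t1.join(); t2.join()   # wait for both threads to finish

print(sorted(results))
```

Both threads run inside the same process, so they can append to the same `results` list without any copying between address spaces.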
Multithreading
A thread is a path that is followed during a program's execution.
Suppose, for example, a program cannot read keystrokes while making drawings. These tasks cannot be executed by the program at the same time. This problem can be solved through multitasking, so that two or more tasks can be executed simultaneously.
Multitasking is of two types: process-based and thread-based.
Process-based multitasking is managed entirely by the OS; however, multitasking through multithreading can be controlled by the programmer to some extent.
The concept of multithreading requires a proper understanding of two terms: a process and a thread.
A process is a program being executed. A process can be further divided into independent units known as threads.
A thread is like a small, lightweight process within a process; or, put another way, a process is a collection of threads.
Process vs Thread?

Process: has its own Process Control Block (PCB), stack, and address space.
Thread: has its parent's PCB, its own Thread Control Block (TCB) and stack, and shares the common address space.

Process: changes to the parent process do not affect child processes.
Thread: since all threads of the same process share the address space and other resources, any changes to the main thread may affect the behavior of the other threads of the process.

Process: a system call is involved in its creation.
Thread: no system call is involved; it is created using APIs.

Process: processes do not share data with each other.
Thread: threads share data with each other.
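The "threads share data" row can be sketched in Python (an illustrative example, not from the slides): several threads update a single shared counter in the common address space, using a lock to keep the updates consistent:

```python
import threading

counter = 0                 # shared: every thread sees the same variable
lock = threading.Lock()     # protects the shared counter from races

def increment(n):
    global counter
    for _ in range(n):
        with lock:          # only one thread updates at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)   # 40000: all four threads updated the same memory
```

Separate processes would each get their own copy of `counter`; getting the combined total back would require explicit inter-process communication.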
Types of Threads
Threads are implemented in the following two ways:

User Level Threads −
 User-managed threads.
 These are the threads that application programmers use in their programs.

Kernel Level Threads −
 Operating-system-managed threads acting within the kernel.
 Kernel threads are supported within the kernel of the OS itself. All modern OSs support kernel-level threads, allowing the kernel to perform multiple tasks simultaneously or to service multiple kernel system calls simultaneously.
Multithreading Models
User threads must be mapped to kernel threads by one of the following strategies:
Many to One Model
One to One Model
Many to Many Model

Many to One Model
In the many-to-one model, many user-level threads are all mapped onto a single kernel thread.
Thread management is handled by the thread library in user space, which is efficient.

One to One Model
The one-to-one model creates a separate kernel thread to handle each user thread.
Most implementations of this model place a limit on how many threads can be created.
Linux, and Windows from 95 to XP, implement the one-to-one model for threads.

Many to Many Model
The many-to-many model multiplexes any number of user threads onto an equal or smaller number of kernel threads, combining the best features of the one-to-one and many-to-one models.
Users can create any number of threads.
Processes can be split across multiple processors.
Benefits of Multithreading
 1. Responsiveness: If a process is divided into multiple threads, then when one thread completes its execution, its output can be returned immediately.
 2. Faster context switch: Context-switch time between threads is lower than between processes.
 3. Effective utilization of multiprocessor systems: If we have multiple threads in a single process, then we can schedule those threads on multiple processors. This makes process execution faster.
 4. Resource sharing: Resources like code, data, and files can be shared among all threads within a process.
 5. Communication: Communication between multiple threads is easier, as the threads share a common address space.
 6. Enhanced throughput of the system: If a process is divided into multiple threads, and each thread's function is considered one job, then the number of jobs completed per unit of time increases, thus increasing the throughput of the system.
Thread Scheduling
In an OS, a thread refers to the smallest unit of execution within a process.
Thread scheduling can be defined as the process of determining the order and timing of execution for individual threads within a system.
Each thread represents a sequence of instructions that needs to be executed, and the scheduler is responsible for deciding which thread should run next and for how long.
Real-Time OS
A real-time operating system (RTOS) is intended to serve real-time applications that process data without delays.

There are two kinds: a) hard real-time systems and b) soft real-time systems.

a) A hard real-time system treats each timing requirement as a strict deadline, which must not be missed under any circumstances.

b) A soft real-time system is a system whose operation is degraded if results are not produced according to the specified timing requirement.
(Meeting the deadline is not compulsory for every task.)


Priority-based real-time scheduling

 Priority-based scheduling algorithms assign each process a priority based on its importance;
 more important tasks are assigned higher priorities than less important ones.
 If the scheduler also supports preemption, a process currently running on the CPU will be preempted if a higher-priority process becomes available to run.
 A preemptive, priority-based scheduler only guarantees soft real-time functionality.
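The preemption rule can be sketched as a toy dispatcher in Python (illustrative only; the process names are made up): whenever a strictly higher-priority process becomes runnable, it takes the CPU from the currently running one:

```python
def schedule(running, ready):
    """Return the process that should hold the CPU next.

    Each process is a (priority, name) tuple; a larger number means
    higher priority. The running process keeps the CPU unless a ready
    process has higher priority, in which case it is preempted.
    """
    candidates = list(ready)
    if running is not None:
        candidates.append(running)
    return max(candidates, key=lambda p: p[0]) if candidates else None

# A newly runnable real-time process preempts a lower-priority editor.
print(schedule((5, "editor"), [(16, "real-time audio")]))
```

This only decides *which* process runs; it says nothing about deadlines, which is why preemptive priority scheduling alone gives soft, not hard, real-time guarantees.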
Priority-based real-time scheduling

Example
 Linux and Windows operating systems assign real-time processes the highest scheduling priority.
 For example, Windows has 32 different priority levels. The highest levels (priority values 16 to 31) are reserved for real-time processes.
Priority-based scheduling for hard real-time

 Hard real-time systems must guarantee that real-time tasks will be serviced in accordance with their deadline requirements.
 Characteristics of the processes that are to be scheduled:
 1. The processes are considered periodic, i.e., they require the CPU at constant intervals (periods).
Rate-Monotonic Scheduling

 The rate-monotonic scheduling algorithm schedules periodic tasks using a static-priority policy with preemption.
 Rate-monotonic scheduling (RMS) is a priority-assignment algorithm used in real-time operating systems (RTOS) with a static-priority scheduling class.
 Static priorities are assigned according to the cycle duration of the job, so a shorter cycle duration results in a higher job priority.
Rate-Monotonic Scheduling
• The task's period, T, is the amount of time between the arrival of one instance of the task and the arrival of the next instance.
• A task's rate is simply the inverse of its period (in seconds).
• The execution time, C, is the amount of processing time required for each occurrence of the task.
Rate-Monotonic Scheduling

 Rate-monotonic scheduling assumes that the processing time of a periodic process is the same for each CPU burst.

For example:
Consider P1 and P2 with time periods 50 and 100 respectively, and burst times 20 and 35 respectively.
Calculate the CPU utilization of each process and the total CPU utilization.
Solution:
CPU utilization = (burst time / time period) = (Ci/Ti)
For P1: (20/50) = 0.40, i.e., 40%
For P2: (35/100) = 0.35, i.e., 35%
Total CPU utilization is 75%
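The utilization calculation above can be checked in Python. As a sketch, the code also compares the total against the Liu-Layland bound n(2^(1/n) − 1), a standard sufficient schedulability test for RMS that is not stated on the slide:

```python
# Task set from the example: (burst time C, period T) for P1 and P2.
tasks = [(20, 50), (35, 100)]

# Per-task utilization Ci/Ti, summed for the total.
utilization = sum(c / t for c, t in tasks)   # 0.40 + 0.35 = 0.75

# Liu-Layland bound: n*(2**(1/n) - 1), about 0.828 for n = 2.
n = len(tasks)
bound = n * (2 ** (1 / n) - 1)

print(f"Total utilization: {utilization:.0%}")
print("Schedulable under RMS" if utilization <= bound else "May miss deadlines")
```

Since 75% is below the two-task bound of roughly 82.8%, this task set is guaranteed schedulable by RMS.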
Earliest Deadline First Scheduling

 Earliest-deadline-first (EDF) scheduling dynamically assigns priorities according to deadline.
 The earlier the deadline, the higher the priority; the later the deadline, the lower the priority.
 Under the EDF policy, when a process becomes runnable, it must announce its deadline requirements to the system. Priorities may have to be adjusted to reflect the deadline of the newly runnable process.
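A toy EDF dispatcher can be sketched in Python (illustrative; the task names are made up): runnable tasks sit in a min-heap keyed by absolute deadline, so the task with the earliest deadline is always dispatched first:

```python
import heapq

# Ready queue ordered by absolute deadline: (deadline, task name).
ready = []
heapq.heappush(ready, (100, "logger"))
heapq.heappush(ready, (50, "sensor read"))
heapq.heappush(ready, (75, "control loop"))

# Dispatch in EDF order: the heap always yields the earliest deadline.
order = [heapq.heappop(ready)[1] for _ in range(3)]
print(order)   # ['sensor read', 'control loop', 'logger']
```

When a new task becomes runnable it is simply pushed onto the heap, which is how the "priorities may have to be adjusted" step plays out: the ordering is recomputed implicitly on every dispatch.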
