Ch03-Multithread Programming

Chapter 03 discusses multithreaded programming, explaining the concept of threads as lightweight processes that enhance application performance by allowing concurrent execution. It covers the components of threads, types of threads (user-level and kernel-level), advantages and disadvantages of multithreading, threading issues, and thread scheduling. Additionally, it highlights various thread libraries and their functionalities across different operating systems.


Chapter 03:

Multithreaded Programming
Introduction
• A thread is a sequential flow of execution within a
process.
• Threads are used to increase the performance of
the applications.
• Each thread has its own program counter, stack, and
set of registers.
• Threads are also termed lightweight processes as
they share common resources.
• Eg: While playing a movie on a device the audio and
video are controlled by different threads in the
background.
• The below diagram shows the difference between a single-
threaded process and a multithreaded process and the
resources that are shared among threads in a multithreaded
process.
• Components of Thread
A thread has the following three components:
1. Program Counter: Tracks the current instruction
being executed by the thread.
2. Register Set: Holds temporary data and
intermediate results for the thread’s execution.
3. Stack Space: Stores the thread’s local variables
and function-call frames.
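These components can be seen in a short Pthreads sketch (a minimal illustration; the helper name run_components_demo and the variable names are ours): each thread gets its own copy of the stack variable local, while the global shared lives in the process and is visible to every thread.

```c
#include <pthread.h>

int shared = 0;               /* process-wide data: visible to all threads */

static void *worker(void *arg) {
    int local = *(int *)arg;  /* lives on this thread's own stack */
    shared += local;          /* safe here only because main joins between threads */
    return NULL;
}

/* Runs two threads one after the other and returns the shared total. */
int run_components_demo(void) {
    pthread_t t1, t2;
    int a = 1, b = 2;
    pthread_create(&t1, NULL, worker, &a);
    pthread_join(t1, NULL);   /* join before starting t2, so the updates never race */
    pthread_create(&t2, NULL, worker, &b);
    pthread_join(t2, NULL);
    return shared;            /* 1 + 2 = 3 */
}
```

Joining the first thread before creating the second serializes the updates; concurrent updates would need the synchronization discussed later.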
• Why do We Need Threads?
• Threads in the operating system provide multiple
benefits and improve the overall performance of the
system.
• Some of the reasons threads are needed in the
operating system are:
• Since threads use the same data and code, the
operational cost between threads is low.
• Creating and terminating a thread is faster
compared to creating or terminating a process.
• Context switching is faster in threads compared to
processes.
Types of Thread
1. User Level Thread:
• User-level threads are implemented and managed by the
user, and the kernel is not aware of them.
• User-level threads are implemented using user-level
libraries and the OS does not recognize these threads.
• User-level thread is faster to create and manage
compared to kernel-level thread.
• Context switching in user-level threads is faster.
• If one user-level thread performs a blocking operation
then the entire process gets blocked.
• Eg: POSIX threads, Java threads, etc.
2. Kernel level Thread:
• Kernel level threads are implemented and managed
by the OS.
• Kernel level threads are implemented using system
calls and Kernel level threads are recognized by the
OS.
• Kernel-level threads are slower to create and
manage compared to user-level threads.
• Context switching in a kernel-level thread is slower.
• Even if one kernel-level thread performs a blocking
operation, it does not affect other threads.
Eg: Windows, Solaris.
• The below diagram shows the functioning of user-
level threads in userspace and kernel-level threads in
kernel space.
Advantages of Threading
• Threads improve the overall performance of a program.
• Threads increase the responsiveness of the program.
• Context Switching time in threads is faster.
• Threads share the same memory and resources within
a process.
• Communication is faster in threads.
• Threads provide concurrency within a process.
• Enhanced throughput of the system.
• Since different threads can run in parallel, threading
enables greater utilization of multiprocessor
architectures and increases efficiency.
Multithreading
• A multithreading operating system is one that allows
multiple threads to execute concurrently within a
single process.
• The term multithreading combines two ideas: a process and a
thread.
• A process is a program that is being executed.
• A process can be divided into independent units of
execution known as threads; a multithreaded process is a
collection of such threads.
• A thread is a small, lightweight unit of execution
residing inside a process.
How does multithreading work in OS?
• Multithreading divides the work of an application into separate,
individual threads.
• The same task can be carried out by several threads; in other
words, more than one thread is used to perform the work of the
application.
• The code is divided into small, lightweight tasks that place
less load on the CPU and memory.
• The threads are divided into user-level and kernel-level
threads.
• A user-level thread runs independently, without any support
from the kernel.
• Kernel-level threads, on the other hand, are directly
managed by the operating system.
Multithreading Models in OS
• There are 3 types of multithreading models.
1. Many-to-many relationship.
• In this model, the OS multiplexes 'n' user-level threads
onto an equal or smaller number of kernel-level threads.
The diagram represents the many-to-many
relationship thread model.
– Here, 6 user-level threads are multiplexed onto 6 kernel-level
threads. Developers using this model can create as many user
threads as needed, and the corresponding kernel threads can run in
parallel on a multiprocessor machine. This model provides the
best level of concurrency.
2. Many-to-one relationship.
• This model maps many user-level threads to one kernel-level
thread.
• Thread management is done in user space by a thread
library.
• When a thread makes a blocking system call, the entire
process gets blocked.
• Since only one thread can access the kernel at a time,
multiple threads cannot run in parallel on a multiprocessor.
• If user-level threads are implemented on an operating
system that does not support kernel threads, the
many-to-one model is used.
3. One-to-one relationship.
• Each user-level thread has a one-to-one relationship
with a kernel-level thread.
• It is better than the many-to-one model as it
provides more concurrency.
• When a blocking system call is made by a thread, it
allows another thread to execute. It supports
execution of multiple threads in parallel on
a multiprocessor.
Advantages of Multithreading
• Minimize the time of context switching- Context Switching is
used for storing the context or state of a process so that it can
be reloaded when required.
• By using threads, it provides concurrency within a process-
Concurrency is the execution of multiple instruction
sequences at the same time.
• Creating and context switching is economical - Thread
switching is very efficient because it involves switching out
only identities and resources such as the program counter,
registers, and stack pointers.
• Allows greater utilization of multiprocessor architecture
Disadvantages of Multithreading
• A fault in a single thread can disrupt the entire
process, since threads within a process are not isolated.
• The code might become more intricate to
comprehend.
• The costs associated with handling various threads
could be excessive for straightforward tasks.
• Identifying and resolving problems may become
more demanding due to the intricate nature of the
code.
Thread Libraries
• Thread libraries in an operating system provide the
necessary functions and APIs for managing threads
within processes.
• They enable parallel execution, synchronization, and
inter-thread communication.
• Different operating systems and programming
languages offer their own thread libraries.
• Thread libraries may be implemented either as a
user-space library or a kernel-space library.
• Below are some common thread libraries used in
different operating systems:
1. POSIX Threads (Pthreads)
•OS: UNIX-based systems (Linux, macOS, etc.).
•Description: A widely used threading library based on the
POSIX standard.
•It provides APIs for thread creation, management,
synchronization(e.g., mutexes, condition variables), and
other thread-related operations.
•Usage: Used in C/C++ programs.
•Key Functions:
•pthread_create(): Creates a new thread.
•pthread_join(): Waits for a thread to terminate.
•pthread_mutex_lock(), pthread_mutex_unlock(): Synchronization
primitives for mutual exclusion.
•pthread_exit(): Terminates a thread.
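The key functions above can be tied together in a minimal sketch (the helper name run_pthread_demo is ours; compile with -pthread): two threads increment a shared counter, with a mutex guarding the critical section.

```c
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;

static void *add_many(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* enter the critical section */
        counter++;
        pthread_mutex_unlock(&lock);  /* leave the critical section */
    }
    pthread_exit(NULL);               /* explicit termination; returning would also work */
}

/* Creates two threads, waits for both, and returns the final count. */
long run_pthread_demo(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, add_many, NULL);
    pthread_create(&t2, NULL, add_many, NULL);
    pthread_join(t1, NULL);           /* wait for each thread to terminate */
    pthread_join(t2, NULL);
    return counter;                   /* always 200000 thanks to the mutex */
}
```

Without the mutex calls, the two increments could interleave and updates would be lost; this is exactly the race condition discussed under threading issues below.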
2. Windows Threads
•OS: Windows.
•Description: The Windows API provides native threading
capabilities for managing threads in Windows applications.
•Usage: Used in C/C++ applications on Windows.
•Key Functions:
•CreateThread(): Creates a new thread.
•WaitForSingleObject(): Waits for a thread to finish
execution.
•TerminateThread(): Terminates a thread.
•Synchronization: EnterCriticalSection(),
LeaveCriticalSection(), WaitForMultipleObjects().
3. Java Thread Library
•OS: Cross-platform (as it runs on the JVM).
•Description: Java provides built-in threading support
through the java.lang.Thread class
and java.util.concurrent package.
•Usage: Used in Java programs.
•Key Methods:
•Thread.start(): Starts a thread.
•Thread.sleep(): Pauses execution for a specified time.
•synchronized: Synchronizes threads on an object or
block.
4. GNU Portable Threads (GNU Pth)
• OS: Linux and UNIX systems.
• Description: Provides a user-space
implementation of threads. It is lightweight and
used in applications requiring high portability.
• Usage: Primarily in older systems or when POSIX threads
are not available.
Threading issues
• Threading issues occur when multiple threads in a
program interact in unintended ways, often leading to
race conditions, deadlocks, or data corruption.
• Common Threading Issues:
1. Race Conditions
1.Occur when multiple threads access shared data
and modify it simultaneously, leading to
unpredictable results.
2.Example: Two threads incrementing a shared
counter at the same time without synchronization.
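The shared-counter race can be sketched in C (helper names ours): plain++ is a racy read-modify-write across two threads, while atomic_fetch_add from C11's <stdatomic.h> makes each increment indivisible and therefore race-free.

```c
#include <pthread.h>
#include <stdatomic.h>

static long plain = 0;            /* unsynchronized: increments can be lost */
static atomic_long safe = 0;      /* atomic: every increment is observed */

static void *bump(void *arg) {
    for (int i = 0; i < 500000; i++) {
        plain++;                      /* read-modify-write: two threads can interleave here */
        atomic_fetch_add(&safe, 1);   /* single indivisible operation: no race */
    }
    return NULL;
}

/* Runs two racing threads and returns the atomic counter. */
long run_race_demo(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, bump, NULL);
    pthread_create(&t2, NULL, bump, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* 'plain' may end up anywhere up to 1000000; 'safe' is exactly 1000000 */
    return safe;
}
```

The unpredictable value of plain is the "unpredictable results" the definition above refers to; only the atomic (or mutex-protected) counter is deterministic.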
Threading issues
2. Deadlocks
1.Happen when two or more threads are waiting for
resources held by each other, causing an infinite
wait.
2.Example: Thread A locks Resource 1 and waits for
Resource 2, while Thread B locks Resource 2 and
waits for Resource 1.
3.Livelocks
1.Similar to deadlocks, but threads keep changing
states in response to each other without making
progress.
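A standard fix for the deadlock scenario above is to make every thread acquire the resources in the same fixed order, so the circular wait can never form. A minimal Pthreads sketch under that assumption (helper names ours):

```c
#include <pthread.h>

static pthread_mutex_t res1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t res2 = PTHREAD_MUTEX_INITIALIZER;
static int transfers = 0;

/* Both threads take the locks in the SAME order (res1, then res2),
   so neither can hold one resource while waiting for the other
   in the opposite order: the circular wait is impossible. */
static void *ordered_worker(void *arg) {
    for (int i = 0; i < 1000; i++) {
        pthread_mutex_lock(&res1);
        pthread_mutex_lock(&res2);
        transfers++;              /* needs both resources */
        pthread_mutex_unlock(&res2);
        pthread_mutex_unlock(&res1);
    }
    return NULL;
}

/* Runs both threads to completion; returns the total work done. */
int run_ordering_demo(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, ordered_worker, NULL);
    pthread_create(&b, NULL, ordered_worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return transfers;             /* 2000: both threads finished, no deadlock */
}
```

If one thread instead locked res2 before res1, the Thread A / Thread B scenario described above could occur and both joins would hang forever.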
Threading issues
4. Starvation
1.When a thread is perpetually denied access to a
resource because other higher-priority threads keep
accessing it first.
5. Data Inconsistency
1.Occurs when shared variables are modified
without proper synchronization, leading to
inconsistent or corrupt data.
Thread Scheduling
• Scheduling of threads involves two boundaries of
scheduling:
1. Scheduling of user-level threads (ULT) to kernel-level
threads (KLT) via lightweight processes (LWP) by the
application developer.
2. Scheduling of kernel-level threads by the system
scheduler to perform different unique OS functions.
Lightweight Process (LWP)
• Light-weight processes are threads in the user space
that act as an interface for the ULT to access the
physical CPU resources.
• The thread library schedules which thread of a process
runs on which LWP and for how long.
• The number of LWPs created by the thread library
depends on the type of application.
• In the case of an I/O bound application, the number
of LWPs depends on the number of user-level
threads.
• This is because when an LWP is blocked on an I/O operation,
then to invoke the other ULT the thread library needs to
create and schedule another LWP.
• Thus, in an I/O bound application, the number of LWPs is
equal to the number of ULTs.
• In the case of a CPU-bound application, it depends only on
the application. Each LWP is attached to a separate kernel-
level thread.
• In practice, the first boundary of thread scheduling
extends beyond specifying the scheduling policy and the
priority.
• It requires two controls to be specified for the User
level threads: Contention scope, and Allocation
domain.
Contention Scope
• The word contention here refers to the competition or
fight among the User level threads to access the
kernel resources. Thus, this control defines the extent
to which contention takes place. It is defined by the
application developer using the thread library.
Depending upon the extent of contention it is classified as-
• Process Contention Scope (PCS): The contention takes place
among threads within the same process. The thread library
schedules the highest-priority PCS thread to access the
resources via the available LWPs (priority as specified by the
application developer during thread creation).

• System Contention Scope (SCS): The contention takes place
among all threads in the system. In this case, every SCS
thread is associated with its own LWP by the thread library and is
scheduled by the system scheduler to access the kernel
resources. In LINUX and UNIX operating systems, the POSIX
Pthread library provides a function pthread_attr_setscope() to
define the type of contention scope for a thread during its
creation.
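A small sketch of that call (the helper name request_system_scope is ours): the contention scope is set on a thread-attribute object before the thread is created. Note that on Linux, NPTL supports only system scope, so requesting PTHREAD_SCOPE_PROCESS fails with ENOTSUP there.

```c
#include <pthread.h>

/* Request system contention scope (SCS) for future threads and
   report the scope actually recorded in the attribute object. */
int request_system_scope(void) {
    pthread_attr_t attr;
    int scope = -1;
    pthread_attr_init(&attr);
    /* PTHREAD_SCOPE_SYSTEM: compete with all threads in the system (SCS).
       PTHREAD_SCOPE_PROCESS would request PCS, but Linux's NPTL
       returns ENOTSUP for it. */
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
    pthread_attr_getscope(&attr, &scope);   /* read the scope back */
    pthread_attr_destroy(&attr);
    return scope;
}
```

The attribute object would then be passed as the second argument of pthread_create() so that the new thread is scheduled with the chosen scope.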
Allocation Domain
• The allocation domain is a set of one or more
resources for which a thread is competing.
• In a multicore system, there may be one or more
allocation domains where each consists of one or
more cores.
• One ULT can be a part of one or more allocation
domains. Due to the high complexity of dealing with
hardware and software architectural interfaces, this
control is not specified.
• But by default, the multicore system will have an
interface that affects the allocation domain of a
thread.
