
Threads

Overview
• A thread is a basic unit of CPU utilization; it comprises a thread ID, a program
counter, a register set, and a stack, all stored in a Thread Control Block (TCB).
• It shares with other threads belonging to the same process its code section, data
section, and other operating-system resources, such as open files and signals.
• A traditional (or heavyweight) process has a single thread of control.
• If a process has multiple threads of control, it can perform more than one task at
a time.
• The figure below illustrates the difference between a traditional single-threaded
process and a multithreaded process.
Single and Multithreaded Processes
Multithreading
• The ability of an OS to support multiple, concurrent paths of execution within a
single process is called multithreading.
• MS-DOS supports a single user process and a single thread.
• Some UNIX variants support multiple user processes but only one thread per
process.
• The Java run-time environment is a single process with multiple threads.
• Multiple processes and threads are found in Windows, Solaris, and many modern
versions of UNIX.
Motivation
• Most software applications that run on modern computers are multithreaded.
• An application typically is implemented as a separate process with several
threads of control. For example:
• A web browser might have one thread display images or text while another
thread retrieves data from the network.
• A word processor may have a thread for displaying graphics, another thread
for responding to keystrokes from the user, and a third thread for performing
spelling and grammar checking in the background.
• Applications can also be designed to leverage processing capabilities on multicore
systems.
• Such applications can perform several CPU-intensive tasks in parallel across the
multiple computing cores.
Motivation
• In certain situations, a single application may be required to perform several
similar tasks.
• For example, a web server accepts client requests for web pages, images,
sound, and so forth.
• A busy web server may have several (perhaps thousands of) clients
concurrently accessing it.
• If the web server ran as a traditional single-threaded process, it would be able
to service only one client at a time, and a client might have to wait a very long
time for its request to be serviced.
• One solution is to have the server run as a single process that accepts
requests.
• When the server receives a request, it creates a separate process to service
that request.
Motivation
• In fact, this process-creation method was in common use before threads became
popular.
• Process creation is time consuming and resource intensive, however.
• If the new process will perform the same tasks as the existing process, why incur
all that overhead?
• It is generally more efficient to use one process that contains multiple threads.
• If the web-server process is multithreaded, the server will create a separate
thread that listens for client requests.
• When a request is made, rather than creating another process, the server
creates a new thread to service the request and resumes listening for
additional requests.
Motivation
• Finally, most operating-system kernels are now multithreaded.
• Several threads operate in the kernel, and each thread performs a specific task,
such as managing devices, managing memory, or interrupt handling.
• For example, Solaris has a set of threads in the kernel specifically for interrupt
handling;
• Linux uses a kernel thread for managing the amount of free memory in the
system.
Process vs Threads
• Process
• A virtual address space which holds the process image
• Protected access to
• Processors, Other processes, Files, I/O resources
• One or More Threads in Process
• Each thread has
• An execution state (running, ready, etc.)
• Saved thread context when not running
• An execution stack
• Some per-thread static storage for local variables
• Access to the memory and resources of its process (all threads of a process share
this)
Single-Threaded and Multithreaded Process Models
Benefits of Threads
• It takes less time to create a new thread than a process.
• It takes less time to terminate a thread than a process.
• Switching between two threads takes less time than switching between processes.
• Threads can communicate with each other without invoking the kernel.
• Responsiveness.
• Multithreading an interactive application may allow a program to continue
running even if part of it is blocked or is performing a lengthy operation,
thereby increasing responsiveness to the user.
Benefits of Threads
• Resource sharing.
• Processes can only share resources through techniques such as shared
memory and message passing.
• Such techniques must be explicitly arranged by the programmer.
• However, threads share the memory and the resources of the process to
which they belong by default.
• The benefit of sharing code and data is that it allows an application to have
several different threads of activity within the same address space.
Benefits of Threads
• Economy.
• Allocating memory and resources for process creation is costly.
• Because threads share the resources of the process to which they belong, it is
more economical to create and context-switch threads.
• In Solaris, for example, creating a process is about thirty times slower than is
creating a thread, and context switching is about five times slower.
• Scalability.
• The benefits of multithreading can be even greater in a multiprocessor
architecture, where threads may be running in parallel on different processing
cores.
• A single-threaded process can run on only one processor, regardless of how
many are available.
Threads
• Several actions affect all of the threads in a process; the OS must manage
these at the process level.
• Examples:
• Suspending a process involves suspending all threads of the process (in the ULT model).
• Terminating a process terminates all threads within the process.
• Threads have execution states and may synchronize with one another.
• Similar to processes
• We look at these two aspects of thread functionality in turn.
• States
• Synchronization
Thread Execution States
• Key thread states: Ready, Running, Blocked.
• Operations associated with a change in thread state:
• Spawn (another thread)
• Block
• Issue: does blocking a thread block some, or all, of the other threads in the process?
• Unblock
• Finish (the thread)
• Deallocate its register context and stacks
Thread Synchronization
• All the threads of a process share the same address space and other resources
such as open files.
• Any alteration of a resource by one thread affects the environment of the other
threads in the same process
• It is therefore necessary to synchronize the activities of various threads so that
they do not interfere with each other or corrupt data structures.
• Example:
• If two threads each try to add an element to a doubly linked list at the same
time, one element may be lost or the list may end up malformed.
Types of threads
• User Level Thread (ULT)
• Kernel level Thread (KLT) also called:
• kernel-supported threads
• lightweight processes
User-Level Threads
• All thread management is done by the application
• The kernel is not aware of the existence of threads
• Any application can be programmed to be
multithreaded by using threads library
• By default, application begins with single thread of
execution
• At run time application may spawn a new thread
within the process using thread library
User-Level Threads
• The thread library creates a data structure for the new thread and passes control to
one of the threads in the ready state within the process, using a scheduling algorithm.
• Whenever control passes to the thread library, the context of the current thread is
saved; when control passes from the library to a thread, the context of that thread
(user registers, program counter, and stack pointer) is restored.
• The kernel, unaware of this thread activity, continues to schedule the process as a
unit and assigns a single execution state to that process.
Relationships between ULT
Thread and Process States
a) Process B is executing in its thread 2; the state of the process and its two ULTs
is shown in Figure 4.7(a).
b) The application executing in thread 2 makes a system call that blocks B.
• For example, an I/O call is made.
• Process B is now placed in the Blocked state, so a process switch occurs.
• According to the data structure maintained by the thread library, however, thread 2 of
process B is still in the Running state.
c) A clock interrupt passes control to the kernel, and the kernel determines that
the currently running process has exhausted its time slice.
• The kernel places process B in the Ready state and switches to another process.
• But according to the data structure maintained by the thread library, thread 2 of process B
is still in the Running state.
d) Thread 2 has reached a point where it needs some action performed by thread 1
of process B.
• Thread 2 enters a Blocked state and thread 1 transitions from Ready to Running.
• The process itself remains in the Running state.
Advantages of ULTs
• Thread switching does not require kernel-mode privileges (no mode switches).
• Thread scheduling can be application specific.
• ULTs can run on any OS.
Disadvantages of ULTs
• In a typical OS, many system calls are blocking.
• As a result, when a ULT executes a blocking system call, not only is that thread
blocked, but all of the threads within the process are blocked.
• In a pure ULT strategy, a multithreaded application cannot take advantage of
multiprocessing.
Overcoming ULT Disadvantages
• Jacketing: an application-level routine that converts a blocking system call into a
non-blocking system call (for example, checking an I/O device's status before
initiating an I/O request).
• Writing an application as multiple processes rather than multiple threads.
Kernel-Level Threads (KLTs)
• Thread management is done by the kernel.
• The kernel maintains context information for the process as a whole, as well as
for each thread within the process.
• No thread management is done by the application.
• Windows is an example of this approach.
Advantages of KLTs
• The kernel can simultaneously schedule multiple threads from the same process
on multiple processors.
• If one thread in a process is blocked, the kernel can schedule another thread of
the same process.
• Kernel routines themselves can be multithreaded.
Disadvantage of KLTs
• The transfer of control from one thread to another within the same process
requires a mode switch to the kernel.
Combined Approaches
• Thread creation is done in user space.
• The bulk of scheduling and synchronization of threads is handled by the application.
• Multiple ULTs from a single application are mapped onto some number of KLTs.
• Multiple threads of the same application can run in parallel on multiple processors.
• Combines the advantages of both ULTs and KLTs.
• Solaris is an example.
Relationship Between
Threads and Processes

Table 4.2 Relationship between Threads and Processes


Multithreading Models – mapping user-level threads to kernel-level threads
• Many-to-One
• One-to-One
• Many-to-Many

Operating System Concepts


Many-to-One
• Many user-level threads mapped to single kernel thread.
• Used on systems that do not support kernel threads.
• Thread management is done by the thread library in user space, so it is efficient.
• However, the entire process will block if a thread makes a blocking system call.
• Also, because only one thread can access the kernel at a time, multiple threads
are unable to run in parallel on multicore systems.



Many-to-One Model



One-to-One
• Each user-level thread maps to a kernel thread.
• It provides more concurrency than the many-to-one model by allowing another
thread to run when a thread makes a blocking system call.
• It also allows multiple threads to run in parallel on multiprocessors.
• The only drawback to this model is that creating a user thread requires creating
the corresponding kernel thread.
• Because the overhead of creating kernel threads can burden the performance of
an application, most implementations of this model restrict the number of
threads supported by the system.
• Examples: Windows 95/98/NT/2000, OS/2
One-to-one Model



Many-to-Many Model
• Allows many user level threads to be mapped to many kernel threads.
• Allows the operating system to create a sufficient number of kernel threads.
• The number of kernel threads may be specific to either a particular application or
a particular machine (an application may be allocated more kernel threads on a
multiprocessor than on a single processor).
• Solaris 2
• Windows NT/2000 with the ThreadFiber package



Many-to-Many Model



Thread Libraries
• A thread library provides the programmer with an API for creating and managing
threads.
• There are two primary ways of implementing a thread library.
• The first approach is to provide a library entirely in user space with no kernel
support.
• All code and data structures for the library exist in user space.
• This means that invoking a function in the library results in a local function call in
user space and not a system call.
Thread Libraries
• The second approach is to implement a kernel-level library supported directly by
the operating system.
• In this case, code and data structures for the library exist in kernel space.
• Invoking a function in the API for the library typically results in a system call to the
kernel.

• Three main thread libraries are in use today: POSIX Pthreads, Windows, and Java.
Thread Libraries
• Pthreads, the threads extension of the POSIX standard, may be provided as either
a user-level or a kernel-level library.
• The Windows thread library is a kernel-level library available on Windows
systems.
• The Java thread API allows threads to be created and managed directly in Java
programs.
• However, because in most instances the JVM is running on top of a host operating
system, the Java thread API is generally implemented using a thread library
available on the host system.
• This means that on Windows systems, Java threads are typically implemented
using the Windows API; UNIX and Linux systems often use Pthreads.
Thread Libraries – Pthreads in Linux and Unix
Thread Scheduling
• One distinction between user-level and kernel-level threads lies in how they are
scheduled.
• On systems implementing the many-to-one and many-to-many models, the
thread library schedules user-level threads to run on an available lightweight
process (LWP).
• This scheme is known as process contention scope (PCS), since competition for
the CPU takes place among threads belonging to the same process.
• (When we say the thread library schedules user threads onto available LWPs, we
do not mean that the threads are actually running on a CPU. That would require
the operating system to schedule the kernel thread onto a physical CPU.)
Thread Scheduling
• To decide which kernel-level thread to schedule onto a CPU, the kernel uses system-contention
scope (SCS).
• Competition for the CPU with SCS scheduling takes place among all threads in the system.
• Systems using the one-to-one model, such as Windows, Linux, and Solaris, schedule threads
using SCS.
• Typically, PCS is done according to priority—the thread library scheduler selects the runnable
thread with the highest priority to run.
• User-level thread priorities are set by the programmer and are not adjusted by the thread
library, although some thread libraries may allow the programmer to change the priority of a
thread.
• It is important to note that PCS will typically preempt the thread currently running in favor of a
higher-priority thread; however, there is no guarantee of time slicing among threads of equal
priority.
Pthread Scheduling
• Pthreads identifies the following contention scope values:
• PTHREAD_SCOPE_PROCESS schedules threads using PCS scheduling.
• PTHREAD_SCOPE_SYSTEM schedules threads using SCS scheduling.
• On systems implementing the many-to-many model, the
PTHREAD_SCOPE_PROCESS policy schedules user-level threads onto available
LWPs.
• The number of LWPs is maintained by the thread library, perhaps using scheduler
activations.
• The PTHREAD_SCOPE_SYSTEM scheduling policy will create and bind an LWP for
each user-level thread on many-to-many systems, effectively mapping threads
using the one-to-one policy.
Pthread Scheduling
• The Pthreads API provides two functions for getting and setting the contention
scope policy:
• pthread_attr_setscope(pthread_attr_t *attr, int scope)
• pthread_attr_getscope(pthread_attr_t *attr, int *scope)
• The first parameter for both functions contains a pointer to the attribute set for
the thread.
• The second parameter for the pthread_attr_setscope() function is passed either
the PTHREAD_SCOPE_SYSTEM or the PTHREAD_SCOPE_PROCESS value,
indicating how the contention scope is to be set.
• In the case of pthread_attr_getscope(), this second parameter contains a pointer
to an int value that is set to the current value of the contention scope.
• If an error occurs, each of these functions returns a nonzero value.
Sample Pthreads program using contention scope
Threading Issues
• The fork() and exec() System Calls
• Signal Handling
• Thread Cancellation
• Thread-Local Storage
• Scheduler Activations
• One scheme for communication between the user-thread library and the kernel is known
as scheduler activation.
• It works as follows: The kernel provides an application with a set of virtual processors (LWPs),
and the application can schedule user threads onto an available virtual processor.
• Furthermore, the kernel must inform an application about certain events.
