
Chapter 4: Threads & Concurrency

Operating System Concepts – 10th Edition Silberschatz, Galvin and Gagne ©2018
Chapter 4: Threads
 Overview
 Multicore Programming
 Multithreading Models
 Thread Libraries
 Threading Issues
 Operating System Examples

Operating System Concepts – 10th Edition 4.2 Silberschatz, Galvin and Gagne ©2018
Objectives
 Identify the basic components of a thread, and contrast threads
and processes
 Describe the benefits and challenges of designing
multithreaded applications
 Describe how the Windows and Linux operating systems
represent threads
 Design multithreaded applications using the Pthreads API

Operating System Concepts – 10th Edition 4.3 Silberschatz, Galvin and Gagne ©2018
Overview
 A thread is a basic unit of CPU utilization; it comprises:
 thread ID
 program counter (PC)
 register set
 stack
 A thread shares with other threads belonging to the same
process:
 its code section
 data section
 other operating-system resources, such as open files and
signals.

Operating System Concepts – 10th Edition 4.4 Silberschatz, Galvin and Gagne ©2018
Single and Multithreaded Processes
 A traditional process has a single thread of control.
 If a process has multiple threads of control, it can perform more than
one task at a time.

Operating System Concepts – 10th Edition 4.5 Silberschatz, Galvin and Gagne ©2018
Motivation

 Most modern applications are multithreaded
 An application is typically implemented as a separate process
with several threads of control
 Some examples of multithreaded applications:
 Photo application: a separate thread generates a thumbnail from
each image
 Web browser: one thread displays images or text, another
retrieves data from the network
 Word processor: A thread for displaying graphics, another
for responding to input, and a third thread for spell
checking in the background.
 Process creation is heavy-weight while thread creation is
light-weight
 Can simplify code, increase efficiency
 Kernels are generally multithreaded

Operating System Concepts – 10th Edition 4.6 Silberschatz, Galvin and Gagne ©2018
Motivation
 A busy web server may have several (perhaps thousands of)
clients concurrently accessing it.
 A single-threaded process would be able to service only one
client at a time
 One solution (old): create a separate process to service each
request. Process creation is time consuming and resource
intensive
 If the web-server process is multithreaded, it is more efficient
to create a new thread to service a new request and then
resume listening

Operating System Concepts – 10th Edition 4.7 Silberschatz, Galvin and Gagne ©2018
Benefits

 Responsiveness – may allow continued execution if part of
process is blocked, especially important for user interfaces
 Resource Sharing – threads share resources of process, easier
than shared memory or message passing
 Economy – cheaper than process creation, thread switching
lower overhead than context switching
 Scalability – process can take advantage of multicore
architectures as threads may be running in parallel on different
processing cores

Operating System Concepts – 10th Edition 4.8 Silberschatz, Galvin and Gagne ©2018
Multicore Programming Challenges

 Identifying tasks: This involves examining applications to find
areas that can be divided into separate, concurrent tasks.
 Balance: Ensure tasks perform equal work of equal value. In
some instances, a certain task may not contribute as much value
to the overall process as other tasks. Using a separate execution
core to run that task may not be worth the cost.
 Data splitting. The data accessed and manipulated by the tasks
must be divided to run on separate cores.
 Data dependency. The data accessed by the tasks must be
examined for dependencies between two or more tasks. When
one task depends on data from another, programmers must
ensure that the execution of the tasks is synchronized to
accommodate the data dependency.
 Testing and Debugging. When a program is running in parallel
on multiple cores, many different execution paths are possible.
Testing and debugging such concurrent programs is more difficult
than testing and debugging single-threaded applications.

Operating System Concepts – 10th Edition 4.9 Silberschatz, Galvin and Gagne ©2018
Concurrency and Parallelism

 A concurrent system supports more than one task by allowing all the
tasks to make progress.
 With a single computing core, concurrency merely means that the
execution of the threads will be interleaved over time, because the
processing core is capable of executing only one thread at a time.
 On a system with multiple cores, concurrency means that some
threads can run in parallel.

 A parallel system can perform more than one task simultaneously.
 Thus, it is possible to have concurrency without parallelism.

Operating System Concepts – 10th Edition 4.10 Silberschatz, Galvin and Gagne ©2018
Concurrency vs. Parallelism
 Concurrent execution on single-core system:

 Parallelism on a multi-core system:

Operating System Concepts – 10th Edition 4.11 Silberschatz, Galvin and Gagne ©2018
Multicore Programming
 Types of parallelism
 Data parallelism – distributes subsets of the same data
across multiple cores, same operation on each (illustrated in the sketch below)
 Task parallelism – distributing tasks (threads) across cores,
each thread performing unique operation
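
As an illustration of data parallelism (not part of the original slide), a minimal Pthreads sketch in which each thread applies the same operation, summing, to its own half of an array; the names and sizes are illustrative:

    /* Data parallelism: every thread runs the same operation (summing)
       on its own subset of the data. Compile with: gcc -pthread */
    #include <pthread.h>
    #include <stdio.h>

    #define N 1000
    #define NUM_THREADS 2

    int data[N];
    long partial[NUM_THREADS];

    void *sum_range(void *arg) {
        long id = (long)arg;
        int chunk = N / NUM_THREADS;
        int start = id * chunk;
        int end = (id == NUM_THREADS - 1) ? N : start + chunk;
        for (int i = start; i < end; i++)
            partial[id] += data[i];
        return NULL;
    }

    int main(void) {
        pthread_t tid[NUM_THREADS];
        long total = 0;
        for (int i = 0; i < N; i++) data[i] = 1;        /* sample data */
        for (long t = 0; t < NUM_THREADS; t++)
            pthread_create(&tid[t], NULL, sum_range, (void *)t);
        for (long t = 0; t < NUM_THREADS; t++) {
            pthread_join(tid[t], NULL);
            total += partial[t];                        /* combine results */
        }
        printf("total = %ld\n", total);
        return 0;
    }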

Operating System Concepts – 10th Edition 4.12 Silberschatz, Galvin and Gagne ©2018
Amdahl’s Law
 Identifies performance gains from adding additional cores to an
application that has both serial and parallel components

 The maximum speedup is bounded by:
      speedup ≤ 1 / (S + (1 - S) / N)
• S: the portion of the application that must be performed serially
• N: the number of processing cores

 That is, if an application is 75% parallel / 25% serial, moving from 1 to 2
cores results in a speedup of 1 / (0.25 + 0.75/2) = 1.6 times
 As N approaches infinity, speedup approaches 1 / S
 The fundamental principle behind Amdahl’s Law is that the serial portion
of an application has a disproportionate effect on the performance gained
by adding additional cores
 Amdahl's law applies only when the dataset (problem size) is fixed
 In practice, added computing resources are used on larger datasets and
the time spent in the parallelizable part grows much faster than the
inherently serial work.

Operating System Concepts – 10th Edition 4.13 Silberschatz, Galvin and Gagne ©2018
Amdahl’s Law

Operating System Concepts – 10th Edition 4.14 Silberschatz, Galvin and Gagne ©2018
Amdahl’s Law

Operating System Concepts – 10th Edition 4.15 Silberschatz, Galvin and Gagne ©2018
Amdahl’s Law
 A task with two independent parts, A and B.
 Part B takes 25% of the time of the whole computation
 Making part B 5 times faster increases the speedup of the whole
computation only slightly
 Making part A 2 times faster speeds up the whole computation
more than optimizing part B does, as the arithmetic below shows
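
A quick check of the arithmetic (assuming part A accounts for the remaining 75% of the running time): speeding up B by 5 times leaves 0.75 + 0.25/5 = 0.80 of the original time, a speedup of only 1.25, while speeding up A by 2 times leaves 0.75/2 + 0.25 = 0.625 of the original time, a speedup of 1.6.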

Operating System Concepts – 10th Edition 4.16 Silberschatz, Galvin and Gagne ©2018
User Threads and Kernel Threads

 User threads - supported above the kernel and
managed without kernel support
 Kernel threads - supported and managed directly by the
operating system. Supported by almost all general-purpose operating
systems, including:
 Windows
 Linux
 Mac OS X
 iOS
 Android

Operating System Concepts – 10th Edition 4.17 Silberschatz, Galvin and Gagne ©2018
Multithreading Models

 Many-to-One

 One-to-One

 Many-to-Many

Operating System Concepts – 10th Edition 4.18 Silberschatz, Galvin and Gagne ©2018
Many-to-One

 Many user-level threads mapped to a single kernel thread
 Thread management is done by the thread library in user space
 One thread blocking causes all to block
 Multiple threads may not run in parallel on a multicore system because only
one may be in the kernel at a time
 Very few systems still use this model because of its inability to take
advantage of multiple processing cores

Operating System Concepts – 10th Edition 4.19 Silberschatz, Galvin and Gagne ©2018
One-to-One
 Each user-level thread maps to a kernel thread
 Creating a user-level thread creates a kernel thread
 More concurrency than many-to-one, another thread can run when a
thread makes a blocking system call.
 Drawback: creating a user thread requires creating the
corresponding kernel thread. A large number of kernel threads may
burden the performance of a system
 With the increasing number of processing cores, most operating
systems now use the one-to-one model
 Examples
 Windows
 Linux

Operating System Concepts – 10th Edition 4.20 Silberschatz, Galvin and Gagne ©2018
Many-to-Many Model
 This model multiplexes many user-
level threads to a smaller or equal
number of kernel threads
 Operating system can create a
sufficient number of kernel threads
 Not very common: with an
increasing number of processing
cores, limiting the number of kernel
threads has become less important.
 Difficult to implement.

 A variant of this model, referred to as the
two-level model, also allows a user
thread to be bound to a kernel thread

Operating System Concepts – 10th Edition 4.21 Silberschatz, Galvin and Gagne ©2018
Thread Libraries

 Thread library provides programmer with API for creating
and managing threads
 Two primary ways of implementing
 Library entirely in user space with no kernel support.
Code and data structures for the library exist in user
space. Invoking a function in the library results in a
local function call in user space and not a system call.
 Kernel-level library supported by the OS. Code and
data structures for the library exist in kernel space.
Invoking a function in the API for the library typically
results in a system call to the kernel.

Operating System Concepts – 10th Edition 4.22 Silberschatz, Galvin and Gagne ©2018
Main Thread Libraries

 POSIX Pthreads: refers to the POSIX standard (IEEE
1003.1c) defining an API for thread creation and
synchronization.
 This is a specification for thread behavior, not an
implementation.
 May be provided as either a user-level or a kernel-level library.
 Typically used in UNIX, Linux, and macOS
 Windows: kernel-level library
 Java: allows threads to be created and managed directly in
Java programs.
 JVM runs on top of a host OS, therefore, the Java
thread API is generally implemented using a thread
library available on the host system.

Operating System Concepts – 10th Edition 4.23 Silberschatz, Galvin and Gagne ©2018
Multiple Threading Strategies

 Asynchronous threading:
 Once the parent creates a child thread, the parent resumes its
execution, so that the parent and child execute concurrently and
independently of one another.
 Because the threads are independent, there is typically little data
sharing between them.
 Commonly used for designing responsive user interfaces
 Synchronous threading:
 The parent thread creates one or more children and then must
wait for all of its children to terminate before it resumes
 Threads created by the parent perform work concurrently
 Finished thread terminates and joins with its parent
 Significant data sharing among threads

Operating System Concepts – 10th Edition 4.24 Silberschatz, Galvin and Gagne ©2018
Pthreads Example
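
The slide's code appears as a figure in the original deck and did not survive extraction. A minimal sketch along the lines of the textbook's Pthreads summation program (covering this slide and the continuation on the next one); identifiers follow the book's example:

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    int sum;                      /* data shared by the threads */
    void *runner(void *param);    /* thread entry point */

    int main(int argc, char *argv[]) {
        pthread_t tid;            /* thread identifier */
        pthread_attr_t attr;      /* set of thread attributes */

        if (argc != 2) {
            fprintf(stderr, "usage: a.out <integer value>\n");
            return 1;
        }
        pthread_attr_init(&attr);                      /* default attributes */
        pthread_create(&tid, &attr, runner, argv[1]);  /* create the thread */
        pthread_join(tid, NULL);                       /* wait for it to exit */
        printf("sum = %d\n", sum);
        return 0;
    }

    /* The thread computes the sum of the integers 1..upper. */
    void *runner(void *param) {
        int upper = atoi(param);
        sum = 0;
        for (int i = 1; i <= upper; i++)
            sum += i;
        pthread_exit(0);
    }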

Operating System Concepts – 10th Edition 4.25 Silberschatz, Galvin and Gagne ©2018
Pthreads Example (cont)

Operating System Concepts – 10th Edition 4.26 Silberschatz, Galvin and Gagne ©2018
Pthreads Code for Joining 10 Threads
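
The code figure is missing from this copy; the fragment it shows is essentially the loop below, assuming the workers array has already been filled in by earlier calls to pthread_create (NUM_THREADS and workers follow the textbook's naming):

    #define NUM_THREADS 10

    /* an array of threads to be joined upon */
    pthread_t workers[NUM_THREADS];

    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(workers[i], NULL);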


Operating System Concepts – 10th Edition 4.27 Silberschatz, Galvin and Gagne ©2018
Some Threading Issues
 Semantics of fork() and exec() system calls
 Signal handling
 Synchronous and asynchronous
 Thread cancellation of target thread
 Asynchronous or deferred
 Thread-local storage

Operating System Concepts – 10th Edition 4.28 Silberschatz, Galvin and Gagne ©2018
Semantics of fork() and exec()

 Does fork() duplicate only the calling thread or all
threads?
 Some UNIX systems have two versions of fork(): one that
duplicates all threads and one that duplicates only the thread
that invoked fork()
 exec() usually works as normal – if a thread invokes the
exec() system call, the program specified in the parameter
to exec() will replace the entire process—including all
threads.

Operating System Concepts – 10th Edition 4.29 Silberschatz, Galvin and Gagne ©2018
Signals
 Signals are used in UNIX systems to notify a process that a particular
event has occurred. Signals may be:
 Synchronous signals (like illegal memory access and division by 0)
are delivered to the same process that performed the operation
that caused the signal.
 Asynchronous signals are generated by an event external to the
running process. Examples include terminating a process with
(<control><C>) and having a timer expire.

Operating System Concepts – 10th Edition 4.30 Silberschatz, Galvin and Gagne ©2018
Signal Handling
All signals (synchronous or asynchronous) follow this pattern:
1. A signal is generated by the occurrence of a particular event.
2. The signal is delivered to a process.
3. Once delivered, the signal must be handled by a signal handler. Two
signal handlers:
1. Default: every signal has a default handler that the kernel runs
when handling the signal
2. User-defined: a user-defined signal handler can override the
default (see the example below)
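
As a concrete illustration (not on the original slide), a minimal C program that installs a user-defined handler for SIGINT, overriding the default action; the handler name is illustrative:

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* user-defined handler that overrides the default action for SIGINT */
    static void handle_sigint(int sig) {
        const char msg[] = "caught SIGINT\n";
        write(STDOUT_FILENO, msg, sizeof(msg) - 1);  /* async-signal-safe */
    }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = handle_sigint;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGINT, &sa, NULL);   /* install the handler */

        pause();                        /* wait until a signal is delivered */
        printf("signal was handled; exiting\n");
        return 0;
    }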

Operating System Concepts – 10th Edition 4.31 Silberschatz, Galvin and Gagne ©2018
Signal Handling (Cont.)
 For single-threaded, signal delivered to process
 Where should a signal be delivered for multi-threaded?
 Deliver the signal to the thread to which the signal applies
 Deliver the signal to every thread in the process
 Deliver the signal to certain threads in the process
 Assign a specific thread to receive all signals for the process

 Synchronous signals need to be delivered to the thread causing the


signal and not to other threads in the process.
 Not as clear for asynchronous signals. Some asynchronous signals—
such as a signal that terminates a process (<control><C>, for example)
—should be sent to all threads.

Operating System Concepts – 10th Edition 4.32 Silberschatz, Galvin and Gagne ©2018
Thread Cancellation
 Terminating a thread before it has finished. For example, if multiple threads
are concurrently searching a database and one thread returns the result, the
remaining threads might be cancelled.
 Thread to be canceled is target thread
 Two general approaches:
 Asynchronous cancellation terminates the target thread immediately
 Deferred cancellation allows the target thread to periodically check if it
should be cancelled
 Pthread code to create and cancel a thread:
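
The code figure is missing from this copy; the calls it illustrates are roughly the following fragment, where worker is an illustrative thread entry point:

    pthread_t tid;

    /* create the thread */
    pthread_create(&tid, NULL, worker, NULL);

    /* . . . */

    /* cancel the thread */
    pthread_cancel(tid);

    /* wait for the thread to terminate */
    pthread_join(tid, NULL);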

Operating System Concepts – 10th Edition 4.33 Silberschatz, Galvin and Gagne ©2018
Operating System Examples

 Windows Threads
 Linux Threads

Operating System Concepts – 10th Edition 4.34 Silberschatz, Galvin and Gagne ©2018
Windows Threads
 Windows API – primary API for Windows applications (see the sketch below)
 Implements the one-to-one mapping
 Each thread contains
 A thread id
 Register set representing state of processor
 A program counter
 Separate user and kernel stacks for when thread runs in
user mode or kernel mode
 Private data storage area used by run-time libraries and
dynamic link libraries (DLLs)
 The register set, stacks, and private storage area are known as
the context of the thread
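
For reference (not shown on the slide), a minimal sketch of creating and waiting for a thread with the Windows API, along the lines of the textbook's summation example:

    #include <windows.h>
    #include <stdio.h>

    DWORD Sum;   /* data shared by the threads */

    /* thread entry point: computes the sum of 1..Upper */
    DWORD WINAPI Summation(LPVOID Param) {
        DWORD Upper = *(DWORD *)Param;
        for (DWORD i = 1; i <= Upper; i++)
            Sum += i;
        return 0;
    }

    int main(void) {
        DWORD ThreadId;
        HANDLE ThreadHandle;
        DWORD Param = 5;

        /* create the thread with default security and stack size */
        ThreadHandle = CreateThread(NULL, 0, Summation, &Param, 0, &ThreadId);
        if (ThreadHandle != NULL) {
            WaitForSingleObject(ThreadHandle, INFINITE);  /* wait for it to finish */
            CloseHandle(ThreadHandle);
            printf("sum = %lu\n", Sum);
        }
        return 0;
    }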

Operating System Concepts – 10th Edition 4.35 Silberschatz, Galvin and Gagne ©2018
Windows Threads (Cont.)

 The primary data structures of a thread include:
 ETHREAD (executive thread block) – includes the address of
the routine in which the thread starts control, a pointer to the
process to which the thread belongs, and a pointer to the
KTHREAD; in kernel space
 KTHREAD (kernel thread block) – scheduling and
synchronization info, kernel-mode stack, pointer to TEB, in
kernel space
 TEB (thread environment block) – thread id, user-mode
stack, thread-local storage, in user space
 The ETHREAD and the KTHREAD exist entirely in kernel
space; this means that only the kernel can access them. The
TEB is a user-space data structure that is accessed when the
thread is running in user mode.

Operating System Concepts – 10th Edition 4.36 Silberschatz, Galvin and Gagne ©2018
Windows Threads Data Structures

Operating System Concepts – 10th Edition 4.37 Silberschatz, Galvin and Gagne ©2018
Linux Threads
 Linux refers to them as tasks rather than threads
 Thread creation is done through clone() system call
 clone() allows a child task to share the address space of the
parent task (process)
 Flags control behavior (see the sketch below)

 struct task_struct points to process data structures
(shared or unique)
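
A minimal sketch of calling the glibc clone() wrapper; the flag combination and stack size here are illustrative, and a thread library would additionally pass flags such as CLONE_SIGHAND and CLONE_THREAD:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define STACK_SIZE (1024 * 1024)

    /* entry point for the new task */
    static int worker(void *arg) {
        printf("child task: pid=%d\n", getpid());
        return 0;
    }

    int main(void) {
        char *stack = malloc(STACK_SIZE);

        /* CLONE_VM | CLONE_FS | CLONE_FILES asks the kernel to share the
           address space, filesystem information, and open files with the
           parent; SIGCHLD lets the parent wait for the child. */
        pid_t pid = clone(worker, stack + STACK_SIZE,
                          CLONE_VM | CLONE_FS | CLONE_FILES | SIGCHLD, NULL);
        if (pid == -1) { perror("clone"); return 1; }

        waitpid(pid, NULL, 0);   /* wait for the child task to finish */
        free(stack);
        return 0;
    }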

Operating System Concepts – 10th Edition 4.38 Silberschatz, Galvin and Gagne ©2018
