
Tishk International University

Faculty of Engineering (Computer Engineering)


Operating Systems

Threads
Name: Abdullah Dara
Overview:
•A thread is a basic unit of CPU utilization, consisting of a program counter, a
stack, a set of registers, and a thread ID.
•Traditional (heavyweight) processes have a single thread of control: there is
one program counter and one sequence of instructions that can be carried out at
any given time.
•Multi-threaded applications have multiple threads within a single process, each
with its own program counter, stack, and set of registers, but sharing
common code, data, and certain structures such as open files.
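The sharing described above can be sketched with Python's threading module. The names (worker, shared_counter) are illustrative, not part of any standard API: all four threads update the same global variable because they live in one address space, while each runs with its own stack and program counter.

```python
import threading

# Shared data: every thread in the process sees the same global,
# while each thread executes on its own stack.
shared_counter = 0
counter_lock = threading.Lock()

def worker(increments):
    global shared_counter
    for _ in range(increments):
        with counter_lock:            # guard the shared variable
            shared_counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared_counter)  # 4000: all four threads updated the same data
```

Without the lock, the increments from different threads could interleave and some updates would be lost, which previews the synchronization issues discussed later.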

Motivation:
•Threads are very useful in modern programming whenever a process has
multiple tasks to perform independently of the others.
•This is particularly true when one of the tasks may block, and it is desired to
allow the other tasks to proceed without blocking.
•For example, in a word processor, a background thread may check spelling
and grammar while a foreground thread processes user input (keystrokes),
while yet a third thread loads images from the hard drive, and a fourth does
periodic automatic backups of the file being edited.
•Another example is a web server - Multiple threads allow for multiple requests
to be satisfied simultaneously, without having to service requests sequentially
or to fork off separate processes for every incoming request. (The latter is how
this sort of thing was done before the concept of threads was developed. A
daemon would listen at a port, fork off a child for every incoming request to be
processed, and then go back to listening to the port.)
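The web-server pattern above can be sketched as thread-per-request. This is a hedged simulation, not a real network listener: handle_request and the canned responses are illustrative, standing in for a daemon that accepts connections on a port and hands each one to its own thread.

```python
import threading
import queue

# Each incoming "request" is served in its own thread, so a slow
# request never forces the others to wait in line.
responses = queue.Queue()

def handle_request(request_id):
    # A real handler would parse the request and reply on the client
    # socket; here we just record a canned response.
    responses.put("response-%d" % request_id)

# In place of a daemon listening at a port, simulate four requests
# arriving at once, each handled concurrently.
threads = [threading.Thread(target=handle_request, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

collected = sorted(responses.get() for _ in range(4))
print(collected)
```

Compared with forking a child process per request, each thread here shares the server's code and data, making creation and teardown far cheaper.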
Benefits:
•There are four major categories of benefits to multi-threading:
1.Responsiveness - One thread may provide rapid response while other
threads are blocked or slowed down doing intensive calculations.
2.Resource sharing - By default threads share common code, data, and
other resources, which allows multiple tasks to be performed
simultaneously in a single address space.
3.Economy - Creating and managing threads (and context switches
between them) is much faster than performing the same tasks for
processes.
4.Scalability (utilization of multiprocessor architectures) - A single-threaded
process can only run on one CPU, no matter how many may be
available, whereas the execution of a multi-threaded application may be
split amongst available processors. (Note that single-threaded processes
can still benefit from multi-processor architectures when there are
multiple processes contending for the CPU, i.e., when the load average
rises above some threshold.)

Multicore programming:
• A recent trend in computer architecture is to produce chips with
multiple cores, or CPUs on a single chip.
• A multi-threaded application running on a traditional single-core chip
would have to interleave the threads. On a multi-core chip, however, the
threads could be spread across the available cores, allowing true parallel
processing.
• For operating systems, multi-core chips require new scheduling
algorithms to make better use of the multiple cores available.
• As multi-threading becomes more pervasive and more important
(thousands instead of tens of threads), CPUs have been developed to
support more simultaneous threads per core in hardware.
Programming challenges:
For application programmers, there are five areas where multi-core chips
present new challenges:
1.Identifying tasks - Examining applications to find activities that can be
performed concurrently.
2.Balance - Finding tasks to run concurrently that provide equal value, i.e.,
don't waste a thread on trivial tasks.
3.Data splitting - To prevent the threads from interfering with one another.
4.Data dependency - If one task is dependent upon the results of another, then
the tasks need to be synchronized to assure access in the proper order.
5.Testing and debugging - Inherently more difficult in parallel processing
situations, as the race conditions become much more complex and difficult to
identify.
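Challenge 4 (data dependency) can be sketched as follows, assuming a simple producer/consumer split; task_a and task_b are illustrative names. An Event enforces the required order: task B blocks until task A's result exists.

```python
import threading

# Task B depends on task A's result, so the two threads must be
# synchronized to guarantee the order A-then-B.
result = {}
a_done = threading.Event()

def task_a():
    result["value"] = 21          # produce the intermediate result
    a_done.set()                  # signal that the result is ready

def task_b():
    a_done.wait()                 # block until task A has finished
    result["value"] *= 2          # now safe to consume

ta = threading.Thread(target=task_a)
tb = threading.Thread(target=task_b)
tb.start()                        # starting B first is harmless: it waits
ta.start()
ta.join()
tb.join()

print(result["value"])  # 42, regardless of scheduling order
```

Omitting the Event would create exactly the kind of race condition that makes testing and debugging (challenge 5) so hard: task B might read the value before task A wrote it, and only on some runs.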

Types of parallelism:
In theory there are two different ways to parallelize the workload:
1.Data parallelism divides the data up amongst multiple cores (threads), and
performs the same task on each subset of the data. For example, dividing a
large image up into pieces and performing the same digital image processing on
each piece on different cores.
2.Task parallelism divides the different tasks to be performed among the
different cores and performs them simultaneously.
In practice no program is ever divided up solely by one or the other of these,
but instead by some sort of hybrid combination.
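Data parallelism can be sketched as below (the chunking scheme and names are illustrative): every thread performs the same operation, summing, but each on its own disjoint slice of the data, mirroring the image-processing example above.

```python
import threading

# Data parallelism: the same task (summing) runs in every thread,
# but each thread works on a different subset of the data.
data = list(range(100))
n_workers = 4
partials = [0] * n_workers

def sum_chunk(i):
    chunk = data[i::n_workers]    # each thread gets a disjoint slice
    partials[i] = sum(chunk)

threads = [threading.Thread(target=sum_chunk, args=(i,)) for i in range(n_workers)]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = sum(partials)
print(total)  # 4950, the same answer a single thread would compute
```

Task parallelism, by contrast, would give each thread a different function to run; real programs typically mix the two.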
Multithreading Models:
• There are two types of threads to be managed in a modern system: User
threads and kernel threads.
• User threads are supported above the kernel, without kernel support.
These are the threads that application programmers would put into their
programs.
• Kernel threads are supported within the kernel of the OS itself. All
modern OSes support kernel level threads, allowing the kernel to
perform multiple simultaneous tasks and/or to service multiple kernel
system calls simultaneously.
• In a specific implementation, the user threads must be mapped to kernel
threads, using one of the following strategies:

Many-to-one model:
•In the many-to-one model, many user-level threads are all mapped onto a
single kernel thread.
•Thread management is handled by the thread library in user space, which is
very efficient.
•However, if a blocking system call is made, then the entire process blocks,
even if the other user threads would otherwise be able to continue.
•Because a single kernel thread can operate only on a single CPU, the many-to-
one model does not allow individual processes to be split across multiple
CPUs.
•Green threads for Solaris and GNU Portable Threads implemented the many-to-
one model in the past, but few systems continue to do so today.
One-to-one model:
•The one-to-one model creates a separate kernel thread to handle each user
thread.
•The one-to-one model overcomes the problems listed above involving blocking
system calls and the splitting of processes across multiple CPUs.
•However, creating a kernel thread for every user thread adds significant
overhead, which can slow the system down.
•Most implementations of this model place a limit on how many threads can be
created.
•Linux and Windows from 95 to XP implement the one-to-one model for
threads.

Many-to-many model:
•The many-to-many model multiplexes any number of user threads onto an
equal or smaller number of kernel threads, combining the best features of the
one-to-one and many-to-one models.
•Users have no restrictions on the number of threads created.
•Blocking kernel system calls do not block the entire process.
•Processes can be split across multiple processors.
•Individual processes may be allocated variable numbers of kernel threads,
depending on the number of CPUs present and other factors.
•One popular variation of the many-to-many model is the two-tier model,
which allows either many-to-many or one-to-one operation.
•IRIX, HP-UX, and Tru64 UNIX use the two-tier model, as did Solaris prior to
Solaris 9.
Thread pools:
•Creating new threads every time one is needed and then deleting it when it is
done can be inefficient, and can also lead to a very large (unlimited) number of
threads being created.
•An alternative solution is to create a number of threads when the process first
starts, and put those threads into a thread pool.
o Threads are allocated from the pool as needed, and returned to the pool
when no longer needed.
o When no threads are available in the pool, the process may have to
wait until one becomes available.
•The (maximum) number of threads available in a thread pool may be
determined by adjustable parameters, possibly dynamically in response to
changing system loads.
•The Win32 API provides thread pools through the QueueUserWorkItem() function.
Java also provides support for thread pools through the java.util.concurrent
package, and Apple supports thread pools under the Grand Central Dispatch
architecture.
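Python's concurrent.futures offers the same idea as the pools described above. In this sketch (process is an illustrative task, not a library name), a fixed set of four worker threads is created once, and tasks are submitted to the pool instead of spawning a fresh thread per task.

```python
from concurrent.futures import ThreadPoolExecutor

# A pool of four reusable worker threads; tasks are drawn from the
# pool rather than creating and destroying a thread per task.
def process(item):
    return item * item

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The eight tasks reuse the four pooled threads, bounding the total thread count exactly as the notes describe; if all workers are busy, a submitted task simply waits for one to free up.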

Conclusion:
A thread is a flow of control within a process. A multithreaded process contains
several different flows of control within the same address space. The benefits
of multithreading include increased responsiveness to the user, resource sharing
within the process, economy, and scalability factors, such as more efficient use
of multiple processing cores. User-level threads are threads that are visible to
the programmer and are unknown to the kernel. The operating-system kernel
supports and manages kernel-level threads. In general, user-level threads are
faster to create and manage than kernel threads, because no intervention
from the kernel is required.
References:
1. Abraham Silberschatz, Greg Gagne, and Peter Baer Galvin, "Operating
System Concepts", Ninth Edition, Chapter 4.
