
Threads

CS3008 Operating Systems


Lecture 06
Process
• Process is defined as the unit of resource allocation and
protection
– Protected access to processors, other processes, files, I/O
• Processes embody two characteristic concepts
– Resource ownership:
• Processes own an address space (virtual memory space to hold the
process image)
• Processes own a set of resources (I/O devices, I/O channels, files, main
memory)
• Operating system protects process to prevent unwanted interference
between processes
– Scheduling / execution
• Process has execution state (running, ready etc.) and dispatching
priority
• Execution follows an execution path (trace) through one or more
programs
• This execution may be interleaved with that of other processes
Threads
• Processes have at least one thread of control
– This is the CPU context when the process is dispatched for
execution
• Goal
– Allow processes to have multiple threads of
control
– Multiple threads run in the same address space,
share the same memory space
• The spawning of a thread only creates a new thread
control structure
Threads
• The unit of dispatching in modern operating systems is
usually a thread
– A thread represents a single path of execution within a process
– Operating system can manage multiple threads of
execution within a process
– The thread is provided with its own register context and
stack space
– It is placed in the ready queue
– Threads are also called “lightweight processes”
• The unit of resource ownership is the process
– When a new process is created, at least one thread within
the process is created as well
– Threads share resources owned by a process (code, data,
file handles)
Multithreading
• Multithreading is the ability of an operating
system to support multiple threads of execution
within a single process
• Threads are described by the following:
– Thread execution state
• running, ready, blocked
– Thread Control Block
• A saved thread context when not running (each thread has a
separate program counter)
– An execution stack
– Some per-thread static storage for local variables
– Access to memory and resources of its process, shared
with all other threads of that process
Multithreaded Process Model
Threads
• All threads share the same address space
– Share global variables
• All threads share the same open files, child
processes, signals, etc.
• There is no protection between threads
– As they share the same address space, they may
overwrite each other's data
• As a process is owned by one user, all threads
are owned by one user
Threads vs Processes
• Less time to create a new thread (about ten times
faster than process creation in Unix)
• Less time to terminate a thread
• Less time to switch between threads
• Threads enhance the efficiency of communication
between cooperating activities: since all threads
live in the same process context, there is no need
to call kernel routines
Threads vs Processes
• Advantages of Threads
– Much faster to create a thread than a process
• Spawning a new thread only involves allocating a new stack and a
new thread control block
– Much faster to switch between threads than to switch
between processes
– Threads share data easily
• Disadvantages
– Processes are more flexible
• They don’t have to run on the same processor
– No protection between threads
• Share same memory, may interfere with each other
– If threads are implemented as user threads instead of
kernel threads
• If one thread blocks, all threads in process block
Thread States
[State diagram: Ready -(1. Dispatch)-> Running; Running -(2. Wait-for-event)-> Blocked; Running -(3. Timeout)-> Ready; Blocked -(4. Event-occurred)-> Ready]

• Threads have three states
– Running: CPU executes thread
– Ready: thread control block is placed in Ready queue
– Blocked: thread awaits event
Thread Operations
• There are four basic operations for managing threads
– Spawn / create
• A thread is created and provided with its own register context and
stack space, it can spawn further threads
• It is placed on the Ready queue
– Block:
• if a thread waits for an event, it will block and be placed on the event
queue
• the processor may switch to another thread in the same or a different
process
– Unblock:
• When the event occurs, for which the thread is waiting, it is placed on
the ready queue
– Finish:
• When a thread completes, its register context and stack are
deallocated
Thread Programming
• POSIX standard threads: pthreads
• Describes an API for creating and managing
threads
• There is at least one thread that is created by
executing main()
• Other threads are spawned / created from this
initial thread
POSIX Thread Programming
• Thread creation
pthread_create ( thread, attr, start_routine, arg )
– Returns a new thread ID via the parameter “thread”
– Executes the routine specified by “start_routine”
with the argument specified by “arg”
• Thread termination
pthread_exit ( status )
– Terminates the thread, passes “status” to any
thread waiting on it in pthread_join()
POSIX Thread Programming
• Thread synchronisation
pthread_join ( threadid, status )
– Blocks the calling thread until the thread specified by
“threadid” terminates
– The argument “status” passes on the return status of
pthread_exit(), called by the thread specified by
“threadid”
• Thread yield
pthread_yield ( )
– Calling thread gives up the CPU and enters the Ready
queue (non-portable; standard POSIX specifies sched_yield() instead)
Thread Programming
Thread Implementation
• Two main categories of thread implementation
– User-level Threads (ULTs)
– Kernel-level Threads (KLTs)

[Figure: pure user-level threads (ULT), pure kernel-level threads (KLT), and combined-level ULT/KLT implementations]
User-Level Threads
• User-Level Threads
– Threads are managed by application, using thread
library functions
– Kernel not aware of the existence of threads, it knows
only processes with one thread of execution
• Benefit
– Light thread switching
• Does not need kernel mode privileges
• Also called “green threads” on some systems (e.g.
Solaris)
User-Level Threads
• Problem
– If one thread blocks, the entire
process is blocked, including all
other threads in it
– Only one thread can access the
kernel at a time, as the process is
the unit of execution known by
kernel
– All threads have to run on the
same processor in a
multiprocessor system
• cannot run in parallel utilising
different processors at the same
time, as process is dispatched on one
processor
Kernel-Level Threads
• Threads are managed by Kernel
– Creating / terminating threads via
system calls
• Benefit
– Fine-grain scheduling done on
thread basis
– If a thread blocks (e.g. waiting for
I/O), another one can be scheduled
by kernel without blocking the
whole process
– Threads can be distributed to
multiple processors and run in
parallel
• Problem
– Heavy thread switching involving
mode switch
Kernel Threads
• Example Systems
– Windows XP/7/8
– Solaris
– Linux
– Mac OSX
Hybrid Implementations
• Try to combine advantages of both user-level and kernel-
level threads
– User-level: light-weight thread switching
– Kernel-level: when one thread blocks, other threads (of the same or a
different process) can be dispatched; true parallelism in
multiprocessor systems possible
• Operating system manages kernel threads and schedules
those for execution
• Mapping of user-level threads onto these kernel threads
• Different Multithreading Models:
– Many-to-one
– One-to-one
– Many-to-many
Many-to-One Model
• All user-level threads of one
process mapped to a single kernel-
level thread
• Thread management in user space
– Efficient
– Application can run its own
scheduler implementation
• One thread can access the kernel at
a time
– Limited concurrency, limited
parallelism
• Examples
– “Green threads” (e.g. Solaris)
– Gnu Portable Threads
One-to-One Model
• Each user-level thread mapped to a kernel thread
• One blocking thread does not block other threads
• Multiple threads access kernel concurrently

• Problem
– Creating a user-level thread requires creation of corresponding
kernel thread
– Kernel may restrict the number of threads created
• Example systems
– Windows, Linux, Solaris 9 (and later) implement the one-to-one
model
Many-to-Many Model
• Many user-level threads are multiplexed (mapped
dynamically) to a smaller or equal number of
kernel threads
– No fixed binding between a user and a kernel
thread
• The number of kernel threads is specific to a
particular application or computer system
– Application may be allocated more kernel threads
on a multiprocessor architecture than on a single-
processor architecture
• No restriction on user-level threads
– Applications can be designed with as many user-
level threads as needed
– Threads are then mapped dynamically onto a
smaller set of currently available kernel threads for
execution
Two-level Model
• A variant of the Many-to-Many model that additionally allows a
user thread to be bound to a dedicated kernel thread
• Was used in older Unix-like systems
– IRIX, HP-UX, Tru64 Unix, Solaris 8
Threading Issues – fork() and exec()
• The semantics of fork() and exec() change in a
multithreaded program
– Remember:
• fork() creates an identical copy of the calling process
– In case of a multithreaded program
• Should the new process duplicate all threads?
• Or should the new process be created with only one thread?
– If after fork(), the new process calls exec() to start a new program
within the created process image, only one thread may be
sufficient
– Solution: some Unix systems implement two versions
of fork(): one duplicates all threads, the other only the calling thread
Threading Issues – Thread cancellation
• Terminating a thread before it has finished
– whether and when this is needed depends on the application
• E.g.: Web browser uses threads to load web page, but when user
selects a link to go to another web page, the loading of the current
web page can be aborted
• Two approaches
– Asynchronous cancellation terminates the target thread
immediately
– Deferred cancellation
• A cancellation request is sent to a target thread, but it continues to
execute undeterred up to a so-called cancellation point
– Particular areas in its program code where it checks for pending cancellation
requests and where a cancellation could take place safely
• If the target thread detects a cancellation request at a cancellation
point, it will end its execution, otherwise it will continue its execution
Threading Issues – Signal Handling
• Signals are used in Unix systems to notify a process
about the occurrence of an event
• All signals follow the same pattern
1. Signal is generated by particular event
2. Signal is delivered to a process
3. Once delivered, a signal must be handled
• In multithreaded systems, there are 4 options
– Deliver the signal to the thread to which the signal
applies
– Deliver the signal to every thread in the process
– Deliver the signal to certain threads in the process
– Assign a specific thread to receive all signals for the
process
Threading Issues – Thread Pools
• Threads come with some overhead
• Unlimited thread creation may exhaust memory and CPU
• Solution
– Thread pool: create a number of threads at process startup and put
them in a pool, from where they will be allocated
– When an application needs to spawn a thread, an allocated thread is
taken from the pool and adapted to the application’s needs
• Advantage
– Usually faster to service a request with an already instantiated
thread than to create a new one
– Allows the number of threads in an application to be bounded by the
thread pool size
• Number of pre-allocated threads in pool may depend on
– Number of CPUs, memory size
– Expected number of concurrent requests
