Multithreading Models
There are three dominant models for
thread libraries, each with its own
trade-offs
many threads on one LWP (many-to-one)
one thread per LWP (one-to-one)
many threads on many LWPs (many-to-many)
Many-to-one
In this model, the library maps all
threads to a single lightweight process
Advantages:
totally portable
easy to implement with few system
dependencies
Disadvantages:
cannot take advantage of parallelism
may have to block the whole process
for synchronous I/O (though there is
a clever technique for avoiding this)
Mainly used in language systems,
portable libraries
One-to-one
In this model, the library maps each
thread to a different lightweight
process
Advantages:
can exploit parallelism and make
blocking system calls
Disadvantages:
thread creation involves LWP
creation
each thread takes up kernel
resources
limiting the total number of
threads
Used in LinuxThreads and other
systems where LWP creation is not
too expensive
Many-to-many
In this model, the library has
two kinds of threads: bound
and unbound
bound threads are mapped
each to a single
lightweight process
unbound threads may be
mapped to the same LWP
Probably the best of both
worlds
Used in the Solaris
implementation of Pthreads
(and several other Unix
implementations)
Pipeline model
do some work, pass partial result to next thread
Up-calls
fast control flow transfer for layered systems
Version stamps
technique for keeping information consistent
Boss/Workers
Boss:
  forever {
    get a request
    switch(request)
      case X: Fork(taskX)
      case Y: Fork(taskY)
  }
Worker:
  taskX()
Advantage: simplicity
Disadvantage: bound on number of workers, overhead of
thread creation, contention if requests have interdependencies
Variants: fixed thread pool (aka workpile, workqueue),
producer/consumer relationship, workers determine what needs
to be performed
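The fixed-thread-pool variant mentioned above can be sketched in Python: the boss puts requests on a shared work queue and a fixed pool of workers pulls them off, avoiding per-request thread creation. The task names, handler table, and sentinel-based shutdown are illustrative choices, not part of the original.

```python
import queue
import threading

# Hypothetical request handlers standing in for taskX / taskY
def task_x(arg):
    return ("X", arg * 2)

def task_y(arg):
    return ("Y", arg + 1)

HANDLERS = {"X": task_x, "Y": task_y}

requests = queue.Queue()   # the shared workpile
results = queue.Queue()

def worker():
    # Each worker loops, pulling requests off the shared queue
    while True:
        item = requests.get()
        if item is None:           # sentinel: boss tells this worker to exit
            requests.task_done()
            break
        kind, arg = item
        results.put(HANDLERS[kind](arg))
        requests.task_done()

NUM_WORKERS = 4
pool = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in pool:
    t.start()

# The boss: accept requests and hand them to the pool instead of forking
for req in [("X", 1), ("Y", 1), ("X", 5)]:
    requests.put(req)

requests.join()                    # wait until every request is handled
for _ in pool:
    requests.put(None)             # one sentinel per worker shuts the pool down
for t in pool:
    t.join()

out = sorted(results.queue)
```

Because the pool size is fixed, thread-creation cost is paid once at startup, at the price of a hard bound on concurrency.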
Pipeline
Each thread completes portion of a
task, and passes results
like an assembly line or a processor
pipeline
Advantages: trivial synchronization,
simplicity
Disadvantages: limits degree of
parallelism, throughput driven by
slowest stage, hand-tuning needed
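A minimal sketch of the pipeline structure, assuming two stages connected by queues (the stage functions and the parse-then-sum workload are invented for illustration): each stage runs in its own thread and passes its partial result to the next, so the only synchronization is the queue hand-off.

```python
import queue
import threading

# Stage 1 parses raw input; stage 2 accumulates. Each stage is one thread,
# connected by a queue, like stations on an assembly line.
stage1_in = queue.Queue()
stage2_in = queue.Queue()
done = queue.Queue()

def stage1():
    while True:
        raw = stage1_in.get()
        if raw is None:
            stage2_in.put(None)    # propagate shutdown downstream
            break
        stage2_in.put(int(raw))    # partial result passed to next stage

def stage2():
    total = 0
    while True:
        n = stage2_in.get()
        if n is None:
            done.put(total)        # final result of the pipeline
            break
        total += n

threads = [threading.Thread(target=stage1), threading.Thread(target=stage2)]
for t in threads:
    t.start()
for raw in ["1", "2", "3"]:
    stage1_in.put(raw)
stage1_in.put(None)                # end of input
for t in threads:
    t.join()

total = done.get()
```

Note how throughput is set by the slower of the two stages: if `stage1` is slow, `stage2` idles waiting on its queue, which is the hand-tuning problem the slide mentions.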
Up-calls
Layered applications, e.g. network protocol stacks
have top-down and bottom-up flows
Up-calls are a technique in which you structure layers so
that they can expect calls from below
Thread pool of specialized threads in each layer
essentially an up-call pipeline per connection
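One way to sketch the up-call idea, under the assumption of a two-layer stack (the class and method names here are hypothetical): the layer above registers a per-connection handler with the layer below, so bottom-up traffic is delivered by a direct call upward rather than by polling.

```python
class NetworkLayer:
    """Hypothetical lower layer that delivers packets via up-calls."""

    def __init__(self):
        self.upcalls = {}              # connection id -> handler in layer above

    def register_upcall(self, conn, handler):
        # The upper layer installs the function to be called from below
        self.upcalls[conn] = handler

    def packet_arrived(self, conn, payload):
        # Bottom-up flow: transfer control directly into the layer above
        self.upcalls[conn](payload)

received = []
net = NetworkLayer()
# Upper layer registers its handler for connection 1
net.register_upcall(1, lambda payload: received.append(payload.upper()))
# A packet arriving below flows upward as a plain call
net.packet_arrived(1, "hello")
```

With a thread (or pool) per layer servicing such handlers per connection, this becomes the up-call pipeline the slide describes.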
Version Stamps
(Not a programming structure idea
but useful technique for any kind
of distributed environment)
Maintain version number for
shared data
keep local cached copy of data
check versions to determine if
changed
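The version-stamp check can be sketched as follows, assuming a single authoritative store and one cache (both classes are illustrative): the store bumps a version number on every write, and the cache refetches only when its stamp no longer matches.

```python
class Store:
    """Authoritative copy of the shared data, with a version stamp."""

    def __init__(self):
        self.version = 0
        self.data = None

    def write(self, data):
        self.data = data
        self.version += 1          # bump the stamp on every update

class Cache:
    """Local cached copy; compares stamps to detect staleness."""

    def __init__(self, store):
        self.store = store
        self.version = -1          # no data cached yet
        self.copy = None
        self.refetches = 0

    def read(self):
        if self.version != self.store.version:   # stamp mismatch: stale
            self.copy = self.store.data
            self.version = self.store.version
            self.refetches += 1
        return self.copy

store = Store()
store.write("v1")
cache = Cache(store)
cache.read()          # first read fetches "v1"
cache.read()          # second read is served from the local copy
store.write("v2")     # store changes, stamps now differ
value = cache.read()  # mismatch detected, cache refetches
```

Comparing two small integers is far cheaper than re-reading or re-sending the data itself, which is why this works well in distributed settings.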