THREAD Notes
Context Switching:
An operating system uses context switching to move a process between states so that its work can execute on the CPU. It is the process of saving the context (state) of the old process (suspend) and loading the saved context of the new process (resume). It occurs whenever the CPU switches from one process to another. The state of the CPU's registers and program counter at any instant represents a context. Saving the state of the currently executing process means copying all live register values into its PCB (Process Control Block); restoring the state of the process to run next means copying the register values from its PCB back into the CPU registers.
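A minimal sketch of what saving and restoring a context looks like, assuming a hypothetical register set and a simplified PCB (a real kernel saves the actual hardware registers defined by the architecture):

/* Minimal sketch of "saving the context to the PCB". The register set and
   PCB fields here are hypothetical simplifications. */
#include <stdio.h>
#include <string.h>

struct cpu_context {                 /* hypothetical subset of CPU state */
    unsigned long program_counter;
    unsigned long stack_pointer;
    unsigned long general_regs[8];
};

struct pcb {                         /* simplified Process Control Block */
    int pid;
    struct cpu_context saved;        /* context stored on a switch-out */
};

/* Suspend: copy the live registers of the running process into its PCB. */
void save_context(struct pcb *p, const struct cpu_context *live)
{
    p->saved = *live;
}

/* Resume: copy the saved values from the PCB back into the live registers. */
void restore_context(const struct pcb *p, struct cpu_context *live)
{
    *live = p->saved;
}

int main(void)
{
    struct cpu_context live = { .program_counter = 0x400080, .stack_pointer = 0x7fff0000 };
    struct pcb old_proc = { .pid = 1 };

    save_context(&old_proc, &live);     /* old process is suspended        */
    memset(&live, 0, sizeof live);      /* CPU now runs some other process */
    restore_context(&old_proc, &live);  /* old process is resumed later    */

    printf("restored PC = 0x%lx\n", live.program_counter);
    return 0;
}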
A thread is also called a lightweight process.
Threads provide a way to improve application performance through parallelism.
Each thread belongs to exactly one process and no thread can exist outside a process.
There can be more than one thread inside a process. Each thread of the same process uses a
separate program counter, a stack of activation records, and its own control block.
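A minimal POSIX threads sketch (function names are illustrative) showing two threads running inside one process, each with its own stack and its own flow of control:

/* Two threads inside one process, each executing with its own stack while
   sharing the process address space. Compile with: gcc demo.c -pthread */
#include <pthread.h>
#include <stdio.h>

void *worker(void *arg)
{
    int id = *(int *)arg;              /* each thread reads its own argument */
    printf("thread %d running\n", id);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;

    pthread_create(&t1, NULL, worker, &id1);   /* create two threads      */
    pthread_create(&t2, NULL, worker, &id2);

    pthread_join(t1, NULL);                    /* wait for both to finish */
    pthread_join(t2, NULL);
    return 0;
}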
Need of Thread:
It takes far less time to create a new thread in an existing process than to create a new
process.
Threads share common data, so they do not need to use Inter-Process Communication
(see the sketch after this list).
Context switching is faster when working with threads.
It takes less time to terminate a thread than a process.
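A small sketch of the data-sharing point above, assuming POSIX threads: the threads update a shared global variable directly, needing only a mutex for synchronisation and no inter-process communication mechanism:

/* Threads of the same process share global data directly, so no IPC
   mechanism (pipes, shared memory segments, ...) is needed.
   Compile with: gcc demo.c -pthread */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                       /* shared by all threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *add_one_thousand(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000; i++) {
        pthread_mutex_lock(&lock);             /* protect the shared data */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, add_one_thousand, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    printf("counter = %ld\n", counter);        /* 4000: no IPC was needed */
    return 0;
}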
Process vs Thread:
The primary difference is that threads within the same process run in a shared memory space,
while processes run in separate memory spaces. Unlike processes, threads are not independent
of one another; as a result, a thread shares its code section, data section, and OS resources
(such as open files and signals) with the other threads of its process. But, like a process, a
thread has its own program counter (PC), register set, and stack space.
A thread is a single sequential stream of execution within a process. Threads have many of the
same properties as processes, which is why they are called lightweight processes. On a single CPU,
threads execute one after another but give the illusion of executing in parallel. Each thread can
be in one of several states.
Each thread has
1. A program counter
2. A register set
3. A stack space
Threads are not independent of each other, as they share the code section, data section, OS resources, etc.
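A short sketch contrasting the two on a POSIX system: a child created with fork() works on its own copy of memory, while a thread created with pthread_create() shares the process's memory:

/* fork() gives the child a separate copy of the address space;
   pthread_create() shares it. Compile with: gcc demo.c -pthread */
#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static int value = 0;

void *thread_body(void *arg) { (void)arg; value = 42; return NULL; }

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {                 /* child process writes to its own copy */
        value = 99;
        _exit(0);
    }
    wait(NULL);
    printf("after fork   : value = %d\n", value);   /* still 0              */

    pthread_t t;
    pthread_create(&t, NULL, thread_body, NULL);
    pthread_join(t, NULL);
    printf("after thread : value = %d\n", value);   /* 42: memory is shared */
    return 0;
}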
Comparison between Threads and Processes –
Resources: Processes have their own address space and resources, such as memory
and file handles, whereas threads share memory and resources with the program that
created them.
Scheduling: Processes are scheduled to use the processor by the operating system,
whereas threads are scheduled to use the processor by the operating system or the
program itself.
Creation: The operating system creates and manages processes, whereas the program
or the operating system creates and manages threads.
Communication: Because processes are isolated from one another and must rely on
inter-process communication mechanisms, they generally have more difficulty
communicating with one another than threads do. Threads, on the other hand, can
interact directly with other threads within the same program.
Threads, in general, are lighter weight than processes and are better suited for concurrent execution
within a single program. Processes are commonly used to run separate programs or to
isolate resources between programs.
A thread can be in one of the following states:
(1) Ready
(2) Running
(3) Waiting
(4) Delayed
(5) Blocked
An example of delaying a thread is the snoozing of an alarm: after it rings for the first
time and is not switched off by the user, it rings again after a specific amount of time.
During that time, the thread is put to sleep.
6. When a thread generates an I/O request and cannot proceed until it is completed, it transitions
from the RUNNING state to the BLOCKED queue.
7. After its work is completed, the thread transitions from RUNNING to FINISHED.
The difference between the WAITING and BLOCKED states is that in WAITING the
thread waits for a signal from another thread or for another process to complete,
meaning the waiting time is bounded. In the BLOCKED state, however, there is no specified time
(it depends on when the user provides the input).
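A small illustration of the DELAYED and WAITING ideas, assuming POSIX threads: the alarm thread sleeps for a fixed time (delayed), while the main thread joins on it (waiting for another thread to finish):

/* The alarm thread is put to sleep for a fixed time (DELAYED);
   the main thread waits for it to finish (WAITING).
   Compile with: gcc demo.c -pthread */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

void *alarm_thread(void *arg)
{
    (void)arg;
    printf("alarm: ringing\n");
    sleep(2);                       /* DELAYED: not runnable for 2 seconds */
    printf("alarm: ringing again after the snooze\n");
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, alarm_thread, NULL);
    pthread_join(t, NULL);          /* WAITING: main waits for the alarm thread */
    return 0;
}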
In order to execute all threads successfully, the operating system needs to maintain the
information about each thread in a Thread Control Block (TCB).
Components of Threads
Any thread has the following components.
1. Program counter
2. Register set
3. Stack space
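A hypothetical Thread Control Block holding the three components listed above plus a state field; real TCB layouts are kernel- and library-specific:

/* Hypothetical TCB; field names and sizes are illustrative only. */
#include <stdio.h>

enum thread_state { READY, RUNNING, WAITING, DELAYED, BLOCKED, FINISHED };

struct thread_control_block {
    int               tid;              /* thread identifier                */
    unsigned long     program_counter;  /* 1. per-thread program counter    */
    unsigned long     registers[16];    /* 2. per-thread register set       */
    void             *stack_base;       /* 3. per-thread stack space        */
    unsigned long     stack_size;
    enum thread_state state;            /* one of the states listed earlier */
};

int main(void)
{
    struct thread_control_block tcb = { .tid = 1, .state = READY };
    printf("TCB for thread %d created in READY state\n", tcb.tid);
    return 0;
}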
Benefits of Threads
Enhanced throughput of the system: When a process is split into many threads, and each
thread is treated as a job, the number of jobs completed per unit time increases. That is why
the throughput of the system also increases.
Effective utilization of multiprocessor systems: When there is more than one thread in a
process, more than one thread can be scheduled on more than one processor.
Faster context switch: The context switching time between threads is less than for a
process context switch. A process context switch means more overhead for the CPU.
Responsiveness: When a process is split into several threads, the process can respond to the
user as soon as one of its threads completes its part of the work.
Communication: Communication between multiple threads is simple because the threads share the
same address space, whereas communication between two processes has to go through a small
set of dedicated inter-process communication mechanisms.
Resource sharing: Resources such as code, data, and files can be shared among all threads within a
process. Note: the stack and registers cannot be shared between threads; there
is a separate stack and register set for each thread, as the sketch below illustrates.
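A small sketch of the resource-sharing note above, assuming POSIX threads: the global variable has a single address visible to every thread, while each thread's local variable lives at a different address because every thread has its own stack:

/* Globals (data section) are shared; each thread's locals live on its
   own stack. Compile with: gcc demo.c -pthread */
#include <pthread.h>
#include <stdio.h>

int shared_global = 0;                     /* one copy, shared by all threads */

void *show_addresses(void *arg)
{
    int local_on_stack = 0;                /* separate copy per thread */
    printf("thread %s: &shared_global=%p  &local_on_stack=%p\n",
           (const char *)arg, (void *)&shared_global, (void *)&local_on_stack);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, show_addresses, "A");
    pthread_create(&t2, NULL, show_addresses, "B");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}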
Types of Threads:
1. User Level Thread (ULT) – Implemented in a user-level library; these threads are not
created using system calls. Thread switching does not need to call the OS or cause an
interrupt to the kernel. The kernel does not know about user-level threads and manages
the process as if it were single-threaded.
o Advantages of ULT –
Can be implemented on an OS that doesn't support multithreading.
Simple representation, since a thread has only a program counter, register
set, and stack space.
Simple to create, since no kernel intervention is needed.
Thread switching is fast, since no OS calls need to be made (a sketch of
such a user-level switch follows this list).
o Limitations of ULT –
Little or no coordination between the threads and the kernel.
If one thread causes a page fault, the entire process blocks.
2. Kernel Level Thread (KLT) – The kernel knows about and manages the threads. Instead of
a thread table in each process, the kernel itself has a (master) thread table that keeps
track of all the threads in the system. In addition, the kernel also maintains the traditional
process table to keep track of processes. The OS kernel provides system calls to create
and manage threads.
o Advantages of KLT –
Since the kernel has full knowledge of the threads in the system, the
scheduler may decide to give more time to processes having a large
number of threads.
Good for applications that frequently block.
o Limitations of KLT –
Slow and inefficient compared to user-level threads.
It requires a thread control block for each thread, so there is overhead.
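A sketch of the user-level switching idea behind ULTs, using the POSIX <ucontext.h> routines: the "thread switch" is just a library-level save and restore of register state onto a user-allocated stack, with no kernel scheduling involved:

/* Cooperative user-level thread switch with getcontext/makecontext/
   swapcontext (obsolescent in POSIX but still illustrative). */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, ult_ctx;
static char ult_stack[64 * 1024];           /* user-allocated stack for the ULT */

static void ult_body(void)
{
    printf("user-level thread: running\n");
    swapcontext(&ult_ctx, &main_ctx);       /* switch back; no kernel scheduling */
    printf("user-level thread: resumed\n");
}

int main(void)
{
    getcontext(&ult_ctx);
    ult_ctx.uc_stack.ss_sp = ult_stack;     /* give the ULT its own stack        */
    ult_ctx.uc_stack.ss_size = sizeof ult_stack;
    ult_ctx.uc_link = &main_ctx;            /* return here when the ULT finishes */
    makecontext(&ult_ctx, ult_body, 0);

    swapcontext(&main_ctx, &ult_ctx);       /* "dispatch" the ULT                */
    printf("main: got control back\n");
    swapcontext(&main_ctx, &ult_ctx);       /* resume the ULT                    */
    printf("main: ULT finished\n");
    return 0;
}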
Key Points:
1. For ULTs, each process keeps track of its own threads using a thread table.
2. For KLTs, the kernel maintains a thread table (of TCBs) as well as the process table (of PCBs).
The kernel maintains context information for the process as a whole and for individual
threads within the process. Scheduling by the kernel is done on a thread basis. The kernel
performs thread creation, scheduling, and management in kernel space. Kernel threads are
generally slower to create and manage than user threads.
Advantages
The kernel can simultaneously schedule multiple threads from the same process on multiple
processors.
If one thread in a process is blocked, the Kernel can schedule another thread of the same
process.
Kernel routines themselves can be multithreaded.
Disadvantages
Kernel threads are generally slower to create and manage than the user threads.
Transfer of control from one thread to another within the same process requires a mode
switch to the Kernel.
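A Linux-specific illustration of kernel-level threads, assuming glibc and POSIX threads: each pthread is backed by a kernel thread with its own kernel thread ID, which is why the kernel can schedule (or block) each one individually:

/* Each POSIX thread on Linux has its own kernel thread ID (gettid),
   showing that the kernel knows about and manages every thread.
   Compile with: gcc demo.c -pthread */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

void *report(void *arg)
{
    (void)arg;
    printf("pid=%ld kernel tid=%ld\n",
           (long)getpid(), (long)syscall(SYS_gettid));
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, report, NULL);
    pthread_create(&t2, NULL, report, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}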
Multithreading Models
Some operating systems provide a combined user-level thread and kernel-level thread facility;
Solaris is a good example of this combined approach. In a combined system, multiple threads
within the same application can run in parallel on multiple processors, and a blocking system
call need not block the entire process. There are three types of multithreading models:
many-to-many, many-to-one, and one-to-one.
Many to Many Model
The many-to-many model multiplexes many user-level threads onto a smaller or equal number of
kernel-level threads. In this model, developers can create as many
user threads as necessary, and the corresponding kernel threads can run in parallel on a
multiprocessor machine. This model provides the best concurrency: when a
thread performs a blocking system call, the kernel can schedule another thread for execution.
Many to One Model
The many-to-one model maps many user-level threads to one kernel-level thread. Thread
management is done in user space by the thread library. When a thread makes a blocking
system call, the entire process is blocked. Only one thread can access the kernel at a
time, so multiple threads are unable to run in parallel on multiprocessors.
User-level thread libraries implemented on operating systems whose kernels do not support
threads use the many-to-one model.
One to One Model
There is a one-to-one relationship between user-level threads and kernel-level threads. This model
provides more concurrency than the many-to-one model. It also allows another thread to run
when a thread makes a blocking system call. It supports multiple threads executing in
parallel on multiprocessors.
The disadvantage of this model is that creating a user thread requires creating the corresponding kernel
thread. OS/2, Windows NT, and Windows 2000 use the one-to-one relationship model.
Difference between User-Level & Kernel-Level Threads
1. User-level threads are faster to create and manage; kernel-level threads are slower to create and manage.
3. A user-level thread is generic and can run on any operating system; a kernel-level thread is specific to the operating system.