Threads in Operating System
A process can be split into many threads. For example, in a browser, each tab can be viewed as a thread. MS Word uses many threads: one thread to format the text, another to process input, and so on.
Need for Threads:
o It takes far less time to create a new thread in an existing process than to create a new process (a minimal sketch of thread creation follows this list).
o Threads share common data, so they do not need to use Inter-Process Communication.
o Context switching is faster when working with threads.
o It takes less time to terminate a thread than a process.
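Below is a minimal sketch, assuming POSIX threads (pthreads) on a UNIX-like system, of how a new thread is created inside an existing process; the worker function and its argument are purely illustrative and not part of the text above.

#include <pthread.h>
#include <stdio.h>

/* Work performed by the new thread. */
static void *worker(void *arg)
{
    printf("hello from thread %ld\n", (long)arg);
    return NULL;
}

int main(void)
{
    pthread_t tid;

    /* Create one extra thread in the current process; no new address
       space, page tables, or file table has to be built, which is why
       this is much cheaper than creating a new process with fork(). */
    if (pthread_create(&tid, NULL, worker, (void *)1L) != 0) {
        perror("pthread_create");
        return 1;
    }

    /* Wait for the thread to finish. */
    pthread_join(tid, NULL);
    return 0;
}

Compiling with the -pthread flag links the thread library. The new thread shares the process's address space and open files, so only a small thread control block and a stack need to be allocated for it.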
Types of Threads
In the operating system, there are two types of threads.
1. Kernel level thread.
2. User-level thread.
User-level thread
The operating system does not recognize user-level threads. User-level threads are easy to implement, and they are implemented entirely by the user. If a user-level thread performs a blocking operation, the whole process is blocked. The kernel knows nothing about user-level threads and manages them as if they belonged to a single-threaded process. Examples: Java threads, POSIX threads, etc.
Advantages of user-level threads:
1. User-level threads are easier to implement than kernel-level threads.
2. User-level threads can be used on operating systems that do not support threads at the kernel level.
3. They are faster and more efficient.
4. Context-switch time is shorter than for kernel-level threads.
5. They do not require modifications to the operating system.
6. The representation of user-level threads is very simple: the registers, PC, stack, and a small thread control block are all stored in the address space of the user-level process.
7. It is simple to create, switch, and synchronize threads without the intervention of the kernel.
Disadvantages of user-level threads:
1. User-level threads lack coordination between the thread and the kernel.
2. If one thread causes a page fault, the entire process is blocked.
Components of Threads
Any thread has the following components.
1. Program counter
2. Register set
3. Stack space
Benefits of Threads
o Enhanced throughput of the system: When a process is split into many threads, and each thread is treated as a job, the number of jobs completed per unit time increases, so the throughput of the system also increases.
o Effective utilization of multiprocessor systems: When you have more than one thread in a single process, you can schedule more than one thread on more than one processor.
o Faster context switch: Context switching between threads takes less time than context switching between processes; a process context switch means more overhead for the CPU.
o Responsiveness: When a process is split into several threads, and one thread completes its execution, its result can be returned immediately while the remaining threads keep running, so the application stays responsive.
o Communication: Communication between multiple threads is simple because the threads share the same address space, whereas communication between two processes requires dedicated inter-process communication mechanisms.
o Resource sharing: Resources such as code, data, and files can be shared among all threads within a process. Note: The stack and registers cannot be shared between threads; each thread has its own stack and register set.
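As a small illustration of the resource-sharing point above, here is a hedged pthreads sketch (the variable and function names are invented for this example): both threads update the same global counter, which lives in the shared address space, while each keeps its own local variable on its private stack.

#include <pthread.h>
#include <stdio.h>

static int shared_counter = 0;                        /* shared by all threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    int local = 0;                                    /* on this thread's own stack */
    for (int i = 0; i < 100000; i++) {
        local++;
        pthread_mutex_lock(&lock);
        shared_counter++;                             /* shared data must be protected */
        pthread_mutex_unlock(&lock);
    }
    printf("thread %ld: local = %d\n", (long)arg, local);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_counter = %d\n", shared_counter);  /* 200000: updated by both threads */
    return 0;
}

Each thread prints local = 100000 because its local variable is private, while shared_counter reflects the work of both threads because the data segment is shared.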
Multithreading Model:
Multithreading allows an application to divide its task into individual threads. In multithreading, the same process or task is carried out by a number of threads; in other words, more than one thread performs the task. Multitasking can be achieved through multithreading.
The main drawback of single-threaded systems is that only one task can be performed at a time. Multithreading overcomes this drawback by allowing multiple tasks to be performed simultaneously.
For example, consider a multithreaded web server: client1, client2, and client3 can access the web server at the same time without waiting, because each request is serviced by its own thread. In multithreading, several tasks can run at the same time.
In an operating system, threads are divided into user-level threads and kernel-level threads. User-level threads are handled above the kernel and are therefore managed without any kernel support. On the other hand, the operating system directly manages kernel-level threads. Nevertheless, there must be some form of relationship between user-level and kernel-level threads.
There are three established multithreading models that classify these relationships:
1. Many-to-One model: This model maps all the user-level threads of a process to a single kernel-level thread. All thread management is done in user space. The disadvantage of this model is that, since only one kernel-level thread can be scheduled at any given time, it cannot take advantage of the hardware acceleration offered by multithreaded processors or multi-processor systems, and if one thread makes a blocking call, the whole process is blocked.
2. One-to-One model: This model maps each user-level thread to its own kernel-level thread.
3. Many-to-Many model: This model multiplexes several user-level threads onto the same or a smaller number of kernel-level threads.
Threading issues:
We now discuss some of the issues to consider when designing multithreaded programs. These issues are as follows −
The fork() and exec() system calls
The fork() system call is used to create a duplicate process. The semantics of the fork() and exec() system calls change in a multithreaded program.
If one thread in a program calls fork(), does the new process duplicate all the threads, or is the new process single-threaded? Some UNIX systems have chosen to have two versions of fork(): one that duplicates all threads and another that duplicates only the thread that invoked the fork() system call.
If a thread calls the exec() system call, the program specified in the parameter to exec() will replace the entire process, including all of its threads.
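The sketch below, assuming a modern POSIX system with pthreads, illustrates this behaviour: the child created by fork() contains a copy of only the calling thread (the single-thread-duplicating variant described above), and exec() then replaces the entire child process image. The use of ls as the new program is just an example.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

/* Some other thread doing its own work in the parent process. */
static void *background(void *arg)
{
    (void)arg;
    for (;;)
        pause();
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, background, NULL);

    pid_t pid = fork();          /* the child gets a copy of this thread only */
    if (pid == 0) {
        /* exec() replaces the whole child process, threads and all. */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");        /* reached only if exec() failed */
        _exit(1);
    }
    waitpid(pid, NULL, 0);       /* parent (and its background thread) continue */
    return 0;
}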
Signal Handling
Generally, a signal is used in UNIX systems to notify a process that a particular event has occurred. A signal may be received either synchronously or asynchronously, depending on the source of and the reason for the event being signalled.
All signals, whether synchronous or asynchronous, follow the same pattern as given below −
1. A signal is generated by the occurrence of a particular event.
2. The signal is delivered to a process.
3. Once delivered, the signal must be handled.
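A common way to follow this pattern in a multithreaded program, sketched below under the assumption of POSIX signals and pthreads, is to block the signal in every thread and let one dedicated thread receive and handle it with sigwait(); the choice of SIGINT and the message printed are only illustrative.

#include <pthread.h>
#include <signal.h>
#include <stdio.h>

/* Dedicated thread: step 1 (generation) happens elsewhere, step 2 delivers
   the signal to the process, and step 3 (handling) is done here. */
static void *signal_thread(void *arg)
{
    sigset_t *set = arg;
    int sig;

    sigwait(set, &sig);
    printf("received signal %d, shutting down\n", sig);
    return NULL;
}

int main(void)
{
    sigset_t set;
    pthread_t tid;

    sigemptyset(&set);
    sigaddset(&set, SIGINT);
    /* Block SIGINT in the main thread; threads created afterwards inherit
       this mask, so only the dedicated thread ever handles the signal. */
    pthread_sigmask(SIG_BLOCK, &set, NULL);

    pthread_create(&tid, NULL, signal_thread, &set);

    /* ... other threads do their work undisturbed ... */
    pthread_join(tid, NULL);
    return 0;
}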
Cancellation
Thread cancellation means terminating a thread before it has completed its work. For example, if multiple threads are concurrently searching through a database and one thread returns the result, the remaining threads might be cancelled.
A target thread is a thread that is to be cancelled. Cancellation of a target thread may occur in two different scenarios −
1. Asynchronous cancellation − one thread immediately terminates the target thread.
2. Deferred cancellation − the target thread periodically checks whether it should terminate, allowing it to terminate itself in an orderly fashion.
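A minimal sketch of deferred cancellation with POSIX threads is shown below; the "database search" is only a placeholder loop, and sleep() stands in for a cancellation point at which the target thread checks whether it should terminate.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* The target thread: pretends to search part of a database forever. */
static void *search(void *arg)
{
    (void)arg;
    for (;;) {
        /* ... search a portion of the database ... */
        sleep(1);              /* sleep() is a cancellation point */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, search, NULL);

    sleep(2);                  /* pretend another thread has found the result */
    pthread_cancel(tid);       /* request cancellation of the target thread */
    pthread_join(tid, NULL);   /* reap it; its return status is PTHREAD_CANCELED */
    puts("search thread cancelled");
    return 0;
}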
Thread pools
Consider multithreading in a web server: whenever the server receives a request, it creates a separate thread to service that request. This approach has two problems:
1. Some amount of time is required to create the thread before it can serve the request, and the thread is discarded once it has completed its work.
2. If every concurrent request is serviced in a new thread, there is no bound on the number of threads concurrently active in the system; unlimited threads could exhaust system resources such as CPU time or memory.
The idea behind a thread pool is to create a number of threads at process start-up and place them into a pool, where they sit and wait for work.
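The sketch below, again assuming POSIX threads, shows the idea in miniature: a fixed number of worker threads are created at start-up and service request numbers taken from a shared queue, so no thread has to be created or destroyed per request. The pool size, queue size, and request numbering are invented for this illustration.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define POOL_SIZE   4
#define QUEUE_SIZE 16

static int queue[QUEUE_SIZE];
static int head = 0, tail = 0, count = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

/* Workers sit in the pool and wait for work instead of being created per request. */
static void *worker(void *arg)
{
    long id = (long)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0)
            pthread_cond_wait(&not_empty, &lock);
        int request = queue[head];
        head = (head + 1) % QUEUE_SIZE;
        count--;
        pthread_mutex_unlock(&lock);

        printf("worker %ld servicing request %d\n", id, request);
    }
    return NULL;
}

/* Called when a request arrives: just enqueue it, no thread creation needed. */
static void submit(int request)
{
    pthread_mutex_lock(&lock);
    if (count < QUEUE_SIZE) {
        queue[tail] = request;
        tail = (tail + 1) % QUEUE_SIZE;
        count++;
        pthread_cond_signal(&not_empty);
    }
    pthread_mutex_unlock(&lock);
}

int main(void)
{
    pthread_t pool[POOL_SIZE];
    for (long i = 0; i < POOL_SIZE; i++)
        pthread_create(&pool[i], NULL, worker, (void *)i);

    for (int r = 1; r <= 10; r++)
        submit(r);             /* requests are serviced by the existing pool */

    sleep(1);                  /* give the workers a moment, then exit */
    return 0;
}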