
Process-Management Notes


Unit II – Process Management

Section 2.1 Process

Definition:
A process is basically a program in execution, or an instance of a program in execution. The execution of a process must progress in a sequential fashion.
• A process is not the same as the program code, but a lot more than it.
• A process is an 'active' entity, as opposed to a program, which is considered a 'passive' entity.
• Attributes held by a process include hardware state, memory, CPU, etc.

Process memory is divided into four sections for efficient working:
• The Text section is made up of the compiled program code, read in from non-volatile storage when the program is launched.
• The Data section is made up of the global and static variables, allocated and initialized prior to executing main.
• The Heap is used for dynamic memory allocation, and is managed via calls to new, delete, malloc, free, etc.
• The Stack is used for local variables. Space on the stack is reserved for local variables when they are declared.

Process States:
When a process executes, it passes through different states. These states may differ in different operating systems. In general, a process can be in one of the following five states at a time.

S.N. State & Description
1 Start: This is the initial state when a process is first started/created.
2 Ready: The process is waiting to be assigned to a processor. Ready processes are waiting to have the processor allocated to them by the operating system so that they can run. A process may come into this state after the Start state, or while running, if it is interrupted by the scheduler to assign the CPU to some other process.
3 Running: Once the process has been assigned to a processor by the OS scheduler, the process state is set to running and the processor executes its instructions.
4 Waiting: The process moves into the waiting state if it needs to wait for a resource, such as waiting for user input, or waiting for a file to become available.
5 Terminated or Exit: Once the process finishes its execution, or it is terminated by the operating system, it is moved to the terminated state where it waits to be removed from main memory.

Process State Diagram

Process Control Block (PCB):
• A Process Control Block is a data structure maintained by the Operating System for every process.
• The PCB is identified by an integer process ID (PID).
• A PCB keeps all the information needed to keep track of a process, as listed below in the table –

S.N. Information & Description
1 Process State: The current state of the process, i.e., whether it is ready, running, waiting, etc.
2 Process privileges: This is required to allow/disallow access to system resources.
3 Process ID: Unique identification for each process in the operating system.
4 Pointer: A pointer to the parent process.
5 Program Counter: The program counter is a pointer to the address of the next instruction to be executed for this process.
6 CPU registers: The various CPU registers whose contents need to be stored so the process can resume execution in the running state.
7 CPU Scheduling Information: Process priority and other scheduling information required to schedule the process.
8 Memory management information: This includes information such as the page table, memory limits and segment table, depending on the memory scheme used by the operating system.
9 Accounting information: This includes the amount of CPU time used for process execution, time limits, execution ID, etc.
10 I/O status information: This includes a list of I/O devices allocated to the process.

The architecture of a PCB is completely dependent on the operating system and may contain different information in different operating systems. Here is a simplified diagram of a PCB −

ProcessControlBlock(PCB)Diagram

The PCB is maintained for a process throughout its lifetime, and is deleted once the process terminates.
Process Scheduling:

Definition:
• Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
• Process scheduling is an essential part of multiprogramming operating systems.
• Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

Scheduling Queues:
• Job queue – set of all processes in the system.
• Ready queue – set of all processes residing in main memory, ready and waiting to execute.
• Device queues – set of processes waiting for an I/O device.

Processes migrate among the various queues. A new process is initially put in the ready queue. It waits in the ready queue until it is selected for execution (or dispatched).

Once the process is assigned to the CPU and is executing, one of the following events can occur:

• The process could issue an I/O request, and then be placed in an I/O queue.
• The process could create a new sub-process and wait for its termination.
• The process could be removed forcibly from the CPU, as a result of an interrupt, and be put back in the ready queue.

In the first two cases, the process eventually switches from the waiting state to the ready state, and is then put back in the ready queue. A process continues this cycle until it terminates, at which time it is removed from all queues and has its PCB and resources de-allocated.

Schedulers:

• Schedulers are special system software which handle process scheduling in various ways.
• Their main task is to select the jobs to be submitted into the system and to decide which process to run.

Process schedulers are of three types –

• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler

• Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue.
• Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates the CPU.
• The short-term scheduler is invoked very frequently (milliseconds), so it must be fast.
• The long-term scheduler is invoked very infrequently (seconds, minutes), so it may be slow.
• The long-term scheduler controls the degree of multiprogramming.
• Processes can be described as either:
o I/O-bound process – spends more time doing I/O than computations; many short CPU bursts.
o CPU-bound process – spends more time doing computations; few very long CPU bursts.

Long Term Scheduler:

• It is also called a job scheduler.
• A long-term scheduler determines which programs are admitted to the system for processing.
• When a process changes state from new to ready, this transition is handled by the long-term scheduler.
• It selects processes from the queue and loads them into memory for execution.
• The process is loaded into memory for CPU scheduling.
• It also controls the degree of multiprogramming.
• On some systems, the long-term scheduler may be absent or minimal.
• Time-sharing operating systems have no long-term scheduler.

Short Term Scheduler:

• It is also called the CPU scheduler.
• Its main objective is to increase system performance in accordance with the chosen set of criteria.
• It performs the change of a process from the ready state to the running state.
• The CPU scheduler selects a process among the processes that are ready to execute and allocates the CPU to one of them.
• Short-term schedulers, also known as dispatchers, make the decision of which process to execute next.
• Short-term schedulers are faster than long-term schedulers.

Medium Term Scheduler:

• Medium-term scheduling is a part of swapping.
• It removes processes from memory.
• It reduces the degree of multiprogramming.
• The medium-term scheduler is in charge of handling the swapped-out processes.
• A running process may become suspended if it makes an I/O request.
• A suspended process cannot make any progress towards completion.
• In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage.
• This is called swapping, and the process is said to be swapped out or rolled out.
• Swapping may be necessary to improve the process mix.

Comparison among Schedulers
S.N. | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
1 | It is a job scheduler. | It is a CPU scheduler. | It is a process swapping scheduler.
2 | Speed is lesser than the short-term scheduler. | Speed is fastest among the other two. | Speed is in between the short-term and long-term schedulers.
3 | It controls the degree of multiprogramming. | It provides lesser control over the degree of multiprogramming. | It reduces the degree of multiprogramming.
4 | It is almost absent or minimal in time-sharing systems. | It is also minimal in time-sharing systems. | It is a part of time-sharing systems.
5 | It selects processes from the pool and loads them into memory for execution. | It selects those processes which are ready to execute. | It can re-introduce a process into memory, and its execution can be continued.
Scheduling algorithms OR CPU scheduling algorithms:

• A CPU scheduling algorithm is used to determine which process will use the CPU for execution and which processes to hold or remove from execution.
• The main goal of CPU scheduling algorithms in an OS is to make sure that the CPU is never in an idle state, meaning that the OS has at least one process ready for execution among the available processes in the ready queue.
There are mainly two types of scheduling algorithms:

Preemptive Scheduling:

• Preemptive scheduling is used when a process switches from the running state to the ready state or from the waiting state to the ready state.
• In these algorithms, processes are assigned priorities.
• Whenever a higher-priority process comes in, the lower-priority process that has occupied the CPU is preempted.
• That is, it releases the CPU, and the higher-priority process takes the CPU for its execution.

Non-Preemptive Scheduling:

• Non-preemptive scheduling is used when a process terminates, or when a process switches from the running state to the waiting state.
• In these algorithms, we cannot preempt the process.
• That is, once a process is running on the CPU, it releases the CPU only when it terminates or switches to the waiting state.
Scheduling Criteria in OS:
CPU utilization - The objective of any CPU scheduling algorithm is to keep the CPU as busy as possible and to maximize its usage. In theory, CPU utilization ranges from 0 to 100%, but in real systems it is typically 50 to 90%, depending on the system's load.

Throughput - It is a measure of the work done by the CPU, which is directly proportional to the number of processes being executed and completed per unit of time. It varies depending on the duration or length of the processes.

The following are some important terminologies to know for understanding the scheduling algorithms:

• Arrival Time (AT): This is the time at which a process arrives in the ready queue.
• Completion Time (CT): This is the time at which a process completes its execution.
• Burst Time (BT): This is the time required by a process for CPU execution.
• Turn-Around Time (TAT): This is the difference between completion time and arrival time. It is calculated as:

Turn Around Time = Completion Time – Arrival Time

• Waiting Time (WT): This is the difference between turnaround time and burst time. It is calculated as:

Waiting Time = Turn Around Time – Burst Time

• Throughput: It is the number of processes completing their execution per unit of time.
• Response Time (RT): Response time is the time at which the CPU is allocated to a particular process for the first time. In the case of non-preemptive scheduling, waiting time and response time are generally the same.
• Gantt chart: A Gantt chart is a visualization which helps in scheduling and managing particular tasks in a project. It is used while solving scheduling problems, to see how the CPU is allocated to processes under different algorithms.
Types of CPU Scheduling Algorithms:
There are mainly six types of process scheduling algorithms:

1. First Come First Serve (FCFS)
2. Shortest-Job-First (SJF) Scheduling
3. Priority Scheduling
4. Round Robin Scheduling
5. Multilevel Queue Scheduling
6. Shortest Remaining Time

First Come First Serve:

• FCFS is considered the simplest of all operating system scheduling algorithms.
• The first come first serve scheduling algorithm states that the process that requests the CPU first is allocated the CPU first, and it is implemented using a FIFO queue.

Characteristics of FCFS:

• FCFS is a non-preemptive CPU scheduling algorithm.
• Tasks are always executed on a first-come, first-serve basis.
• FCFS is easy to implement and use.
• This algorithm is not very efficient in performance, and the wait time is quite high.

Advantages:

 Involves no complex logic and just picks processes from the ready queue
one by one.
 Easy to implement and understand.
 Every process will eventually get a chance to run so no starvation occurs.

Disadvantages:

 Waiting time for processes with less execution time is often very long.
 It favors CPU-bound processes over I/O-bound processes.
 Leads to convoy effect.
 Causes lower device and CPU utilization.
 Poor performance as the average wait time is high.

Example:

Consider the given table below and find Completion time (CT), Turn-around
time (TAT), Waiting time (WT), Response time (RT), Average Turn-around time
and Average Waiting time.

Process ID Arrival time Burst time

P1 2 2

P2 5 6

P3 0 4

P4 0 7

P5 7 4

Solution:

Gantt chart:

| P3 | P4 | P1 | P2 | P5 |
0    4    11   13   19   23

For this problem CT, TAT, WT, RT are shown in the given table −

Process ID | Arrival time | Burst time | CT | TAT=CT-AT | WT=TAT-BT | RT

P1 | 2 | 2 | 13 | 13-2 = 11 | 11-2 = 9  | 9
P2 | 5 | 6 | 19 | 19-5 = 14 | 14-6 = 8  | 8
P3 | 0 | 4 | 4  | 4-0 = 4   | 4-4 = 0   | 0
P4 | 0 | 7 | 11 | 11-0 = 11 | 11-7 = 4  | 4
P5 | 7 | 4 | 23 | 23-7 = 16 | 16-4 = 12 | 12
Average Waiting time = (9+8+0+4+12)/5 = 33/5 = 6.6 time unit (time unit
can be considered as milliseconds)

Average Turn-around time = (11+14+4+11+16)/5 = 56/5 = 11.2 time unit


(time unit can be considered as milliseconds)

Problem 2:

Consider the given table below and find Completion time (CT), Turn-around
time (TAT), Waiting time (WT), Response time (RT), Average Turn-around time
and Average Waiting time.

Process ID Arrival time Burst time

P1 2 2

P2 0 1

P3 2 3

P4 3 5

P5 4 5

Solution:

Gantt chart −

| P2 | idle | P1 | P3 | P4 | P5 |
0    1      2    4    7    12   17

For this problem CT, TAT, WT, RT are shown in the given table −

Process ID | Arrival time | Burst time | CT | TAT=CT-AT | WT=TAT-BT | RT

P1 | 2 | 2 | 4  | 4-2 = 2   | 2-2 = 0  | 0
P2 | 0 | 1 | 1  | 1-0 = 1   | 1-1 = 0  | 0
P3 | 2 | 3 | 7  | 7-2 = 5   | 5-3 = 2  | 2
P4 | 3 | 5 | 12 | 12-3 = 9  | 9-5 = 4  | 4
P5 | 4 | 5 | 17 | 17-4 = 13 | 13-5 = 8 | 8

Average Waiting time = (0+0+2+4+8)/5 = 14/5 = 2.8 time unit (time unit can
be considered as milliseconds)

Average Turn-around time = (2+1+5+9+13)/5 = 30/5 = 6 time unit (time unit


can be considered as milliseconds)

*During the idle (not-active) CPU period (here, from time 1 to 2), no process has arrived and is ready, so the CPU remains idle for a short time.

Shortest Job First Scheduling:

This algorithm runs the process with the shortest burst time or duration first.

• This is the best approach to minimize waiting time.
• It is used in batch systems.
• It is of two types:
1. Non Pre-emptive
2. Pre-emptive
• To successfully implement it, the burst time/duration of the processes should be known to the processor in advance, which is practically not feasible all the time.
• This scheduling algorithm is optimal if all the jobs/processes are available at the same time (either arrival time is 0 for all, or arrival time is the same for all).

Non Pre-emptive Shortest Job First:

• Consider a set of processes available in the ready queue for execution, with arrival time 0 for all and given burst times.
• In the corresponding Gantt chart, a process P4 with the shortest burst time would be picked up first, then P2, followed by P3 and at last P1.
• Scheduling the same set of processes using the first come first serve algorithm gives an average waiting time of 18.75 ms, whereas with SJF the average waiting time comes out to 4.5 ms.

Problem with Non Pre-emptive SJF:

• If the arrival times of processes are different, i.e., all the processes are not available in the ready queue at time 0 and some jobs arrive after some time, then a process with a short burst time may have to wait for the current process's execution to finish. This is because in non-pre-emptive SJF, on arrival of a process with a short duration, the existing job/process's execution is not halted/stopped to execute the short job first.
• This leads to the problem of starvation, where a shorter process has to wait for a long time until the current longer process gets executed.
• This happens if shorter jobs keep coming, but it can be solved using the concept of aging.
Pre-emptive Shortest Job First:

• In pre-emptive shortest job first scheduling, jobs are put into the ready queue as they arrive, but when a process with a shorter burst time arrives, the existing process is preempted or removed from execution, and the shorter job is executed first.

Context Switch:
• A context switch is the mechanism to store and restore the state or context of a CPU in a Process Control Block so that a process execution can be resumed from the same point at a later time.
• Using this technique, a context switcher enables multiple processes to share a single CPU.
• Context switching is an essential feature of a multitasking operating system.
• When the scheduler switches the CPU from executing one process to executing another, the state of the currently running process is stored into its process control block.
• After this, the state for the process to run next is loaded from its own PCB and used to set the PC, registers, etc.
• At that point, the second process can start executing.

When the process is switched, the following information is stored for later use:

• Program Counter
• Scheduling information
• Base and limit register values
• Currently used registers
• Changed state
• I/O state information
• Accounting information
Operations on Processes
Process Creation:
• A parent process creates children processes, which, in turn, create other processes, forming a tree of processes.
• Resource sharing options:
o Parent and children share all resources
o Children share a subset of the parent's resources
o Parent and child share no resources
• Execution options:
o Parent and children execute concurrently
o Parent waits until children terminate
• Address space options:
o Child is a duplicate of the parent
o Child has a program loaded into it
• UNIX examples:
o The fork system call creates a new process
o The exec system call is used after a fork to replace the process's memory space with a new program

C Program Forking a Separate Process

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main()
{
    pid_t pid;

    /* fork another process */
    pid = fork();
    if (pid < 0) {            /* error occurred */
        fprintf(stderr, "Fork Failed");
        exit(-1);
    }
    else if (pid == 0) {      /* child process */
        execlp("/bin/ls", "ls", NULL);
    }
    else {                    /* parent process */
        /* parent will wait for the child to complete */
        wait(NULL);
        printf("Child Complete");
        exit(0);
    }
}

Process Termination
• A process executes its last statement and asks the operating system to delete it (exit)
o Output data from child to parent (via wait)
o The process's resources are de-allocated by the operating system
• A parent may terminate execution of children processes (abort) when:
o The child has exceeded allocated resources
o The task assigned to the child is no longer required
o The parent is exiting
 Some operating systems do not allow a child to continue if its parent terminates
 All children terminated - cascading termination
Cooperating Processes

• An independent process cannot affect or be affected by the execution of another process.
• A cooperating process can affect or be affected by the execution of another process.
• Advantages of process cooperation:
o Information sharing
o Computation speed-up
o Modularity
o Convenience

Inter-process Communication:

• Processes executing concurrently in the operating system might be either independent processes or cooperating processes.
• A process is independent if it cannot be affected by the other processes executing in the system.
• Inter-Process Communication (IPC) is a mechanism that involves communication of one process with another process. This usually occurs only within one system.
• Communication can be of two types −
o Between related processes initiating from only one process, such as parent and child processes.
o Between unrelated processes, or two or more different processes.
• Processes can communicate with each other using these two ways:
o Shared Memory
o Message Passing

Shared Memory
• Shared memory is memory that can be simultaneously accessed by multiple processes.
• This is done so that the processes can communicate with each other.
• All POSIX systems, as well as Windows operating systems, use shared memory.
Message Queue
• Multiple processes can read and write data to a message queue without being connected to each other.
• Messages are stored in the queue until their recipient retrieves them.
• Message queues are quite useful for inter-process communication and are used by most operating systems.
• If two processes p1 and p2 want to communicate with each other, they proceed as follows:
o Establish a communication link (if a link already exists, there is no need to establish it again).
o Start exchanging messages using basic primitives.
• We need at least two primitives:
send(message, destination) or send(message)
receive(message, host) or receive(message)
What is a Thread?
• A thread is a flow of execution through the process code, with its own program counter that keeps track of which instruction to execute next, system registers which hold its current working variables, and a stack which contains the execution history.
• A thread shares with its peer threads some information like the code segment, data segment and open files.
• When one thread alters a code segment memory item, all other threads see that.
• A thread is also called a lightweight process.
• Threads provide a way to improve application performance through parallelism.
• Threads represent a software approach to improving operating system performance by reducing overhead; in other respects a thread is equivalent to a classical process.
• Each thread belongs to exactly one process and no thread can exist outside a process. Each thread represents a separate flow of control.
• Threads have been successfully used in implementing network servers and web servers.
• They also provide a suitable foundation for parallel execution of applications on shared-memory multiprocessors.
• The following figure shows the working of a single-threaded and a multithreaded process.

Difference between Process and Thread
S.N. | Process | Thread
1 | Process is heavyweight or resource intensive. | Thread is lightweight, taking fewer resources than a process.
2 | Process switching needs interaction with the operating system. | Thread switching does not need to interact with the operating system.
3 | In multiple processing environments, each process executes the same code but has its own memory and file resources. | All threads can share the same set of open files and child processes.
4 | If one process is blocked, then no other process can execute until the first process is unblocked. | While one thread is blocked and waiting, a second thread in the same task can run.
5 | Multiple processes without using threads use more resources. | Multiple threaded processes use fewer resources.
6 | In multiple processes, each process operates independently of the others. | One thread can read, write or change another thread's data.
Advantages of Threads
• Threads minimize the context switching time.
• Use of threads provides concurrency within a process.
• Efficient communication.
• It is more economical to create and context switch threads.
• Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.

Types of Threads
Threads are implemented in the following two ways −
• User Level Threads − user-managed threads.
• Kernel Level Threads − operating system managed threads acting on the kernel, the operating system core.
User Level Threads
• In this case, the thread management kernel is not aware of the existence of threads.
• The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution and for saving and restoring thread contexts.
• The application starts with a single thread.

Advantages
• Thread switching does not require kernel mode privileges.
• A user level thread can run on any operating system.
• Scheduling can be application specific in the user level thread.
• User level threads are fast to create and manage.

Disadvantages
• In a typical operating system, most system calls are blocking.
• A multithreaded application cannot take advantage of multiprocessing.
Kernel Level Threads
In this case, thread management is done by the kernel. There is no thread management code in the application area. Kernel threads are supported directly by the operating system. Any application can be programmed to be multithreaded. All of the threads within an application are supported within a single process.
The kernel maintains context information for the process as a whole and for individual threads within the process. Scheduling by the kernel is done on a thread basis. The kernel performs thread creation, scheduling and management in kernel space. Kernel threads are generally slower to create and manage than user threads.

Advantages
• The kernel can simultaneously schedule multiple threads from the same process on multiple processors.
• If one thread in a process is blocked, the kernel can schedule another thread of the same process.
• Kernel routines themselves can be multithreaded.

Disadvantages
• Kernel threads are generally slower to create and manage than user threads.
• Transfer of control from one thread to another within the same process requires a mode switch to the kernel.
Multithreading Models
Some operating systems provide a combined user level thread and kernel level thread facility. Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process. Multithreading models are of three types:
• Many to many relationship.
• Many to one relationship.
• One to one relationship.

Many to Many Model
• The many-to-many model multiplexes any number of user threads onto an equal or smaller number of kernel threads.
• The following diagram shows the many-to-many threading model, where 6 user level threads are multiplexed onto 6 kernel level threads.
• In this model, developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor machine.
• This model provides the best level of concurrency, and when a thread performs a blocking system call, the kernel can schedule another thread for execution.

Many to One Model
The many-to-one model maps many user level threads to one kernel level thread. Thread management is done in user space by the thread library. When a thread makes a blocking system call, the entire process is blocked. Only one thread can access the kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.
If user level thread libraries are implemented on an operating system that does not support kernel threads, they use the many-to-one relationship mode.

One to One Model
There is a one-to-one relationship of user level thread to kernel level thread. This model provides more concurrency than the many-to-one model. It also allows another thread to run when a thread makes a blocking system call. It supports multiple threads executing in parallel on multiprocessors.
The disadvantage of this model is that creating a user thread requires creating the corresponding kernel thread. OS/2, Windows NT and Windows 2000 use the one-to-one relationship model.

Difference between User-Level & Kernel-Level Threads
S.N. | User-Level Threads | Kernel-Level Threads
1 | User-level threads are faster to create and manage. | Kernel-level threads are slower to create and manage.
2 | Implementation is by a thread library at the user level. | The operating system supports creation of kernel threads.
3 | A user-level thread is generic and can run on any operating system. | A kernel-level thread is specific to the operating system.
4 | Multi-threaded applications cannot take advantage of multiprocessing. | Kernel routines themselves can be multithreaded.
