Process-Management Notes
Section 2.1 Process
Definition:
A process is basically a program in execution, or an instance of a program being executed. The execution of a process must progress in a sequential fashion.
A process is not the same as its program code; it involves much more than that.
A process is an 'active' entity, as opposed to a program, which is considered a 'passive' entity.
Attributes held by a process include its hardware state, memory, CPU, etc.
Process memory is divided into four sections for efficient working (illustrated in the C sketch after this list):
The Text section is made up of the compiled program code, read in from non-volatile storage when the program is launched.
The Data section is made up of the global and static variables, allocated and initialized prior to executing main.
The Heap is used for dynamic memory allocation, and is managed via calls to new, delete, malloc, free, etc.
The Stack is used for local variables. Space on the stack is reserved for local variables when they are declared.
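As an illustration (a minimal sketch, not part of the original notes), the following C program places data in each of the four sections:

#include <stdio.h>
#include <stdlib.h>

int global_count = 0;            /* Data section: global variable */
static int start_flag = 1;       /* Data section: static variable */

int main(void)                   /* main's machine code lives in the Text section */
{
    int local = 5;                          /* Stack: local variable    */
    int *buf = malloc(4 * sizeof(int));     /* Heap: dynamic allocation */

    if (buf != NULL) {
        buf[0] = global_count + start_flag + local;
        printf("heap value = %d\n", buf[0]);
        free(buf);                          /* heap memory is released explicitly */
    }
    return 0;
}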
Process States:
When a process executes, it passes through different states. These states may differ in different operating systems.
In general, a process can be in one of the following five states at a time (a small C sketch of these states follows the table).
S.N. State & Description
1. Start: This is the initial state, when a process is first started/created.
2. Ready: The process is waiting to be assigned to a processor. Ready processes are waiting to have the processor allocated to them by the operating system so that they can run. A process may come into this state after the Start state, or while running if it is interrupted by the scheduler so that the CPU can be assigned to some other process.
3. Running: Once the process has been assigned to a processor by the OS scheduler, the process state is set to running and the processor executes its instructions.
4. Waiting: The process moves into the waiting state if it needs to wait for a resource, such as waiting for user input or waiting for a file to become available.
5. Terminated or Exit: Once the process finishes its execution, or it is terminated by the operating system, it is moved to the terminated state, where it waits to be removed from main memory.
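A minimal C sketch of these five states (the enum and names are illustrative; real kernels use their own state sets and flags):

#include <stdio.h>

enum proc_state { P_START, P_READY, P_RUNNING, P_WAITING, P_TERMINATED };

static const char *state_name(enum proc_state s)
{
    switch (s) {
    case P_START:      return "Start";
    case P_READY:      return "Ready";
    case P_RUNNING:    return "Running";
    case P_WAITING:    return "Waiting";
    case P_TERMINATED: return "Terminated";
    }
    return "Unknown";
}

int main(void)
{
    enum proc_state s = P_START;    /* process is created          */
    s = P_READY;                    /* admitted to the ready queue */
    s = P_RUNNING;                  /* scheduler allocates the CPU */
    s = P_TERMINATED;               /* process finishes execution  */
    printf("final state: %s\n", state_name(s));
    return 0;
}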
Process Control Block (PCB):
A Process Control Block is a data structure maintained by the operating system for every process.
The PCB is identified by an integer process ID (PID).
A PCB keeps all the information needed to keep track of a process, as listed below in the table:
S.N. Information & Description
1. Process State: The current state of the process, i.e., whether it is ready, running, waiting, etc.
2. Process privileges: This is required to allow/disallow access to system resources.
3. Process ID: Unique identification for each process in the operating system.
4. Pointer: A pointer to the parent process.
5. Program Counter: A pointer to the address of the next instruction to be executed for this process.
6. CPU registers: The various CPU registers whose contents need to be stored when the process leaves the running state, so that execution can later continue correctly.
7. CPU Scheduling Information: Process priority and other scheduling information required to schedule the process.
8. Memory management information: This includes page-table, memory-limit and segment-table information, depending on the memory system used by the operating system.
9. Accounting information: This includes the amount of CPU time used for process execution, time limits, execution ID, etc.
10. I/O status information: This includes the list of I/O devices allocated to the process.
The architecture of a PCB is completely dependent on the operating system and may contain different information in different operating systems. Here is a simplified diagram of a PCB:
[Figure: Process Control Block (PCB) diagram]
The PCB is maintained for a process throughout its lifetime, and is deleted once the process terminates.
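A C-style sketch of such a structure (field names are illustrative assumptions; real operating systems, e.g. Linux's task_struct, keep far more information and architecture-specific register layouts):

#include <stdint.h>

enum proc_state { P_START, P_READY, P_RUNNING, P_WAITING, P_TERMINATED };

struct pcb {
    int             pid;              /* 3. unique process ID                     */
    enum proc_state state;            /* 1. current process state                 */
    unsigned int    privileges;       /* 2. access rights to system resources     */
    struct pcb     *parent;           /* 4. pointer to the parent process         */
    uintptr_t       program_counter;  /* 5. address of the next instruction       */
    uintptr_t       registers[16];    /* 6. saved CPU registers                   */
    int             priority;         /* 7. CPU scheduling information            */
    void           *page_table;       /* 8. memory management information         */
    unsigned long   cpu_time_used;    /* 9. accounting information                */
    int             io_devices[8];    /* 10. I/O devices allocated to the process */
};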
Process Scheduling:
Definition:
Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
Scheduling Queues:
Job queue – the set of all processes in the system.
Ready queue – the set of all processes residing in main memory, ready and waiting to execute.
Device queues – the sets of processes waiting for an I/O device.
Processes migrate among the various queues.
A new process is initially put in the ready queue. It waits in the ready queue until it is selected for execution (or dispatched). Once the process is executing, one of several events could occur:
• The process could issue an I/O request, and then be placed in an I/O queue.
• The process could create a new sub-process and wait for its termination.
• The process could be removed forcibly from the CPU, as a result of an interrupt, and be put back in the ready queue.
In the first two cases, the process eventually switches from the waiting state to the ready state, and is then put back in the ready queue.
Schedulers:
Process schedulers are of three types:
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue.
Short-Term Scheduler:
The short-term scheduler (or CPU scheduler) selects which of the processes that are ready to execute should run next, and allocates the CPU to it.
Medium-Term Scheduler:
Medium-term scheduling is a part of swapping.
It removes processes from memory.
It reduces the degree of multiprogramming.
The medium-term scheduler is in charge of handling the swapped-out processes.
A running process may become suspended if it makes an I/O request.
A suspended process cannot make any progress towards completion.
In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage.
This is called swapping, and the process is said to be swapped out or rolled out.
Swapping may be necessary to improve the process mix.
Comparison among Schedulers
S.N. | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
1 | It is a job scheduler. | It is a CPU scheduler. | It is a process-swapping scheduler.
2 | Its speed is less than that of the short-term scheduler. | Its speed is the fastest among the three. | Its speed is in between the short-term and long-term schedulers.
3 | It controls the degree of multiprogramming. | It provides lesser control over the degree of multiprogramming. | It reduces the degree of multiprogramming.
4 | It is almost absent or minimal in time-sharing systems. | It is also minimal in time-sharing systems. | It is a part of time-sharing systems.
5 | It selects processes from the pool and loads them into memory for execution. | It selects those processes which are ready to execute. | It can re-introduce a process into memory so that its execution can be continued.
Scheduling algorithms OR CPU scheduling algorithms:
Throughput – A measure of the work done by the CPU; it is directly proportional to the number of processes executed and completed per unit of time. It keeps varying, depending on the duration or length of the processes.
Arrival Time (AT) – The time at which a process arrives in the ready queue.
Burst Time (BT) – The time required by a process for CPU execution.
Completion Time (CT) – The time at which a process finishes its execution.
Turnaround Time (TAT) – The time from arrival to completion: TAT = CT − AT.
Waiting Time (WT) – The difference between turnaround time and burst time: WT = TAT − BT.
Response Time (RT) – The time from a process's arrival until the CPU is allocated to it for the first time.
In the case of non-preemptive scheduling, waiting time and response time are generally the same.
Gantt chart – A visualization that helps in scheduling and managing particular tasks in a project. It is used while solving scheduling problems to get an idea of how processes are allocated under different algorithms.
Types of CPU Scheduling Algorithms:
There are mainly six types of process scheduling algorithms; the ones worked through below are First Come First Serve (FCFS) and Shortest Job First (SJF).
First Come First Serve (FCFS):
In FCFS scheduling, the process that arrives first in the ready queue is executed first.
Characteristics of FCFS:
Advantages:
Involves no complex logic and just picks processes from the ready queue
one by one.
Easy to implement and understand.
Every process will eventually get a chance to run so no starvation occurs.
Disadvantages:
Waiting time for processes with less execution time is often very long.
It favors CPU-bound processes over I/O-bound processes.
Leads to the convoy effect (short processes get stuck waiting behind one long process).
Causes lower device and CPU utilization.
Poor performance, as the average wait time is high.
Example:
Consider the table below and find the Completion time (CT), Turn-around time (TAT), Waiting time (WT), Response time (RT), Average Turn-around time and Average Waiting time.
Process | Arrival Time (AT) | Burst Time (BT)
P1 | 2 | 2
P2 | 5 | 6
P3 | 0 | 4
P4 | 0 | 7
P5 | 7 | 4
Solution:
Gantt chart (FCFS order: P3, P4, P1, P2, P5):
| P3 | P4 | P1 | P2 | P5 |
0    4    11   13   19   23
For this problem, CT, TAT, WT and RT are shown in the table below:
Process | AT | BT | CT | TAT = CT − AT | WT = TAT − BT | RT
P1 | 2 | 2 | 13 | 13 − 2 = 11 | 11 − 2 = 9 | 9
P2 | 5 | 6 | 19 | 19 − 5 = 14 | 14 − 6 = 8 | 8
P3 | 0 | 4 | 4 | 4 − 0 = 4 | 4 − 4 = 0 | 0
P4 | 0 | 7 | 11 | 11 − 0 = 11 | 11 − 7 = 4 | 4
P5 | 7 | 4 | 23 | 23 − 7 = 16 | 16 − 4 = 12 | 12
Average Turn-around time = (11 + 14 + 4 + 11 + 16)/5 = 56/5 = 11.2 time units
Average Waiting time = (9 + 8 + 0 + 4 + 12)/5 = 33/5 = 6.6 time units (a time unit can be taken as a millisecond)
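The same computation can be scripted. Below is a small C sketch (not part of the original notes; the FCFS order P3, P4, P1, P2, P5 is supplied directly rather than derived by sorting, to keep it short) that reproduces the table for Problem 1:

#include <stdio.h>

#define N 5

int main(void)
{
    /* Problem 1 data: arrival and burst times for P1..P5 */
    int at[N]    = {2, 5, 0, 0, 7};
    int bt[N]    = {2, 6, 4, 7, 4};
    int order[N] = {2, 3, 0, 1, 4};   /* indices in FCFS order: P3, P4, P1, P2, P5 */
    int ct[N], tat[N], wt[N];
    int clock = 0;
    double sum_tat = 0, sum_wt = 0;

    for (int k = 0; k < N; k++) {
        int i = order[k];
        if (clock < at[i])            /* CPU idles until the process arrives */
            clock = at[i];
        clock += bt[i];               /* run the process to completion       */
        ct[i]  = clock;
        tat[i] = ct[i] - at[i];       /* turnaround time                     */
        wt[i]  = tat[i] - bt[i];      /* waiting time (= response time here) */
        sum_tat += tat[i];
        sum_wt  += wt[i];
    }

    for (int i = 0; i < N; i++)
        printf("P%d: CT=%2d TAT=%2d WT=%2d\n", i + 1, ct[i], tat[i], wt[i]);
    printf("Average TAT = %.1f, Average WT = %.1f\n", sum_tat / N, sum_wt / N);
    return 0;
}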
Problem 2:
Consider the table below and find the Completion time (CT), Turn-around time (TAT), Waiting time (WT), Response time (RT), Average Turn-around time and Average Waiting time.
Process | Arrival Time (AT) | Burst Time (BT)
P1 | 2 | 2
P2 | 0 | 1
P3 | 2 | 3
P4 | 3 | 5
P5 | 4 | 5
Solution:
Gantt chart (the CPU is idle from time 1 to 2, since no process has arrived):
| P2 | idle | P1 | P3 | P4 | P5 |
0    1      2    4    7    12   17
For this problem, CT, TAT, WT and RT are shown in the table below:
Process | AT | BT | CT | TAT = CT − AT | WT = TAT − BT | RT
P1 | 2 | 2 | 4 | 4 − 2 = 2 | 2 − 2 = 0 | 0
P2 | 0 | 1 | 1 | 1 − 0 = 1 | 1 − 1 = 0 | 0
P3 | 2 | 3 | 7 | 7 − 2 = 5 | 5 − 3 = 2 | 2
P4 | 3 | 5 | 12 | 12 − 3 = 9 | 9 − 5 = 4 | 4
P5 | 4 | 5 | 17 | 17 − 4 = 13 | 13 − 5 = 8 | 8
Average Turn-around time = (2 + 1 + 5 + 9 + 13)/5 = 30/5 = 6 time units
Average Waiting time = (0 + 0 + 2 + 4 + 8)/5 = 14/5 = 2.8 time units (a time unit can be taken as a millisecond)
Shortest Job First (SJF):
This algorithm runs the process with the shortest burst time or duration first.
Consider processes P1, P2, P3 and P4 available in the ready queue for execution, with arrival time 0 for all of them and the given burst times.
In the corresponding Gantt chart, the process P4 is picked up first, as it has the shortest burst time, then P2, followed by P3 and at last P1.
Scheduling the same set of processes with the First Come First Serve algorithm gives an average waiting time of 18.75 ms, whereas with SJF the average waiting time comes out to 4.5 ms.
If the arrival times of the processes differ, meaning that not all processes are available in the ready queue at time 0 and some jobs arrive after some time, then a process with a short burst time may have to wait for the current process's execution to finish. This is because in non-preemptive SJF, on the arrival of a process with a short duration, the existing job/process's execution is not halted/stopped to execute the short job first.
This can lead to starvation of longer processes if shorter jobs keep coming, but that can be solved using the concept of aging.
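As a rough illustration (not part of the original notes), the following C sketch simulates non-preemptive SJF by repeatedly picking, among the arrived and unfinished processes, the one with the smallest burst time. The burst times 21, 3, 6 and 2 ms are assumed values, chosen so that they reproduce the figures quoted above (execution order P4, P2, P3, P1; average waiting time 18.75 ms under FCFS and 4.5 ms under SJF):

#include <stdio.h>

#define N 4

int main(void)
{
    int at[N] = {0, 0, 0, 0};          /* sketch assumes all jobs arrive at time 0 */
    int bt[N] = {21, 3, 6, 2};         /* assumed burst times for P1..P4           */
    int done[N] = {0}, ct[N];
    int clock = 0;
    double sum_wt = 0;

    for (int finished = 0; finished < N; finished++) {
        int pick = -1;
        for (int i = 0; i < N; i++)    /* shortest burst among arrived, unfinished jobs */
            if (!done[i] && at[i] <= clock &&
                (pick == -1 || bt[i] < bt[pick]))
                pick = i;
        clock += bt[pick];             /* run the chosen job to completion   */
        ct[pick]   = clock;
        done[pick] = 1;
        sum_wt += ct[pick] - at[pick] - bt[pick];
    }

    for (int i = 0; i < N; i++)
        printf("P%d: CT=%2d WT=%2d\n", i + 1, ct[i], ct[i] - at[i] - bt[i]);
    printf("Average WT = %.2f\n", sum_wt / N);
    return 0;
}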
Pre-emptive Shortest Job First:
In Pre-emptive Shortest Job First scheduling (also known as Shortest Remaining Time First), jobs are put into the ready queue as they arrive, but when a process arrives whose burst time is shorter than the remaining time of the currently running process, the existing process is preempted (removed from execution) and the shorter job is executed first.
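A minimal sketch of the preemptive variant (with assumed arrival and burst times): the scheduler re-checks the remaining times after every time unit and always runs the arrived job with the least time left:

#include <stdio.h>

#define N 3

int main(void)
{
    int at[N] = {0, 1, 2};             /* assumed arrival times for P1..P3   */
    int bt[N] = {8, 4, 1};             /* assumed burst times                */
    int rem[N], ct[N];
    for (int i = 0; i < N; i++) rem[i] = bt[i];

    int left = N;
    for (int clock = 0; left > 0; clock++) {
        int pick = -1;
        for (int i = 0; i < N; i++)    /* shortest remaining time among arrived jobs */
            if (rem[i] > 0 && at[i] <= clock &&
                (pick == -1 || rem[i] < rem[pick]))
                pick = i;
        if (pick == -1) continue;      /* CPU idle for this time unit        */
        if (--rem[pick] == 0) {        /* run the chosen job for one unit    */
            ct[pick] = clock + 1;
            left--;
        }
    }

    for (int i = 0; i < N; i++)
        printf("P%d: CT=%2d TAT=%2d WT=%2d\n",
               i + 1, ct[i], ct[i] - at[i], ct[i] - at[i] - bt[i]);
    return 0;
}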
Context Switch:
A context switch is the mechanism for storing and restoring the state or context of a CPU in the Process Control Block, so that a process's execution can be resumed from the same point at a later time.
Using this technique, a context switcher enables multiple processes to share a single CPU.
Context switching is an essential feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to executing another, the state of the currently running process is stored in its process control block.
After this, the state of the process to run next is loaded from its own PCB and used to set the PC, registers, etc.
At that point, the second process can start executing.
When the process is switched, the following information is stored for later use:
Program Counter
Scheduling information
Base and limit register values
Currently used registers
Changed state
I/O state information
Accounting information
Operations on Processes
Process Creation:
A parent process creates children processes, which, in turn, create other processes, forming a tree of processes.
Resource sharing
o Parent and children share all resources
o Children share a subset of the parent's resources
o Parent and child share no resources
Execution
o Parent and children execute concurrently
o Parent waits until children terminate
Address space
o Child is a duplicate of the parent
o Child has a program loaded into it
UNIX examples
o fork system call creates a new process
o exec system call used after a fork to replace the process' memory space with a new program
C Program Forking a Separate Process

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid;

    /* fork another process */
    pid = fork();
    if (pid < 0) {             /* error occurred */
        fprintf(stderr, "Fork Failed");
        exit(-1);
    }
    else if (pid == 0) {       /* child process */
        execlp("/bin/ls", "ls", NULL);
    }
    else {                     /* parent process */
        /* parent will wait for the child to complete */
        wait(NULL);
        printf("Child Complete");
        exit(0);
    }
}
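Compiled and run on a POSIX system (for example, cc fork_ls.c -o fork_ls && ./fork_ls, where fork_ls.c is a hypothetical file name), the child replaces its memory image with /bin/ls and prints the directory listing, while the parent waits for it and then prints "Child Complete".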
Process Termination
A process executes its last statement and asks the operating system to delete it (exit)
o Output data from child to parent (via wait)
o Process' resources are de-allocated by the operating system
A parent may terminate the execution of children processes (abort) when:
o The child has exceeded allocated resources
o The task assigned to the child is no longer required
o The parent is exiting
Some operating systems do not allow a child to continue if its parent terminates
All children terminated – cascading termination
Cooperating Processes
Inter-process Communication:
Processes executing concurrently in the operating system may be either independent processes or cooperating processes.
A process is independent if it cannot affect, or be affected by, the other processes executing in the system.
Inter-Process Communication (IPC) is a mechanism that involves communication of one process with another process. This usually occurs only within one system.
Communication can be of two types:
Between related processes initiating from only one process, such as parent and child processes.
Between unrelated processes, or two or more different processes.
Processes can communicate with each other in these two ways:
Shared Memory
Message Passing
Shared Memory
Shared memory is memory that can be simultaneously accessed by multiple processes.
This is done so that the processes can communicate with each other.
All POSIX systems, as well as Windows operating systems, use shared memory.
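A minimal POSIX sketch (the object name "/demo_shm", the 4096-byte size and the message text are illustrative assumptions; on some systems it must be linked with -lrt): one process creates and maps a shared-memory object and writes into it, and any other process that maps the same name would see the data:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *name = "/demo_shm";             /* illustrative object name */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0666);
    if (fd < 0) { perror("shm_open"); return 1; }

    ftruncate(fd, 4096);                        /* size the shared region   */
    char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(region, "hello from the writer");    /* visible to every process */
                                                /* that maps "/demo_shm"    */
    printf("wrote: %s\n", region);

    munmap(region, 4096);
    close(fd);
    shm_unlink(name);                           /* remove the object        */
    return 0;
}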
Message Queue
Multiple processes can read and write data to the message queue without being connected to each other.
Messages are stored in the queue until their recipient retrieves them.
Message queues are quite useful for inter-process communication and are used by most operating systems.
If two processes P1 and P2 want to communicate with each other, they proceed as follows:
Establish a communication link (if a link already exists, there is no need to establish it again).
Start exchanging messages using basic primitives.
We need at least two primitives:
send(message, destination) or send(message)
receive(message, host) or receive(message)
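As one concrete way to realize send/receive between related processes, here is a minimal sketch (the message text is an assumption) that uses a POSIX pipe, with the parent acting as the sender and the child as the receiver:

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];                        /* fds[0]: read end, fds[1]: write end */
    char buf[64];

    if (pipe(fds) < 0) { perror("pipe"); return 1; }

    if (fork() == 0) {                 /* child: the receiver                 */
        close(fds[1]);                 /* close the unused write end          */
        ssize_t n = read(fds[0], buf, sizeof(buf) - 1);   /* receive(message) */
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        close(fds[0]);
        return 0;
    }

    close(fds[0]);                     /* parent: the sender                  */
    const char *msg = "hello child";   /* illustrative message                */
    write(fds[1], msg, strlen(msg));   /* send(message)                       */
    close(fds[1]);
    wait(NULL);                        /* wait for the child to finish        */
    return 0;
}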
What is a Thread?
A thread is a flow of execution through the process code, with its own program counter that keeps track of which instruction to execute next, system registers which hold its current working variables, and a stack which contains its execution history.
A thread shares with its peer threads some information, such as the code segment, the data segment and open files. When one thread alters a code-segment memory item, all other threads see that.
A thread is also called a lightweight process.
Threads provide a way to improve application performance through parallelism.
Threads represent a software approach to improving operating-system performance by reducing overhead; in other respects a thread is equivalent to a classical process.
Each thread belongs to exactly one process and no thread can exist outside a process. Each thread represents a separate flow of control.
Threads have been successfully used in implementing network servers and web servers. They also provide a suitable foundation for parallel execution of applications on shared-memory multiprocessors.
The following figure shows the working of a single-threaded and a multithreaded process.
[Figure: single-threaded vs. multithreaded process]
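A minimal pthreads sketch (not part of the original notes; compile with -pthread): two threads run inside one process and share a global counter, with a mutex serializing access to the shared data:

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                       /* shared by all threads      */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);             /* protect the shared counter */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, worker, NULL);   /* two flows of control       */
    pthread_create(&t2, NULL, worker, NULL);   /* within the same process    */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("counter = %ld\n", counter);        /* prints 200000              */
    return 0;
}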
Difference between Process and Thread
S.N. | Process | Thread
1 | Process is heavyweight or resource intensive. | Thread is lightweight, taking fewer resources than a process.
2 | Process switching needs interaction with the operating system. | Thread switching does not need to interact with the operating system.
3 | In multiple processing environments, each process executes the same code but has its own memory and file resources. | All threads can share the same set of open files and child processes.
4 | If one process is blocked, then no other process can execute until the first process is unblocked. | While one thread is blocked and waiting, a second thread in the same task can run.
5 | Multiple processes without using threads use more resources. | Multiple threaded processes use fewer resources.
6 | In multiple processes, each process operates independently of the others. | One thread can read, write or change another thread's data.
Advantages of Thread
Threads minimize the context switching time.
Use of threads provides concurrency within a process.
Efficient communication.
It is more economical to create and context switch threads.
Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.
Types of Thread
Threads are implemented in the following two ways:
User Level Threads – User managed threads.
Kernel Level Threads – Operating System managed threads acting on the kernel, an operating system core.
User Level Threads
In this case, the thread management kernel is not aware of the existence of threads.
The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution and for saving and restoring thread contexts.
The application starts with a single thread.
Advantages
Thread switching does not require Kernel-mode privileges.
User level threads can run on any operating system.
Scheduling can be application specific in the user level thread.
User level threads are fast to create and manage.
Disadvantages
In a typical operating system, most system calls are blocking.
A multithreaded application cannot take advantage of multiprocessing.
Kernel Level Threads
In this case, thread management is done by the Kernel. There is no thread management code in the application area. Kernel threads are supported directly by the operating system. Any application can be programmed to be multithreaded. All of the threads within an application are supported within a single process.
The Kernel maintains context information for the process as a whole and for individual threads within the process. Scheduling by the Kernel is done on a thread basis. The Kernel performs thread creation, scheduling and management in Kernel space. Kernel threads are generally slower to create and manage than user threads.
Advantages
The Kernel can simultaneously schedule multiple threads from the same process on multiple processors.
If one thread in a process is blocked, the Kernel can schedule another thread of the same process.
Kernel routines themselves can be multithreaded.
Disadvantages
Kernel threads are generally slower to create and manage than user threads.
Transfer of control from one thread to another within the same process requires a mode switch to the Kernel.
Multithreading Models
Some operating systems provide a combined user level thread and Kernel level thread facility. Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process. Multithreading models are of three types:
Many to many relationship.
Many to one relationship.
One to one relationship.
Many to Many Model
The many-to-many model multiplexes any number of user threads onto an equal or smaller number of kernel threads.
The following diagram shows the many-to-many threading model, where 6 user level threads are multiplexed with 6 kernel level threads.
In this model, developers can create as many user threads as necessary, and the corresponding Kernel threads can run in parallel on a multiprocessor machine.
This model provides the best accuracy on concurrency, and when a thread performs a blocking system call, the kernel can schedule another thread for execution.
Many to One Model
The many-to-one model maps many user level threads to one Kernel-level thread. Thread management is done in user space by the thread library. When a thread makes a blocking system call, the entire process will be blocked. Only one thread can access the Kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.
If the user-level thread libraries are implemented in the operating system in such a way that the system does not support them, then the Kernel threads use the many-to-one relationship mode.
One to One Model
There is a one-to-one relationship of user-level threads to kernel-level threads. This model provides more concurrency than the many-to-one model. It also allows another thread to run when a thread makes a blocking system call. It supports multiple threads executing in parallel on multiprocessors.
The disadvantage of this model is that creating a user thread requires creating the corresponding Kernel thread. OS/2, Windows NT and Windows 2000 use the one-to-one relationship model.
Difference between User-Level & Kernel-Level Threads
S.N. | User-Level Threads | Kernel-Level Threads
1 | User-level threads are faster to create and manage. | Kernel-level threads are slower to create and manage.
2 | Implementation is by a thread library at the user level. | The operating system supports creation of Kernel threads.