Unit 2 Process Management
CPU Registers
• CPU registers include general-purpose registers, stack pointers, index registers, accumulators,
etc. The number and type of registers depend entirely on the computer architecture.
Memory management information
• This information may include the value of base and limit registers, the page
tables, or the segment tables depending on the memory system used by the
operating system.
• This information is useful for de-allocating the memory when the process
terminates.
Accounting information
• This information includes the amount of CPU and real time used, time limits, job
or process numbers, account numbers etc.
• The process control block also includes CPU-scheduling information, I/O resource-management
information, file-management information, etc.
• The PCB serves as the repository for any information which can vary from
process to process.
• Loader/linker sets flags and registers when a process is created.
• If that process gets suspended, the contents of the registers are saved on a stack
and the pointer to the particular stack frame is stored in the PCB.
• By this technique, the hardware state can be restored so that the process can be
scheduled to run again.
CPU Switch From Process to Process
The process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another
process on the basis of a particular strategy.
Process scheduling is an essential part of a multiprogramming operating system.
Such operating systems allow more than one process to be loaded into executable
memory at a time, and the loaded processes share the CPU using time
multiplexing.
Maximize CPU use, quickly switch processes onto CPU for time sharing.
Process scheduler selects among available processes for next execution on CPU.
• Maintains scheduling queues of processes
• Job queue – set of all processes in the system
• Ready queue – set of all processes residing in main memory, ready and
waiting to execute
• Device queues – set of processes waiting for an I/O device
• Processes migrate among the various queues
Scheduling Queues
Running
• When a new process is created by the operating system, that process enters the system in
the running state.
Non-Running
• Processes that are not running are kept in queue, waiting for their turn to execute.
• Each entry in the queue is a pointer to a particular process.
• The queue is implemented using a linked list. The dispatcher works as follows.
• When a process is interrupted, that process is transferred to the waiting queue.
• If the process has completed or aborted, the process is discarded. In either case, the
dispatcher then selects a process from the queue to execute.
Producer-Consumer Problem
• There are two processes: Producer and Consumer.
• Producer produces some item and Consumer consumes that item.
• The two processes share a common space or memory location known as a buffer where the
item produced by Producer is stored and from which the Consumer consumes the item, if
needed.
• There are two versions of this problem: the first one is known as unbounded buffer problem in
which Producer can keep on producing items and there is no limit on the size of the buffer,
• the second one is known as the bounded buffer problem in which Producer can produce up to a
certain number of items before it starts waiting for Consumer to consume it.
• We will discuss the bounded buffer problem.
Bounded buffer problem
• First, the Producer and the Consumer will share some common memory, then
producer will start producing items.
• If the total produced item is equal to the size of buffer, producer will wait to get it
consumed by the Consumer.
• Similarly, the consumer will first check for the availability of the item.
• If no item is available, Consumer will wait for Producer to produce it.
• If there are items available, Consumer will consume it.
Producer Process Code

item nextProduced;

while (1) {
    // Check if there is no space for production.
    // If so, keep waiting.
    while ((free_index + 1) % buff_max == full_index)
        ;

    shared_buff[free_index] = nextProduced;
    free_index = (free_index + 1) % buff_max;
}
Consumer Process Code

item nextConsumed;

while (1) {
    // Check if there is an available item for consumption.
    // If not, keep waiting for items to be produced.
    while (free_index == full_index)
        ;

    nextConsumed = shared_buff[full_index];
    full_index = (full_index + 1) % buff_max;
}
• In the above code, the Producer starts producing again only when the slot at
(free_index + 1) mod buff_max becomes free, because if it is not free, there are still
items that can be consumed by the Consumer, so there is no need to produce more.
• Similarly, if free_index and full_index point to the same index, there are no items to
consume.
Message Passing Method
• Now, We will start our discussion of the communication between processes via
message passing.
• In this method, processes communicate with each other without using any kind of
shared memory.
• If two processes p1 and p2 want to communicate with each other, they proceed as
follows:
• Establish a communication link (if a link already exists, no need to establish it again.)
• Start exchanging messages using basic primitives.
We need at least two primitives:
• send(message, destination) or send(message)
• receive(message, host) or receive(message)
• The message size can be fixed or variable.
• A fixed-size message is easy for the OS designer but complicated for the
programmer; a variable-size message is easy for the programmer but
complicated for the OS designer.
• A standard message can have two parts: header and body.
• The header part is used for storing message type, destination id, source id,
message length, and control information.
• The control information covers things such as what to do if the receiver runs out of
buffer space, the sequence number, and the priority. Generally, messages are sent in FIFO order.
CPU Scheduler
• Whenever the CPU becomes idle, the operating system must select one of the
processes in the ready queue to be executed.
• The selection process is carried out by the short-term scheduler, or CPU
scheduler.
• The scheduler selects a process from the processes in memory that are ready to
execute and allocates the CPU to that process.
Preemptive Scheduling
CPU-scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for
example, as the result of an I/O request or an invocation of wait() for the
termination of a child process)
2. When a process switches from the running state to the ready state (for
example, when an interrupt occurs)
3. When a process switches from the waiting state to the ready state (for
example, at completion of I/O)
4. When a process terminates
• For situations 1 and 4, there is no choice in terms of scheduling.
• A new process (if one exists in the ready queue) must be selected for execution.
• There is a choice, however, for situations 2 and 3.
• When scheduling takes place only under circumstances 1 and 4, we say that the scheduling
scheme is non-preemptive or cooperative.
• Otherwise, it is preemptive.
• Under non-preemptive scheduling, once the CPU has been allocated to a process, the process
keeps the CPU until it releases the CPU either by terminating or by switching to the waiting
state.
• This scheduling method was used by Microsoft Windows 3.x. Windows 95 introduced
preemptive scheduling, and all subsequent versions of Windows operating systems have
used preemptive scheduling.
• The Mac OS X operating system for the Macintosh also uses preemptive
scheduling; previous versions of the Macintosh operating system relied on
cooperative scheduling.
• Unfortunately, preemptive scheduling can result in race conditions when data are
shared among several processes.
• Consider the case of two processes that share data.
• While one process is updating the data, it is preempted so that the second process
can run.
• The second process then tries to read the data, which are in an inconsistent state.
• Preemption also affects the design of the operating-system kernel.
Scheduling Criteria
CPU utilization.
• We want to keep the CPU as busy as possible. Conceptually, CPU utilization can range from
0 to 100 percent. In a real system, it should range from 40 percent (for a lightly loaded
system) to 90 percent (for a heavily loaded system).
Throughput.
• One measure of work is the number of processes that are completed per time unit, called
throughput. For long processes, this rate may be one process per hour; for short
transactions, it may be ten processes per second.
Turnaround time
• The interval from the time of submission of a process to the time of completion is the
turnaround time. Turnaround time is the sum of the periods spent waiting to get into
memory, waiting in the ready queue, executing on the CPU, and doing I/O.
Waiting time
• Waiting time is the sum of the periods spent waiting in the ready queue.
Response time
• Another measure is the time from the submission of a request until the first
response is produced. This measure, called response time, is the time it takes to
start responding, not the time it takes to output the response.
Scheduling Algorithms
• CPU scheduling deals with the problem of deciding which of the processes in the
ready queue is to be allocated the CPU.
• There are many different CPU-scheduling algorithms.
1. First-Come, First-Served Scheduling
2. Shortest-Job-First Scheduling
3. Priority Scheduling
4. Round-Robin Scheduling
5. Multilevel Queue Scheduling
First-Come, First-Served Scheduling
• Consider the following set of processes that arrive at time 0, with the length of the
CPU burst given in milliseconds:
Process Burst Time
P1 24
P2 3
P3 3
• Find the turnaround time and waiting time of each process.
Solution
• If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get the
result shown in the following Gantt chart, which is a bar chart that illustrates a
particular schedule, including the start and finish times of each of the participating
processes:
Gantt chart

P1 P2 P3
0 24 27 30

• The waiting time is 0 milliseconds for process P1,
• 24 milliseconds for process P2,
• 27 milliseconds for process P3.
• Average waiting time: (0 + 24 + 27)/3 = 17
• If the processes arrive in the order P2, P3, P1 instead, the Gantt chart is:

P2 P3 P1
0 3 6 30

• Waiting time for P1 = 6, P2 = 0, P3 = 3 || Turnaround time for P1 = 30, P2 = 3, P3 = 6
• Average waiting time: (6 + 0 + 3)/3 = 3 || Average TAT = (30 + 3 + 6)/3 = 39/3 = 13
• Much better than the previous case.
• There is a convoy effect as all the other processes wait for the one big
process to get off the CPU.
• This effect results in lower CPU and device utilization than might be
possible if the shorter processes were allowed to go first.
Shortest-Job-First Scheduling
• As an example of preemptive SJF (also called shortest-remaining-time-first scheduling),
consider the following four processes, with the length of the CPU burst given in milliseconds:
Process  Arrival Time  Burst Time
P1       0             8
P2       1             4
P3       2             9
P4       3             5
P1 P2 P4 P1 P3
0 1 5 10 17 26
• Process P1 is started at time 0, since it is the only process in the queue. Process P2 arrives at time
1.
• The remaining time for process P1 (7 milliseconds) is larger than the time required by process P2
(4 milliseconds), so process P1 is preempted, and process P2 is scheduled.
• Here, Waiting Time = (time the CPU is finally allocated) − (time the process last became ready):
• P1 WT = 10 − 1 = 9, P2 WT = 1 − 1 = 0
• P3 WT = 17 − 2 = 15, P4 WT = 5 − 3 = 2
• Average Waiting Time = (9 + 0 + 15 + 2)/4 = 26/4 = 6.5 milliseconds
• A second example with five processes (process table not reproduced here):
• P1 WT = 6, P1 TAT = 16
• P2 WT = 0, P2 TAT = 1
• P3 WT = 16, P3 TAT = 18
• P4 WT = 18, P4 TAT = 19
• P5 WT = 1, P5 TAT = 6
• Average Waiting Time = (6 + 0 + 16 + 18 + 1)/5 = 8.2 || Average Turn Around Time = (16 + 1 + 18 + 19 + 6)/5 = 12
Round-Robin Scheduling
• The round-robin (RR) scheduling algorithm is designed especially for timesharing
systems.
• It is similar to FCFS scheduling, but preemption is added to enable the system to
switch between processes.
• A small unit of time, called a time quantum or time slice, is defined.
• A time quantum is generally from 10 to 100 milliseconds in length.
• The ready queue is treated as a circular queue.
• The CPU scheduler goes around the ready queue, allocating the CPU to each process
for a time interval of up to 1 time quantum.
• To implement RR scheduling, we again treat the ready queue as a FIFO queue of
processes.
• New processes are added to the tail of the ready queue.
• The CPU scheduler picks the first process from the ready queue, sets a timer to
interrupt after 1 time quantum, and dispatches the process.
• One of two things will then happen. The process may have a CPU burst of less than 1
time quantum.
• In this case, the process itself will release the CPU voluntarily.
• If the CPU burst of the currently running process is longer than 1 time quantum, the
timer will go off and will cause an interrupt to the operating system.
• A context switch will be executed, and the process will be put at the tail of the ready
queue.
• The CPU scheduler will then select the next process in the ready queue.
• The average waiting time under the RR policy is often long.
Example
• Consider the following set of processes that arrive at time 0, with the length of the
CPU burst given in milliseconds and time quantum of each process is 4
millisecond :
Process Burst Time
P1 24
P2 3
P3 3
Find waiting time.
Solution
• If we use a time quantum of 4 milliseconds, then process P1 gets the first 4 milliseconds.
• Since it requires another 20 milliseconds, it is preempted after the first time quantum, and
the CPU is given to the next process in the queue, process P2.
• Process P2 does not need 4 milliseconds, so it quits before its time quantum expires.
• The CPU is then given to the next process, process P3.
• Once each process has received 1 time quantum, the CPU is returned to process P1 for an
additional time quantum.
• The resulting RR schedule is as follows:
P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30
Multilevel Feedback Queue
• A process can move between the various queues; aging can be
implemented this way.
• A multilevel-feedback-queue scheduler is defined by the following parameters:
• number of queues
• scheduling algorithms for each queue
• method used to determine when to upgrade a process
• method used to determine when to demote a process
• method used to determine which queue a process will enter when that process
needs service
Example of Multilevel Feedback Queue
Three queues:
• Q0 – RR with time quantum 8 milliseconds
• Q1 – RR with time quantum 16 milliseconds
• Q2 – FCFS
Scheduling:
• A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job
receives 8 milliseconds. If it does not finish in 8 milliseconds, it is moved to queue Q1.
• At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it
still does not complete, it is preempted and moved to queue Q2.
Process Synchronization