4. CPU Scheduling
Basic Concepts
In a system with a single CPU core, only one process can run at a time. Others must wait until the CPU’s core is
free and can be rescheduled. The objective of multiprogramming is to have some process running at all times, to
maximize CPU utilization. The idea is relatively simple. A process is executed until it must wait, typically for the
completion of some I/O request. In a simple computer system, the CPU then just sits idle. All this waiting time is
wasted; no useful work is accomplished.
With multiprogramming, we try to use this time productively. Several processes are kept in memory at one time.
When one process has to wait, the operating system takes the CPU away from that process and gives the CPU to
another process. This pattern continues. Every time one process has to wait, another process can take over use of
the CPU.
Process scheduling is the activity of the process manager that handles the removal of the running process
from the CPU and the selection of another process on the basis of a particular strategy.
The success of CPU scheduling depends on an observed property of processes: process execution consists of a
cycle of CPU execution and I/O wait.
Processes alternate between these two states.
Process execution begins with a CPU burst. That is followed by an I/O burst, which is followed by another CPU
burst, then another I/O burst, and so on.
Eventually, the final CPU burst ends with a system request to terminate execution.
Summary
o Maximum CPU utilization is obtained with multiprogramming
o Several processes are kept in memory at one time
o Every time a running process has to wait, another process can take over use of the CPU
o Scheduling of the CPU is fundamental to operating system design
o Process execution consists of a cycle of a CPU time burst and an I/O time burst (i.e., wait)
o Processes alternate between these two states (i.e., CPU burst and I/O burst)
o Eventually, the final CPU burst ends with a system request to terminate execution
CPU Scheduler
Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be
executed. The selection process is carried out by the CPU scheduler, which selects a process from the processes
in memory that are ready to execute and allocates the CPU to that process.
CPU-scheduling decisions may take place under the following four circumstances:
1. (N) A process switches from the running to the waiting state
2. (P) A process switches from the running to the ready state
3. (P) A process switches from the waiting to the ready state
4. (N) A process switches from the running to the terminated state
When scheduling takes place only under circumstances 1 and 4 (marked N), we say that the scheduling scheme is
non-preemptive: the running process has given up the CPU, so there is no scheduling choice to make.
Scheduling under circumstances 2 and 3 (marked P) is pre-emptive: the scheduler may take the CPU away from the
running process.
Pre-emption: if the newly arrived process has a shorter burst time than the remaining burst time of the currently
running process, then the currently running process is removed from the CPU, the CPU is allocated to the newly
arrived process, and the interrupted process is placed back in the ready queue.
Dispatcher
The dispatcher is the module that gives control of the CPU's core to the process selected by the CPU scheduler.
This function involves the following:
• Switching context from one process to another
• Switching to user mode
• Jumping to the proper location in the user program to resume that program
The dispatcher should be as fast as possible, since it is invoked during every context switch.
Scheduling Criteria
Different CPU-scheduling algorithms have different properties, and the choice of a particular algorithm may favor
one class of processes over another.
In choosing which algorithm to use in a particular situation, we must consider the properties of the various
algorithms.
Many criteria have been suggested for comparing CPU-scheduling algorithms. Which characteristics are used for
comparison can make a substantial difference in which algorithm is judged to be best.
• CPU utilization. We want to keep the CPU as busy as possible. Conceptually, CPU utilization can range from 0
to 100 percent. In a real system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a
heavily loaded system). (CPU utilization can be obtained by using the top command on Linux, macOS, and UNIX
systems.)
• Throughput. If the CPU is busy executing processes, then work is being done. One measure of work is the
number of processes that are completed per time unit, called throughput. For long processes, this rate may be one
process over several seconds; for short transactions, it may be tens of processes per second.
• Turnaround time. From the point of view of a particular process, the important criterion is how long it takes to
execute that process. The interval from the time of submission of a process to the time of completion is the
turnaround time. Turnaround time is the sum of the periods spent waiting in the ready queue, executing on the
CPU, and doing I/O. In short: the amount of time to execute a particular process, from the time of submission
through the time of completion.
• Waiting time. The CPU-scheduling algorithm does not affect the amount of time during which a process
executes or does I/O. It affects only the amount of time that a process spends waiting in the ready queue. Waiting
time is the sum of the periods spent waiting in the ready queue. In short: the total time a process spends waiting
in the ready queue.
• Response time. In an interactive system, turnaround time may not be the best criterion. Often, a process can
produce some output fairly early and can continue computing new results while previous results are being output
to the user. Thus, another measure is the time from the submission of a request until the first response is
produced. This measure, called response time, is the time it takes to start responding, not the time it takes to
output the response. In short: the amount of time from when a request is submitted until the first response occurs.
Response time = Time at which the process first gets the CPU – Arrival time
Scheduling Algorithms
First-Come, First-Served Scheduling
By far the simplest CPU-scheduling algorithm is the first-come, first-served (FCFS) scheduling algorithm. With
this scheme, the process that requests the CPU first is allocated the CPU first. The implementation of the FCFS
policy is easily managed with a FIFO queue. When a process enters the ready queue, its PCB is linked onto the
tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue. The running
process is then removed from the queue. The code for FCFS scheduling is simple to write and understand.
Eg. Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in
milliseconds:

Process  Burst Time
P1       24
P2       3
P3       3
If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get the result shown in the
following Gantt chart, which is a bar chart that illustrates a particular schedule, including the start and finish times
of each of the participating processes:
| P1                                            | P2 | P3 |
0                                               24   27   30
The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27 milliseconds for process
P3. Thus, the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds.
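The FCFS computation above is easy to check mechanically. The following is a minimal sketch in Python (not from
the original notes): it serves processes in arrival order and accumulates each one's waiting time; the process
names and burst values reproduce the example above.

def fcfs(processes):
    """processes: list of (name, burst_time) in arrival order, all arriving at t = 0."""
    clock = 0
    results = {}
    for name, burst in processes:
        waiting = clock               # time already spent in the ready queue
        clock += burst                # non-preemptive: the process runs to completion
        results[name] = {"waiting": waiting, "turnaround": clock}
    return results

stats = fcfs([("P1", 24), ("P2", 3), ("P3", 3)])
avg_wait = sum(s["waiting"] for s in stats.values()) / len(stats)
print(stats)       # P1: wait 0, P2: wait 24, P3: wait 27
print(avg_wait)    # 17.0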
Eg. Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in
milliseconds (the process table is not recoverable from the source):

| P1 | P2 | ? | P3 | P4 |
0    2    4   5    8    12
(one segment label was lost in extraction)
Eg. Consider the following set of processes with the length of the CPU burst given in milliseconds (the process
table is not recoverable from the source):

| P2 | ? | P1 | P3 | P4 | P5 |
0    1   2    4    7    12   16
(one segment label was lost in extraction)
Turnaround time = Completion time – Arrival time
(Turnaround time = Waiting time + Burst time)
Waiting time = Turnaround time – Burst time
Response time = Time at which the process first gets the CPU – Arrival time
Shortest-Job-First Scheduling
A different approach to CPU scheduling is the shortest-job-first (SJF) scheduling algorithm. This algorithm
associates with each process the length of the process’s next CPU burst. When the CPU is available, it is assigned
to the process that has the smallest next CPU burst. If the next CPU bursts of two processes are the same, FCFS
scheduling is used to break the tie. Note that a more appropriate term for this scheduling method would be the
shortest-next-CPU-burst algorithm, because scheduling depends on the length of the next CPU burst of a process,
rather than its total length. We use the term SJF because most people and textbooks use this term to refer to this
type of scheduling.
As an example of SJF scheduling, consider the following set of processes, with the length of the CPU burst given
in milliseconds:
(Non-preemptive)
Process Burst Time
P1 6
P2 8
P3 7
P4 3
Using SJF scheduling, we would schedule these processes according to the following Gantt chart:
| P4 | P1 | P3 | P2 |
0    3    9    16   24

The waiting time is 3 milliseconds for P1, 16 milliseconds for P2, 9 milliseconds for P3, and 0 milliseconds for
P4. Thus, the average waiting time is (3 + 16 + 9 + 0)/4 = 7 milliseconds.
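A short sketch of non-preemptive SJF, assuming all processes arrive at time 0: sort by burst length, breaking
ties in FCFS (list) order, then serve as in FCFS. The function name and data layout are illustrative, not from
the notes.

def sjf(processes):
    """processes: list of (name, burst); all arrive at t = 0.
    Sort by burst, break ties by original position (FCFS)."""
    order = sorted(enumerate(processes), key=lambda e: (e[1][1], e[0]))
    clock, waits = 0, {}
    for _, (name, burst) in order:
        waits[name] = clock           # waiting time = start time here
        clock += burst
    return waits

waits = sjf([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)])
print(waits)   # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16} -> average 7.0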
Eg. Consider the following set of processes with the length of the CPU burst given in milliseconds (the process
table is not recoverable from the source):

| P4 | P1 | P3 | P5 | P2 |
0    6    7    8    11   16
SRTF (Shortest Remaining Time First): the pre-emptive version of SJF
Eg. Consider the following set of processes with the length of the CPU burst given in milliseconds (the process
table is not recoverable from the source):

| P4 | P4 | P1 | P5 | P3 | P5 | P4 | P2 |
0    1    2    3    4    5    7    11   16

Eg. Consider the following set of processes, with arrival times and the length of the CPU burst given in
milliseconds:

Process  Arrival Time  Burst Time
P1       0             8
P2       1             4
P3       2             9
P4       3             5

| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26

P1 is started at time 0 but is preempted at time 1, when P2 arrives with a burst (4 ms) shorter than P1's
remaining time (7 ms). The average waiting time is ((10 − 1) + 0 + (17 − 2) + (5 − 3))/4 = 26/4 = 6.5 milliseconds.
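The preemptive behaviour can be simulated one millisecond at a time: at every tick, run the arrived process with
the least remaining time. A minimal sketch using the arrival/burst table above; the millisecond-granularity loop
and the name-based tie-break are simplifications of this sketch, not part of the notes.

def srtf(processes):
    """processes: dict name -> (arrival, burst). Preemptive SJF: at each
    time unit, run the arrived process with the least remaining time."""
    remaining = {n: b for n, (a, b) in processes.items()}
    finish, t = {}, 0
    while remaining:
        ready = [n for n in remaining if processes[n][0] <= t]
        if not ready:
            t += 1                     # CPU idle until the next arrival
            continue
        cur = min(ready, key=lambda n: (remaining[n], n))
        remaining[cur] -= 1            # run the chosen process for 1 ms
        t += 1
        if remaining[cur] == 0:
            del remaining[cur]
            finish[cur] = t
    return finish

procs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}
print(srtf(procs))   # {'P2': 5, 'P4': 10, 'P1': 17, 'P3': 26}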
Round-Robin Scheduling
The round-robin (RR) scheduling algorithm is similar to FCFS scheduling, but preemption is added to
enable the system to switch between processes.
A small unit of time, called a time quantum or time slice, is defined.
A time quantum is generally from 10 to 100 milliseconds in length.
The ready queue is treated as a circular queue.
The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of
up to 1 time quantum.
To implement RR scheduling, we again treat the ready queue as a FIFO queue of processes. New
processes are added to the tail of the ready queue.
The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after 1 time
quantum, and dispatches the process.
Consider again the three processes from the FCFS example (P1 = 24 ms, P2 = 3 ms, P3 = 3 ms), all arriving at
time 0. If we use a time quantum of 4 milliseconds, then process P1 gets the first 4 milliseconds. Since it
requires another 20 milliseconds, it is preempted after the first time quantum, and the CPU is given to the next
process in the queue, process P2.
Process P2 does not need 4 milliseconds, so it quits before its time quantum expires.
The CPU is then given to the next process, process P3.
Once each process has received 1 time quantum, the CPU is returned to process P1 for an additional time
quantum.
| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30
Consider the following set of processes, with the arrival time and the length of the CPU burst given in
milliseconds, and a time slice of 3 ms:

Process  Arrival Time  Burst Time
P1       0             8
P2       5             2
P3       1             7
P4       6             3
P5       8             5
| P1 | P3 | P1 | P2 | P4 | P3 | P5 | P1 | P3 | P5 |
0    3    6    9    11   14   17   20   22   23   25
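A sketch of round-robin with arrivals, reproducing the chart above for a 3 ms quantum. One design choice is an
assumption of this sketch: processes that arrive during a time slice join the ready queue before the preempted
process rejoins the tail; a different tie-break would produce a different, but still valid, RR schedule.

from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, arrival, burst), sorted by arrival time.
    Returns the Gantt chart as (name, start, end) slices."""
    remaining = {n: b for n, a, b in processes}
    arrivals = deque(processes)
    ready, chart, t = deque(), [], 0
    while remaining:
        while arrivals and arrivals[0][1] <= t:     # admit new arrivals
            ready.append(arrivals.popleft()[0])
        if not ready:
            t = arrivals[0][1]                      # CPU idle: jump ahead
            continue
        cur = ready.popleft()
        run = min(quantum, remaining[cur])
        chart.append((cur, t, t + run))
        t += run
        remaining[cur] -= run
        while arrivals and arrivals[0][1] <= t:     # arrivals during the slice
            ready.append(arrivals.popleft()[0])     # ...enter before cur rejoins
        if remaining[cur] > 0:
            ready.append(cur)                       # preempted: back to the tail
        else:
            del remaining[cur]
    return chart

chart = round_robin([("P1", 0, 8), ("P3", 1, 7), ("P2", 5, 2),
                     ("P4", 6, 3), ("P5", 8, 5)], 3)
print(chart)   # P1 0-3, P3 3-6, P1 6-9, P2 9-11, P4 11-14, P3 14-17, ...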
Homework
Consider the following set of processes, with the arrival time and the length of the CPU burst given in
milliseconds, and a time slice of 2 ms:

Process  Arrival Time  Burst Time
P1       0             5
P2       1             4
P3       2             2
P4       4             1

| P1 | P2 | P3 | P1 | P4 | P2 | P1 |
0    2    4    6    8    9    11   12
Priority Scheduling
As an example, consider the following set of processes, assumed to have arrived at time 0 in the order P1, P2, ···,
P5, with the length of the CPU burst given in milliseconds (here a lower priority number means higher priority):

Process  Burst Time  Priority
P1       10          3
P2       1           1
P3       2           4
P4       1           5
P5       5           2

Using priority scheduling, we would schedule these processes according to the following Gantt chart:

| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19

The average waiting time is (6 + 0 + 16 + 18 + 1)/5 = 8.2 milliseconds.
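A minimal sketch of non-preemptive priority scheduling over the table above (lower number = higher priority,
ties broken in arrival order). The function name and representation are illustrative.

def priority_schedule(processes):
    """processes: list of (name, burst, priority); all arrive at t = 0.
    Lower priority number runs first; ties broken by list position."""
    order = sorted(enumerate(processes), key=lambda e: (e[1][2], e[0]))
    clock, waits = 0, {}
    for _, (name, burst, _prio) in order:
        waits[name] = clock           # waiting time = start time here
        clock += burst
    return waits

waits = priority_schedule([("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4),
                           ("P4", 1, 5), ("P5", 5, 2)])
print(waits)   # P2: 0, P5: 1, P1: 6, P3: 16, P4: 18 -> average 8.2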
A second example (the corresponding process table is not recoverable from the source):

| P1 | P5 | P7 | P2 | P3 | P4 | P6 |
0    8    14   15   17   21   22   27
With both priority and round-robin scheduling, all processes may be placed in a single queue, and the
scheduler then selects the process with the highest priority to run. Depending on how the queues are
managed, an O(n) search may be necessary to determine the highest-priority process. In practice, it is
often easier to have separate queues for each distinct priority, and priority scheduling simply schedules the
process in the highest-priority queue (see the sketch after this paragraph). This approach, known as multilevel
queue, also works well when priority scheduling is combined with round-robin: if there are multiple
processes in the highest-priority queue, they are executed in round-robin order. In the most generalized
form of this approach, a priority is assigned statically to each process, and a process remains in the same
queue for the duration of its runtime.
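The per-priority-queue idea can be sketched as follows; the class and method names are illustrative, not from
the notes. Selecting the next process scans the priority levels rather than every process, and round-robin order
within a level falls out of the FIFO queues.

from collections import deque

class MultilevelQueue:
    """One FIFO queue per priority level (0 = highest). Picking the next
    process scans levels, not every process in the system."""
    def __init__(self, levels):
        self.queues = [deque() for _ in range(levels)]

    def add(self, process, priority):
        self.queues[priority].append(process)

    def pick_next(self):
        for q in self.queues:          # highest-priority non-empty queue wins
            if q:
                return q.popleft()     # FIFO within a level gives round-robin
        return None                    # nothing ready to run

mlq = MultilevelQueue(4)
mlq.add("batch_job", 3)
mlq.add("editor", 2)
mlq.add("kernel_task", 1)
print(mlq.pick_next())   # kernel_task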
A multilevel queue scheduling algorithm can also be used to partition processes into several separate
queues based on the process type. For example, a common division is made between foreground
(interactive) processes and background (batch) processes. These two types of processes have different
response-time requirements and so may have different scheduling needs. In addition, foreground processes
may have priority (externally defined) over background processes. Separate queues might be used for
foreground and background processes, and each queue might have its own scheduling algorithm. The
foreground queue might be scheduled by an RR algorithm, for example, while the background queue is
scheduled by an FCFS algorithm. In addition, there must be scheduling among the queues, which is
commonly implemented as fixed-priority preemptive scheduling. For example, the real-time queue may
have absolute priority over the interactive queue.
Let’s look at an example of a multilevel queue scheduling algorithm with four queues, listed below in
order of priority:
1. Real-time processes
2. System processes
3. Interactive processes
4. Batch processes
Each queue has absolute priority over lower-priority queues. No process in the batch queue, for example,
could run unless the queues for real-time processes, system processes, and interactive processes were all
empty. If an interactive process entered the ready queue while a batch process was running, the batch
process would be preempted.
Eg. Consider four processes under multilevel queue scheduling. Queue number denotes the queue of the
process. Priority of queue 1 is greater than queue 2. Queue 1 uses Round Robin (Time quantum = 2ms)
and queue 2 uses FCFS.
Gantt chart (the per-process queue assignments and burst times are not recoverable from the source):

| P1 | P2 | P1 | P2 | P3 | P4 | P4 | P4 | P3 |
0    2    4    6    7    10   12   14   15   20
Multilevel Feedback Queue Scheduling
Normally, when the multilevel queue scheduling algorithm is used, processes are permanently assigned to
a queue when they enter the system. If there are separate queues for foreground and background
processes, for example, processes do not move from one queue to the other, since processes do not change
their foreground or background nature. This setup has the advantage of low scheduling overhead, but it is
inflexible.
The multilevel feedback queue scheduling algorithm, in contrast, allows a process to move between
queues. The idea is to separate processes according to the characteristics of their CPU bursts. If a process
uses too much CPU time, it will be moved to a lower-priority queue. This scheme leaves I/O-bound and
interactive processes—which are typically characterized by short CPU bursts —in the higher-priority
queues. In addition, a process that waits too long in a lower-priority queue may be moved to a higher-
priority queue. This form of aging prevents starvation.
For example, consider a multilevel feedback queue scheduler with three queues, numbered from 0 to 2.
The scheduler first executes all processes in queue 0.
Only when queue 0 is empty will it execute processes in queue 1.
Similarly, processes in queue 2 will be executed only if queues 0 and 1 are empty.
A process that arrives for queue 1 will preempt a process in queue 2.
A process in queue 1 will in turn be preempted by a process arriving for queue 0.
An entering process is put in queue 0. A process in queue 0 is given a time quantum of 8 milliseconds. If it
does not finish within this time, it is moved to the tail of queue 1. If queue 0 is empty, the process at the
head of queue 1 is given a quantum of 16 milliseconds. If it does not complete, it is preempted and is put
into queue 2. Processes in queue 2 are run on an FCFS basis but are run only when queues 0 and 1 are
empty. To prevent starvation, a process that waits too long in a lower-priority queue may gradually be
moved to a higher-priority queue.
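A simplified sketch of the three-queue feedback scheme just described (quanta of 8 and 16 ms, FCFS at the
bottom). It assumes all processes arrive at time 0 and omits both the preemption of lower queues by new arrivals
and the aging upgrade, so it shows only the demotion path.

from collections import deque

def mlfq(processes, quanta=(8, 16)):
    """processes: list of (name, burst), all arriving at t = 0.
    Queues 0 and 1 are round-robin with the given quanta; a process that
    exhausts its quantum is demoted. Queue 2 runs FCFS to completion."""
    queues = [deque(processes), deque(), deque()]
    t, chart = 0, []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)
        name, rem = queues[level].popleft()
        run = rem if level == 2 else min(quanta[level], rem)
        chart.append((name, level, t, t + run))
        t += run
        if rem - run > 0:                        # quantum expired: demote
            queues[level + 1].append((name, rem - run))
    return chart

for time_slice in mlfq([("A", 30), ("B", 6), ("C", 20)]):
    print(time_slice)   # A runs 8 ms in queue 0, then 16 in queue 1, then 6 in queue 2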
4.3 Deadlock
Deadlock is a situation where a set of processes are blocked because each process is holding a resource
and waiting for another resource acquired by some other process.
Deadlock is a situation that occurs in an operating system when a process enters a waiting state because another
waiting process is holding the resource it demands. Deadlock is a common problem in multiprocessing, where
several processes share a specific type of mutually exclusive resource.
A deadlock happens in operating system when two or more processes need some resource to complete
their execution that is held by the other process.
For example, suppose process 1 holds resource 1 and needs to acquire resource 2, while process 2 holds resource 2
and needs to acquire resource 1. Process 1 and process 2 are in deadlock, as each of them needs the other's
resource to complete its execution but neither of them is willing to relinquish its own.
Necessary Conditions
Deadlock can arise only if the following four conditions hold simultaneously:
Mutual Exclusion
At least one resource must be held in a non-sharable mode; that is, only one process at a time can use the
resource.
Hold and Wait
A process must be holding at least one resource while waiting to acquire additional resources that are currently
held by other processes.
No Preemption
A resource cannot be preempted from a process by force. A process can only release a resource voluntarily. In the
diagram below, Process 2 cannot preempt Resource 1 from Process 1. It will only be released when Process 1
relinquishes it voluntarily after its execution is complete.
Circular Wait
A process is waiting for the resource held by the second process, which is waiting for the resource held by the
third process and so on, till the last process is waiting for a resource held by the first process. This forms a
circular chain. For example: Process 1 is allocated Resource2 and it is requesting Resource 1. Similarly, Process 2
is allocated Resource 1 and it is requesting Resource 2. This forms a circular wait loop.
Deadlocks can be described more precisely in terms of a directed graph called a system resource-allocation graph.
This graph consists of a set of vertices V and a set of edges E.
The set of vertices V is partitioned into two different types of nodes:
P = {P1, P2, … Pn}, the set consisting of all the active processes in the system, and
R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system.
A directed edge from process Pi to resource type Rj is denoted by Pi → Rj; it signifies that process Pi has
requested an instance of resource type Rj and is currently waiting for that resource.
A directed edge from resource type Rj to process Pi is denoted by Rj → Pi; it signifies that an instance of
resource type Rj has been allocated to process Pi.
A directed edge Pi→ Rj is called a request edge; a directed edge Rj → Pi is called an assignment edge.
Pictorially, we represent each process Pi as a circle and each resource type Rj as a rectangle.
Note that a request edge points only to the rectangle Rj , whereas an assignment edge must also designate one of
the dots in the rectangle.
When process Pi requests an instance of resource type Rj, a request edge is inserted in the resource-allocation
graph. When this request can be fulfilled, the request edge is instantaneously transformed to an assignment edge.
When the process no longer needs access to the resource, it releases the resource; as a result, the assignment
edge is deleted.
• Resource instances:
◦ One instance of resource type R1
◦ Two instances of resource type R2
◦ One instance of resource type R3
◦ Three instances of resource type R4
• Process states:
◦ Process P1 is holding an instance of resource type R2 and is waiting for an instance of resource type R1.
◦ Process P2 is holding an instance of R1 and an instance of R2 and is waiting for an instance of R3.
◦ Process P3 is holding an instance of R3.
Given the definition of a resource-allocation graph, it can be shown that, if the graph contains no cycles, then no
process in the system is deadlocked. If the graph does contain a cycle, then a deadlock may exist.
If each resource type has exactly one instance, then a cycle implies that a deadlock has occurred.
If the cycle involves only a set of resource types, each of which has only a single instance, then a deadlock has
occurred.
Each process involved in the cycle is deadlocked. In this case, a cycle in the graph is both a necessary and a
sufficient condition for the existence of deadlock.
If each resource type has several instances, then a cycle does not necessarily imply that a deadlock has occurred.
In this case, a cycle in the graph is a necessary but not a sufficient condition for the existence of deadlock.
Suppose that Process P3 requests an instance of resource type R2. Since no resource instance is currently
available, we add a request edge P3 → R2 to the graph.
At this point, two minimal cycles exist in the system:
P1 → R1 → P2 → R3 → P3 → R2 → P1
P2 → R3 → P3 → R2 → P2
Processes P1, P2, and P3 are deadlocked. Process P2 is waiting for the resource R3, which is held by process P3.
Process P3 is waiting for either process P1 or process P2 to release resource R2. In addition, Process P1 is waiting
for Process P2 to release resource R1.
Now consider the resource-allocation graph in the figure below. In this example, we also have a cycle:
P1 → R1 → P3 → R2 → P1
There is no deadlock. Observe that process P4 may release its instance of resource type R2. That resource can
then be allocated to P3, breaking the cycle.
In summary, if a resource-allocation graph does not have a cycle, then the system is not in a deadlocked state. If
there is a cycle, then the system may or may not be in a deadlocked state.
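Deadlock detection on a single-instance resource-allocation graph therefore reduces to cycle detection. Below is
a sketch using depth-first search with a recursion stack; the graph encoding (one adjacency list holding both
request and assignment edges, keyed by process and resource names) is illustrative. With multi-instance
resources, remember that a cycle is only a necessary condition.

def has_cycle(graph):
    """graph: dict node -> list of successor nodes (request edges P->R and
    assignment edges R->P together). DFS with a gray 'in progress' mark."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}

    def visit(n):
        color[n] = GRAY
        for m in graph.get(n, []):
            if color.get(m, WHITE) == GRAY:
                return True                  # back edge: cycle found
            if color.get(m, WHITE) == WHITE and visit(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

# The deadlocked graph from the text: P1 -> R1 -> P2 -> R3 -> P3 -> R2 -> P1
rag = {"P1": ["R1"], "R1": ["P2"], "P2": ["R3"], "R3": ["P3"],
       "P3": ["R2"], "R2": ["P1", "P2"]}
print(has_cycle(rag))   # True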
1. Deadlock Ignorance
Deadlock ignorance is the most widely used approach of all these mechanisms. It is used by many operating
systems, mainly for end-user machines. In this approach, the operating system assumes that deadlock never
occurs; it simply ignores deadlock. This approach is best suited to a single-user system where the user uses the
machine only for browsing and other everyday tasks.
There is always a trade-off between correctness and performance, and operating systems like Windows and Linux
mainly focus on performance. The performance of the system decreases if it runs a deadlock-handling mechanism
all the time; if deadlock happens, say, 1 time out of 100, then running the deadlock-handling mechanism
constantly is unnecessary.
In these types of systems, the user simply has to restart the computer in the case of deadlock. Windows and
Linux mainly use this approach.
2. Deadlock prevention
Deadlock happens only when mutual exclusion, hold and wait, no preemption, and circular wait hold
simultaneously. If it is possible to violate one of the four conditions at any time, then deadlock can never
occur in the system.
The idea behind the approach is simple: we have to defeat at least one of the four conditions.
3. Deadlock avoidance
In deadlock avoidance, the operating system checks whether the system is in a safe state or an unsafe state at
every step it performs. Allocation continues only as long as the system remains in a safe state; an allocation
that would move the system to an unsafe state is refused.
In simple words, the OS reviews each allocation so that the allocation doesn't cause a deadlock in the system.
(By contrast, deadlock detection lets processes fall into deadlock and then periodically checks whether a
deadlock has occurred in the system; if it has, recovery methods are applied to get rid of the deadlock.)
To ensure that deadlocks never occur, the system can use either deadlock prevention or a deadlock-avoidance
scheme. Deadlock prevention provides a set of methods to ensure that at least one of the necessary conditions
cannot hold.
Deadlock avoidance requires that the operating system be given additional information in advance concerning
which resources a thread will request and use during its lifetime.
Deadlock Prevention
1. Mutual Exclusion
Mutual exclusion, from the resource point of view, is the fact that a resource can never be used by more than one
process simultaneously. That is fair enough, but it is the main reason behind deadlock: if a resource could be
used by more than one process at the same time, no process would ever have to wait for it.
2. Hold and Wait
The hold-and-wait condition arises when a process holds one resource while waiting for some other resource to
complete its task. Deadlock occurs because there can be more than one process holding one resource and waiting
for another in a cyclic order.
3. No Preemption
Deadlock arises partly because a resource cannot be taken away from a process once it has been allocated.
However, if we allow the system to take a resource away from a process that is contributing to a deadlock, we
can prevent the deadlock.
4. Circular Wait
To violate circular wait, we can assign an ordering number to each resource and require that a process request
resources only in increasing order of these numbers; a process can never request a resource with a lower number
than one it already holds. With such an ordering, no cycle of waiting processes can form, as the sketch below
illustrates.
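As a concrete illustration, here is a hypothetical two-lock sketch using Python threads (the lock names and
numbering are assumptions of this sketch): both workers acquire the locks in the same global order, so the
hold-one-wait-for-the-other cycle cannot arise.

import threading

lock_a = threading.Lock()   # resource number 1
lock_b = threading.Lock()   # resource number 2

def worker():
    # Both threads acquire in the order 1 then 2. A circular wait would
    # require someone to hold 2 while waiting for 1, which the ordering forbids.
    with lock_a:
        with lock_b:
            pass   # use both resources here

t1 = threading.Thread(target=worker)
t2 = threading.Thread(target=worker)
t1.start(); t2.start(); t1.join(); t2.join()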
Deadlock avoidance
In deadlock avoidance, the request for any resource will be granted if the resulting state of the system doesn't
cause deadlock in the system. The state of the system will continuously be checked for safe and unsafe states. The
resource allocation state of a system can be defined by the instances of available and allocated resources, and the
maximum instance of the resources demanded by the processes.
Banker’s Algorithm
Banker's algorithm is a deadlock avoidance algorithm. It is named so because this algorithm is used in banking
systems to determine whether a loan can be granted or not.
Consider there are n account holders in a bank, and the sum of the money the bank may have to pay out to them is
S. Every time a loan has to be granted, the bank subtracts the loan amount from the cash it has on hand and
checks whether the remainder is still sufficient to satisfy every account holder's maximum remaining demand, one
after another in some order. Only then is the loan granted, because only then can the bank guarantee that all n
account holders can eventually draw all their money.
Several data structures must be maintained to implement the banker’s algorithm. These data structures encode the
state of the resource-allocation system. We need the following data structures, where n is the number of processes
in the system and m is the number of resource types:
• Available
It is a 1-D array of size m that represents the number of available resources of each type. If Available[j] = k,
then k instances of resource type R(j) are currently available.
• Max
It is an n x m matrix which represents the maximum number of instances of each resource that a process can
request. If Max[i][j] = k, then the process P(i) can request at most k instances of resource type R(j).
• Allocation
It is an n x m matrix which represents the number of resources of each type currently allocated to each process.
If Allocation[i][j] = k, then process P(i) is currently allocated k instances of resource type R(j).
• Need
It is an n x m matrix which represents the remaining resource need of each process. If Need[i][j] = k, then
process P(i) may need k more instances of resource type R(j) to complete its task.
Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize Work = Available and
Finish[i] = false for all i. This means that, initially, no process has finished and the number of available
resources is represented by the Available array.
2. Find an index i such that Finish[i] == false and Need(i) ≤ Work. That is, find an unfinished process whose
need can be satisfied by the available resources. If no such process exists, go to step 4.
3. Set Work = Work + Allocation(i) and Finish[i] = true, then go to step 2. When an unfinished process is found,
it is assumed to run to completion and release its resources, and the process is marked finished. The loop is
then repeated for all remaining processes.
4. If Finish[i] == true for all i, the system is in a safe state.
Content of the Need matrix can be calculated by using the formula below:
Need = Max – Allocation

Need
Process  R1  R2  R3  R4
P0       0   0   0   0
P1       0   7   5   0
P2       1   0   0   2
P3       0   0   2   0
P4       0   6   4   2
If Need ≤ Available
then
    execute the process; New Available = Available + Allocation (the process runs, then releases its resources)
else
    do not execute; go forward to the next process
1. The Need of each process is compared with Available. If Need ≤ Available, then the resources are allocated to
that process, the process runs to completion, and it releases its resources.
2. If Need is greater than Available, the next process's Need is taken up for comparison.
Example:
Considering a system with five processes P0 through P4 and three resource types A, B, C: resource type A has
10 instances, B has 5 instances, and C has 7 instances. Suppose at time t0 the following snapshot of the system
has been taken:

         Allocation   Max      Available
Process  A B C        A B C    A B C
P0       0 1 0        7 5 3    3 3 2
P1       2 0 0        3 2 2
P2       3 0 2        9 0 2
P3       2 1 1        2 2 2
P4       0 0 2        4 3 3
Applying the safety algorithm (Need = Max – Allocation):
For P0: Need (7 4 3) ≤ Available (3 3 2)? No; skip.
For P1: Need (1 2 2) ≤ (3 3 2)? Yes; execute. New Available = 3 3 2 + 2 0 0 = 5 3 2
For P2: Need (6 0 0) ≤ (5 3 2)? No; skip.
For P3: Need (0 1 1) ≤ (5 3 2)? Yes; execute. New Available = 5 3 2 + 2 1 1 = 7 4 3
For P4: Need (4 3 1) ≤ (7 4 3)? Yes; execute. New Available = 7 4 3 + 0 0 2 = 7 4 5
For P0: Need (7 4 3) ≤ (7 4 5)? Yes; execute. New Available = 7 4 5 + 0 1 0 = 7 5 5
For P2: Need (6 0 0) ≤ (7 5 5)? Yes; execute. New Available = 7 5 5 + 3 0 2 = 10 5 7
All processes can finish, so the system is in a safe state, with safe sequence <P1, P3, P4, P0, P2>.
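The walkthrough above is exactly the safety algorithm, and it is short to encode. The following is a sketch (the
function name is illustrative) using the snapshot from this example; is_safe returns a safe sequence of process
indices, or None if the state is unsafe.

def is_safe(available, max_demand, allocation):
    """Banker's safety algorithm. Need = Max - Allocation; repeatedly pick
    any unfinished process whose Need fits in Work, let it run, reclaim
    its allocation, and record the order."""
    n, m = len(max_demand), len(available)
    need = [[max_demand[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work, finish, sequence = list(available), [False] * n, []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):            # process runs, then releases
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                progress = True
    return sequence if all(finish) else None

# Snapshot from the example above (resource types A, B, C)
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_demand = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
print(is_safe([3, 3, 2], max_demand, allocation))   # [1, 3, 4, 0, 2]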