ASSIGNMENT ON
OPERATIONS
RESEARCH
BY RAHUL GUPTA
Q.1: Describe in detail the OR approach to problem solving. What
are the limitations of Operations Research?
Answer:
Optimization is the act of obtaining the best result under any given circumstance. In
various practical problems we may have to take many technical or managerial decisions
at several stages. The ultimate goal of all such decisions is to either maximize the
desired benefit or minimize the effort required. We make decisions in our everyday life
without even noticing them. Decision-making is one of the main activities of a manager
or executive.
In simple situations decisions are taken simply by common sense, sound judgment
and expertise, without using any mathematics. But here the decisions we are concerned
with are rather complex and heavily loaded with responsibility. Examples of such
decisions are finding the appropriate product mix when there are large numbers of
products with different profit contributions and production requirements, or planning a
public transportation network in a town having its own layout of factories, apartment
blocks, etc. Certainly in such situations decisions may also be arrived at intuitively from
experience and common sense, yet they are more judicious if backed up by
mathematical reasoning.
The search for a decision may also be done by trial and error, but such a search may
be cumbersome and costly. Preparative calculations may avoid a long and costly search.
Doing such preparative calculations is the purpose of Operations Research. Operations
Research performs mathematical scoring of the consequences of a decision, with the aim of
optimizing the use of time, effort and resources, and avoiding blunders.
Since no single individual can have a thorough knowledge of all fast-developing
scientific know-how, personnel from different scientific and managerial cadres form a
team to solve the problem.
The first and most important requirement is that the root problem should be identified
and understood. Identifying the problem properly involves three major
aspects:
• A description of the goal or the objective of the study.
• An identification of the decision alternatives of the system.
• Recognition of the limitations, restrictions and requirements of the system.
Limitations of OR:
The limitations are more related to the problems of model building, time and money factors.
• Magnitude of computation: Modern problems involve a large number of variables,
which makes it difficult to find the interrelationships among them.
• Non-quantitative factors and human emotional factors cannot be taken into account.
• There is a wide gap between managers and operations researchers.
• Time and money factors: When the basic data is subject to frequent changes,
incorporating the changes into OR models is a costly affair.
• Implementation of decisions involves human relations and behavior.
Introduction:
More formally, given a polyhedron (for example, a polygon), and a real-valued affine
function defined on this polyhedron, a linear programming method will find a point on
the polyhedron where this function has the smallest (or largest) value, if such a point
exists, by searching through the polyhedron's vertices.
Maximize: c^T x
Subject to: Ax ≤ b, x ≥ 0

Here x represents the vector of variables (to be determined), while c and b are vectors of
(known) coefficients and A is a (known) matrix of coefficients. The expression to be maximized or
minimized is called the objective function (c^T x in this case). The inequalities Ax ≤ b are the
constraints which specify a convex polytope over which the objective function is to be
optimized.
Uses:
Linear programming is a considerable field of optimization for several reasons. Many
practical problems in operations research can be expressed as linear programming problems.
Certain special cases of linear programming, such as network flow problems and
multicommodity flow problems, are considered important enough to have generated much research on
specialized algorithms for their solution. A number of algorithms for other types of
optimization problems work by solving LP problems as sub-problems. Historically, ideas from
linear programming have inspired many of the central concepts of optimization theory, such as
duality, decomposition, and the importance of convexity and its generalizations. Likewise,
linear programming is heavily used in microeconomics and company management, such as
planning, production, transportation, technology and other issues. Although the modern
management issues are ever-changing, most companies would like to maximize profits or
minimize costs with limited resources. Therefore, many issues can boil down to linear
programming problems.
Standard form:
Standard form is the usual and most intuitive form of describing a linear programming
problem. It consists of the following three parts:
• A linear function to be maximized, e.g. maximize c1x1 + c2x2
• Problem constraints, e.g. a11x1 + a12x2 ≤ b1 and a21x1 + a22x2 ≤ b2
• Non-negative variables, e.g. x1 ≥ 0, x2 ≥ 0
The problem is usually expressed in matrix form, and then becomes:

Maximize: c^T x
Subject to: Ax ≤ b, x ≥ 0
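To make the standard form concrete, here is a small illustrative sketch in Python using SciPy's linprog solver (assuming SciPy is available); all coefficient values below are made up for illustration, not taken from the text. Since linprog minimizes by convention, the objective vector is negated to maximize.

```python
# Illustrative sketch: solving a small standard-form LP with SciPy's linprog.
# All coefficient values below are made up.
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 2.0])            # maximize 3*x1 + 2*x2
A = np.array([[1.0, 1.0],           #   x1 +  x2 <= 4
              [2.0, 1.0]])          # 2*x1 +  x2 <= 5
b = np.array([4.0, 5.0])

# linprog minimizes, so pass -c to maximize; bounds enforce x >= 0.
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
print("x* =", res.x, " max c^T x =", -res.fun)
```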
Example-
Suppose that a farmer has a piece of farm land, say A square kilometers large, to be planted
with either wheat or barley or some combination of the two. The farmer has a limited
permissible amount F of fertilizer and P of insecticide which can be used, each of which is
required in different amounts per unit area for wheat (F1, P1) and barley (F2, P2). Let S1 be the
selling price of wheat, and S2 the price of barley. If we denote the area planted with wheat and
barley by x1 and x2 respectively, then the optimal number of square kilometers to plant with
wheat vs. barley can be expressed as a linear programming problem:
Maximize: S1x1 + S2x2 (maximize the revenue; this is the objective function)
Subject to:
x1 + x2 ≤ A (limit on total area)
F1x1 + F2x2 ≤ F (limit on fertilizer)
P1x1 + P2x2 ≤ P (limit on insecticide)
x1 ≥ 0, x2 ≥ 0 (cannot plant a negative area)
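A hedged sketch of the farmer's LP in Python follows; the numeric values chosen for A, F, P, F1, F2, P1, P2, S1 and S2 are illustrative assumptions, not values from the text.

```python
# The farmer's problem as an LP, with made-up data.
from scipy.optimize import linprog

A_land, F, P = 10.0, 20.0, 30.0      # available land, fertilizer, insecticide
F1, F2 = 3.0, 2.0                    # fertilizer per km^2 of wheat / barley
P1, P2 = 4.0, 1.0                    # insecticide per km^2 of wheat / barley
S1, S2 = 5.0, 4.0                    # selling price per km^2 of wheat / barley

c = [S1, S2]                         # maximize S1*x1 + S2*x2
A_ub = [[1, 1], [F1, F2], [P1, P2]]  # land, fertilizer, insecticide limits
b_ub = [A_land, F, P]

res = linprog([-ci for ci in c], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print("areas:", res.x, " revenue:", -res.fun)
```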
Linear programming problems must be converted into augmented form before being solved
by the simplex algorithm. This form introduces non-negative slack variables to replace
inequalities with equalities in the constraints. The problem can then be written in the following
block matrix form.
Maximize Z in:

[ 1  -c^T  0 ] [ Z ]   [ 0 ]
[ 0   A    I ] [ x ] = [ b ]
               [ s ]
x ≥ 0, s ≥ 0

where s denotes the newly introduced slack variables and Z is the variable to be maximized.
Example-
Converting the farmer's problem gives:

Maximize: S1x1 + S2x2 (objective function)
Subject to:
x1 + x2 + x3 = A
F1x1 + F2x2 + x4 = F
P1x1 + P2x2 + x5 = P
x1, x2, x3, x4, x5 ≥ 0

where x3, x4, x5 are (non-negative) slack variables, representing in this example the unused
area, the amount of unused fertilizer, and the amount of unused insecticide.
Maximize Z in:

[ 1  -S1  -S2  0  0  0 ] [ Z  ]   [ 0 ]
[ 0   1    1   1  0  0 ] [ x1 ]   [ A ]
[ 0   F1   F2  0  1  0 ] [ x2 ] = [ F ]
[ 0   P1   P2  0  0  1 ] [ x3 ]   [ P ]
                         [ x4 ]
                         [ x5 ]
x1, x2, x3, x4, x5 ≥ 0
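The slack-variable conversion itself is easy to show in code. The sketch below (illustrative data, assuming SciPy) appends one slack per row to form [A | I] and solves the resulting equality-constrained problem.

```python
# Augmented (equality) form: each inequality A x <= b gains a non-negative
# slack variable, giving [A | I] [x; s] = b.  Data is made up.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0],
              [3.0, 2.0]])
b = np.array([4.0, 9.0])
c = np.array([3.0, 2.0])

A_eq = np.hstack([A, np.eye(2)])         # [A | I]: append one slack per row
c_eq = np.concatenate([-c, [0.0, 0.0]])  # slacks do not enter the objective

res = linprog(c_eq, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * 4)
x, s = res.x[:2], res.x[2:]
print("x =", x, " slacks =", s, " Z =", -res.fun)
```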
Duality:
Every linear programming problem, referred to as a primal problem, can be converted into a
dual problem, which provides an upper bound to the optimal value of the primal problem. In
matrix form, we can express the primal problem as:
Maximize: c^T x
Subject to: Ax ≤ b, x ≥ 0

with the corresponding symmetric dual problem:

Minimize: b^T y
Subject to: A^T y ≥ c, y ≥ 0
There are two ideas fundamental to duality theory. One is the fact that the dual of a dual
linear program is the original primal linear program. Additionally, every feasible solution for a
linear program gives a bound on the optimal value of the objective function of its dual. The
weak duality theorem states that the objective function value of the dual at any feasible solution
is always greater than or equal to the objective function value of the primal at any feasible
solution. The strong duality theorem states that if the primal has an optimal solution, x*, then
the dual also has an optimal solution, y*, such that c^T x* = b^T y*.
A linear program can also be unbounded or infeasible. Duality theory tells us that if the
primal is unbounded then the dual is infeasible by the weak duality theorem. Likewise, if the
dual is unbounded, then the primal must be infeasible. However, it is possible for both the dual
and the primal to be infeasible (See also Farkas' lemma).
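To see weak and strong duality numerically, here is a hedged Python sketch (made-up data, assuming SciPy): it solves the primal max c^T x s.t. Ax ≤ b, x ≥ 0 and its dual, and compares the two optimal values.

```python
# Checking strong duality numerically on illustrative data.
import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([8.0, 9.0])
c = np.array([3.0, 2.0])

primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
# dual: min b^T y s.t. A^T y >= c  <=>  -A^T y <= -c
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2)

print("primal optimum:", -primal.fun)   # c^T x*
print("dual optimum:  ", dual.fun)      # b^T y*, equal by strong duality
```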
Example-
Revisit the above example of the farmer who may grow wheat and barley with the set
provision of some A land, F fertilizer and P insecticide. Assume now that unit prices for each
of these means of production (inputs) are set by a planning board. The planning board's job is to
minimize the total cost of procuring the set amounts of inputs while providing the farmer with a
floor on the unit price of each of his crops (outputs), S1 for wheat and S2 for barley. This
corresponds to the following linear programming problem:
Minimize: A·yA + F·yF + P·yP (minimize the total cost of the means of production)
Subject to:
yA + F1·yF + P1·yP ≥ S1 (the farmer must receive no less than S1 for his wheat)
yA + F2·yF + P2·yP ≥ S2 (and no less than S2 for his barley)
yA, yF, yP ≥ 0 (prices cannot be negative)
The primal problem deals with physical quantities. With all inputs available in limited
quantities, and assuming the unit prices of all outputs is known, what quantities of outputs to
produce so as to maximize total revenue? The dual problem deals with economic values. With
floor guarantees on all output unit prices, and assuming the available quantity of all inputs is
known, what input unit pricing scheme to set so as to minimize total expenditure? To each
variable in the primal space corresponds an inequality to satisfy in the dual space, both indexed
by output type. To each inequality to satisfy in the primal space corresponds a variable in the
dual space, both indexed by input type. The coefficients that bound the inequalities in the
primal space are used to compute the objective in the dual space, input quantities in this
example. The coefficients used to compute the objective in the primal space bound the
inequalities in the dual space, output unit prices in this example. Both the primal and the dual
problems make use of the same matrix. In the primal space, this matrix expresses the
consumption of physical quantities of inputs necessary to produce set quantities of outputs. In
the dual space, it expresses the creation of the economic values associated with the outputs
from set input unit prices. Since each inequality can be replaced by an equality and a slack
variable, this means each primal variable corresponds to a dual slack variable, and each dual
variable corresponds to a primal slack variable. This relation allows us to demonstrate
complementary slackness.
Covering-Packing Dualities:

Covering problems        Packing problems
Minimum Set Cover        Maximum Set Packing
Minimum Vertex Cover     Maximum Matching
Minimum Edge Cover       Maximum Independent Set

A covering LP is a linear program of the form:

Minimize: b^T y
Subject to: A^T y ≥ c, y ≥ 0

such that the matrix A and the vectors b and c are non-negative.

A packing LP is a linear program of the form:

Maximize: c^T x
Subject to: Ax ≤ b, x ≥ 0

such that the matrix A and the vectors b and c are non-negative.
Complementary slackness:
It is possible to obtain an optimal solution to the dual when only an optimal solution to the
primal is known, using the complementary slackness theorem. The theorem states:
Suppose that x = (x1, x2, . . . , xn) is primal feasible and that y = (y1, y2, . . . , ym) is dual feasible.
Let (w1, w2, . . . , wm) denote the corresponding primal slack variables, and let (z1, z2, . . . , zn)
denote the corresponding dual slack variables. Then x and y are optimal for their respective
problems if and only if xjzj = 0 for j = 1, 2, . . . , n, and wiyi = 0 for i = 1, 2, . . . , m.
So if the ith slack variable of the primal is not zero, then the ith variable of the dual is equal
to zero. Likewise, if the jth slack variable of the dual is not zero, then the jth variable of the
primal is equal to zero.
This necessary condition for optimality conveys a fairly simple economic principle. In
standard form (when maximizing), if there is slack in a constrained primal resource (i.e., there
are "leftovers"), then additional quantities of that resource must have no value. Likewise, if
there is slack in the dual (shadow) price non-negativity constraint, i.e., the price is
not zero, then supplies must be scarce (no "leftovers").
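Continuing the illustrative data from the duality sketch above (again a hedged sketch assuming SciPy), the conditions x_j z_j = 0 and w_i y_i = 0 can be verified numerically:

```python
# Verify complementary slackness on the optimal primal/dual pair.
import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([8.0, 9.0])
c = np.array([3.0, 2.0])

x = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2).x
y = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2).x

w = b - A @ x          # primal slacks w_i
z = A.T @ y - c        # dual slacks z_j

print("x_j * z_j:", x * z)   # ~0 componentwise at optimality
print("w_i * y_i:", w * y)   # ~0 componentwise at optimality
```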
Theory:
Geometrically, the linear constraints define a convex polytope, which is called the feasible
region. It is not hard to see that every local optimum (a point x such that for every unit direction
vector d with positive objective value and every ε > 0 it holds that x + εd is infeasible) is also a
global optimum. This holds more generally for convex programs: see the KKT theorem.
There are two situations in which no optimal solution can be found. First, if the constraints
contradict each other (for instance, x ≥ 2 and x ≤ 1) then the feasible region is empty and there
can be no optimal solution, since there are no solutions at all. In this case, the LP is said to be
infeasible. Alternatively, the polyhedron can be unbounded in the direction of the objective
function (for example: maximize x1 + 3 x2 subject to x1 ≥ 0, x2 ≥ 0, x1 + x2 ≥ 10), in which case
there is no optimal solution since solutions with arbitrarily high values of the objective function
can be constructed. Barring these two conditions (which can often be ruled out when dealing
with specific LPs), the optimum is always attained at a vertex of the polyhedron, unless the
polyhedron has no vertices (polyhedra with at least one vertex are called pointed). However, the
optimum is not necessarily unique: it is possible to have a set of optimal solutions covering an
edge or face of the polyhedron, or even the entire polyhedron (this last situation would occur if
the objective function were constant on the polyhedron).
The vertices of the polyhedron are also called basic feasible solutions. The reason for this
choice of name is as follows. Let d denote the dimension, i.e. the number of variables. Then the
following theorem holds: for every vertex x* of the LP feasible region, there exists a set of d
inequality constraints from the LP such that, when we treat those d constraints as equalities, the
unique solution is x*. Thereby we can study these vertices by examining certain
subsets of the set of all constraints (a discrete universe), rather than the continuous universe of
LP solutions. This principle underlies the simplex algorithm for solving linear programs.
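The vertex-enumeration idea can be demonstrated by brute force. The sketch below (illustrative, made-up data) takes every pair of constraints of a two-variable LP as equalities, keeps the feasible intersection points (basic feasible solutions), and selects the best; the simplex algorithm explores the same vertices far more cleverly.

```python
# Brute-force vertex enumeration for a 2-variable LP (illustration only:
# this approach is exponential in the number of constraints).
from itertools import combinations
import numpy as np

# max 3x + 2y s.t. x + y <= 4, 2x + y <= 5, x >= 0, y >= 0
# All constraints written as G v <= h (including the sign constraints).
G = np.array([[1.0, 1.0], [2.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
h = np.array([4.0, 5.0, 0.0, 0.0])
c = np.array([3.0, 2.0])

best = None
for rows in combinations(range(len(G)), 2):   # d = 2 tight constraints
    Gd, hd = G[list(rows)], h[list(rows)]
    if abs(np.linalg.det(Gd)) < 1e-12:
        continue                              # constraints not independent
    v = np.linalg.solve(Gd, hd)               # candidate vertex
    if np.all(G @ v <= h + 1e-9):             # keep only feasible vertices
        if best is None or c @ v > c @ best:
            best = v
print("optimal vertex:", best, " value:", c @ best)
```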
The drawback of the penalty cost method is the possible computational error that could
result from assigning a very large value to the constant M. To overcome this difficulty, a
new method is considered, where the use of M is eliminated by solving the problem in two
phases. These are:
Phase I:
Formulate a new problem by replacing the original objective function with the sum of
the artificial variables for a minimization problem, or the negative of the sum of the
artificial variables for a maximization problem. The resulting objective function is
optimized by the simplex method, subject to the constraints of the original problem. If the
problem has a feasible solution, the optimal value of the new objective function is zero
(which indicates that all artificial variables are zero), and we proceed to Phase II.
Otherwise, if the optimal value of the new objective function is non-zero, the problem has
no solution and the method terminates.
Phase II:
Use the optimum solution of Phase I as the starting solution for the original problem.
The objective function is then taken without the artificial variables, and the problem is
solved by the simplex method.
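As an illustrative sketch of the Phase I feasibility test only (not the full two-phase simplex), the Phase I problem can be expressed with SciPy's linprog; the data below is made up.

```python
# Phase I sketch: minimize the sum of artificial variables added to
# A_eq x = b_eq.  A zero optimum means the original problem is feasible.
import numpy as np
from scipy.optimize import linprog

A_eq = np.array([[1.0, 2.0, 1.0],
                 [2.0, 1.0, 0.0]])
b_eq = np.array([4.0, 3.0])
m, n = A_eq.shape

# Phase I LP: variables are [x ; a] with one artificial a_i per constraint.
A1 = np.hstack([A_eq, np.eye(m)])
c1 = np.concatenate([np.zeros(n), np.ones(m)])  # minimize sum of artificials

res = linprog(c1, A_eq=A1, b_eq=b_eq, bounds=[(0, None)] * (n + m))
feasible = res.fun < 1e-9   # zero: all artificials driven out, go to Phase II
print("Phase I optimum:", res.fun, " feasible:", feasible)
```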
Examples:
Use the two-phase method to maximize z = 3x1 − x2.
Phase I is complete, since there are no negative elements in the last row. The optimal value
of the new objective is Z* = 0.
Phase II:
Consider the original objective function: Maximize z = 3x1 − x2 + 0S1 + 0S2 + 0S3
Subject to:
x1 + x2/2 − S1/2 = 1
(5/2)x2 + S1/2 + S2 = 1
x2 + S3 = 4
x1, x2, S1, S2, S3 ≥ 0
With the initial solution x1 = 1, S2 = 1, S3 = 4, the corresponding simplex table is constructed.
Condition (1) is guaranteed by creating either a fictitious destination with a demand equal
to the surplus if total demand is less than total supply, or a fictitious (dummy) source with a
supply equal to the shortage if total demand exceeds total supply. The costs of transportation
from all sources to the fictitious destination, and from the fictitious source to all destinations,
are assumed to be zero, so that the total cost of transportation remains the
same.
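A short sketch of this balancing step follows (function and variable names are illustrative, not from the text): a zero-cost dummy row or column is appended so that total supply equals total demand.

```python
# Balance a transportation problem by adding a dummy source or destination.
import numpy as np

def balance(cost, supply, demand):
    cost = np.asarray(cost, float)
    supply, demand = list(supply), list(demand)
    gap = sum(supply) - sum(demand)
    if gap > 0:      # surplus supply: add a dummy destination (zero-cost column)
        cost = np.hstack([cost, np.zeros((cost.shape[0], 1))])
        demand.append(gap)
    elif gap < 0:    # excess demand: add a dummy source (zero-cost row)
        cost = np.vstack([cost, np.zeros((1, cost.shape[1]))])
        supply.append(-gap)
    return cost, supply, demand

cost, supply, demand = balance([[4, 6], [5, 8]], [30, 40], [20, 25])
print(cost, supply, demand)   # dummy column absorbs the 25-unit surplus
```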
The standard mathematical model for the transportation problem is as follows. Let xij
be the number of units of the homogeneous product to be transported from source i to
destination j. Then the objective is to:

Minimize z = Σi Σj cij xij
Subject to:
Σj xij = ai, i = 1, 2, . . . , m (supply at source i)
Σi xij = bj, j = 1, 2, . . . , n (demand at destination j)
xij ≥ 0 for all i and j . . . (2)
Theorem:
A necessary and sufficient condition for the existence of a feasible solution to the
transportation problem (2) is that total supply equals total demand, i.e. Σi ai = Σj bj.
The solution to a T.P. is obtained in two stages. In the first stage we find a basic feasible
solution by any one of the following methods: a) North-West corner rule, b) Matrix minima
(least cost) method, or c) Vogel's approximation method. In the second stage we test the
B.F.S. for optimality, either by the MODI method or by the stepping stone method.
• Step 1:
a. The first assignment is made in the cell occupying the upper left hand (North
West) corner of the transportation table.
b. The maximum feasible amount is allocated there, that is, x11 = min(a1, b1), so
that either the capacity of origin O1 is used up or the requirement at destination
D1 is satisfied, or both.
c. This value of x11 is entered in the upper left hand corner (Small Square) of cell
(1, 1) in the transportation table.
• Step 2:
a. If b1 > a1, the capacity of origin O1 is exhausted but the requirement at
destination D1 is still not satisfied, so that at least one more variable in the
first column will have to take on a positive value.
b. Move down vertically to the second row and make the second allocation of
magnitude x21 = min(a2, b1 − x11) in the cell (2, 1). This either exhausts the
capacity of origin O2 or satisfies the remaining demand at destination D1.
c. If b1 < a1, move right horizontally to the second column and make the second
allocation of magnitude x12 = min(a1 − x11, b2) in the cell (1, 2).
d. This either exhausts the remaining capacity of origin O1 or satisfies the demand
at destination D2. If b1 = a1, the capacity of origin O1 is completely exhausted
and the requirement at destination D1 is completely satisfied.
e. In that case there is a tie for the second allocation; an arbitrary tie-breaking
choice is made. Make the second allocation of magnitude x12 = min(a1 − a1, b2) = 0
in the cell (1, 2) or x21 = min(a2, b1 − b1) = 0 in the cell (2, 1).
• Step 3:
a. Start from the new North-West corner of the transportation table, satisfying
destination requirements and exhausting origin capacities one at a time.
b. Move down towards the lower right corner of the transportation table until all
the rim requirements are satisfied. (A code sketch of this procedure is given below.)
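The following Python sketch implements the North-West corner rule as described in the steps above, for a balanced problem; the supply and demand figures are made up.

```python
# North-West corner rule: returns a basic feasible allocation matrix.
import numpy as np

def north_west_corner(supply, demand):
    supply, demand = list(supply), list(demand)
    m, n = len(supply), len(demand)
    x = np.zeros((m, n))
    i = j = 0
    while i < m and j < n:
        q = min(supply[i], demand[j])   # allocate x_ij = min(a_i, b_j)
        x[i, j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:              # row exhausted: move down
            i += 1
        else:                           # column satisfied: move right
            j += 1
    return x

print(north_west_corner([20, 30, 25], [10, 35, 30]))
```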
Answer:
The Branch And Bound Technique:
Sometimes a few or all of the variables of an I.P.P. are constrained by their upper or lower bounds, or by
both. The most general technique for the solution of such constrained optimization problems is the
branch and bound technique. The technique is applicable to pure as well as mixed I.P.P.s. The
technique for a maximization problem is discussed below. Let the I.P.P. be given by (1)–(5). If, in the
optimal solution of the LP relaxation, a variable (say x1) takes a fractional value between 1 and 2,
the integer requirement can be enforced by imposing
2 ≤ x1 ≤ U1
in one problem and
L1 ≤ x1 ≤ 1
in the other. Suppose further that each of these problems possesses an optimal solution satisfying
the integer constraints (3). Then the solution having the larger value for z is clearly optimum
for the given I.P.P. However, it usually happens that one (or both) of these problems has
no optimal solution satisfying (3), and thus some more computations are necessary. We
now discuss stepwise the algorithm that specifies how to apply the partitioning (6) and
(7) in a systematic manner to finally arrive at an optimum solution.
We start with an initial lower bound for z, say Z(0), at the first iteration, which is less than
or equal to the optimal value z*; this lower bound may be taken as the starting Lj for
some xj. In addition to the lower bound Z(0), we also have a list of L.P.P.s (to be called the
master list) differing only in the bounds (5). To start with (the 0th iteration), the master
list contains a single L.P.P. consisting of (1), (2), (4) and (5). We now discuss below
the step-by-step procedure that specifies how the partitioning (6) and (7) can be applied
systematically to eventually get an optimum integer-valued solution.
Branch and Bound Algorithm
At the tth iteration (t = 0, 1, 2 …)
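A compact Python sketch of the branch-and-bound idea described above follows, using LP relaxations (via SciPy's linprog, assumed available) for the bounds; the problem data and tolerances are illustrative, and bounds are assumed integer.

```python
# Branch and bound for a pure IPP: maximize c^T x s.t. Ax <= b,
# x integer with L <= x <= U.  Data below is made up.
import math
import numpy as np
from scipy.optimize import linprog

def branch_and_bound(c, A, b, bounds):
    best_z, best_x = -math.inf, None
    master = [list(bounds)]                 # the "master list" of subproblems
    while master:
        bnds = master.pop()
        res = linprog(-np.asarray(c), A_ub=A, b_ub=b, bounds=bnds)
        if not res.success or -res.fun <= best_z:
            continue                        # infeasible, or bound no better
        x = res.x
        frac = [j for j, v in enumerate(x) if abs(v - round(v)) > 1e-6]
        if not frac:                        # integer solution: new incumbent
            best_z, best_x = -res.fun, np.round(x)
            continue
        j = frac[0]                         # branch on a fractional variable
        lo, hi = bnds[j]
        left, right = list(bnds), list(bnds)
        left[j] = (lo, math.floor(x[j]))              # L_j <= x_j <= floor(x_j*)
        right[j] = (math.floor(x[j]) + 1, hi)         # floor(x_j*)+1 <= x_j <= U_j
        master += [left, right]
    return best_x, best_z

x, z = branch_and_bound([5, 4], [[6, 4], [1, 2]], [24, 6], [(0, 10), (0, 10)])
print(x, z)   # integer optimum of max 5x1 + 4x2
```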