Linear Programming
Sarita Bandyopadhyay
(Registration No. A01-2112-0577-21)
Under the Guidance of Dr Sourav Tarafder
WHAT IS LINEAR PROGRAMMING?
The word linear refers to a linear relationship among the variables in a model: a given change in
one variable will always cause a proportional change in another variable. For example,
doubling the investment in a certain project will exactly double the rate of return. The word
programming refers to modelling and solving a problem mathematically that involves the
economic allocation of limited resources, by choosing a particular course of action or strategy
among various alternative strategies in order to achieve the desired objective.
Linear programming, in simple words, is a way of achieving the best outcome, such as maximum
profit or minimum cost, using a mathematical model built from linear relationships; a specific
problem of this kind is called a linear programming problem (LPP). Linear programming is also
known as ‘linear optimization’. Optimization is all about making things better: this could mean
helping a company make better decisions to maximize profit, helping a factory make products
with less environmental impact, or helping a zoologist improve the diet of an animal. When we
talk about optimization, we often use terms like better or improvement. It is important to
remember that words like better can mean more of something (as in the case of profit) or less of
something (as in the case of waste).
HISTORY
The LPP technique was first introduced in 1939 by the Russian mathematician Leonid Kantorovich in the
field of manufacturing schedules, and by the American economist Wassily Leontief in the field of economics.
A few years later, during World War II, these techniques were heavily adopted to solve problems related to
transportation, scheduling, allocation of resources, etc. Kantorovich developed the earliest linear programming
problems, which were used by the army during WWII in order to reduce costs and increase efficiency on the
battlefield. The method was kept secret because of its use in war-time strategies until 1947, when George B.
Dantzig published the simplex method and John von Neumann developed the theory of duality.
Dantzig's original linear programming example was to find the best assignment of 70 people to
70 jobs. Selecting the best assignment by checking every possibility would require an enormous amount
of computing power: the number of possible configurations exceeds the number of particles in the
observable universe. However, by posing the problem as a linear program and applying the simplex
algorithm, the optimum solution can be found in a moment, because the theory behind linear programming
drastically reduces the number of candidate solutions that must be checked. In contrast to the economists
of his time, Dantzig viewed LPP
not just as a qualitative tool in the analysis of economic phenomena, but as a method that could be used
to compute actual answers to specific real-world problems. Consistent with that view, he proposed an
algorithm for solving LPPs, the simplex algorithm. To this day the simplex algorithm remains a primary
computational tool in linear programming.
COMPONENTS OF LPP
Objective Function: The objective function is the linear function that the optimization seeks to
minimize or maximize. As the name suggests, it sets the objective of the problem: it is a
real-valued function of the decision variables that is to be maximized or minimized subject to
the constraints.
Constraints: The linear equations or inequalities that restrict the values the decision variables may take.
Decision Variables: The variables whose values are to be determined. The model is solved
so as to obtain the optimal values of these variables.
Feasible Region: The region of the graph in which all the constraints are satisfied; candidate
optimal solutions lie at the corners of this region.
Optimum Solution: The best possible solution that satisfies all constraints and achieves the
highest or lowest objective.
Infeasible Solution: A solution that violates one or more constraints and cannot be
implemented or executed.
The general form of an LPP, in which these components can be identified, is-
Optimize (Maximize/Minimize) z = c1x1 + c2x2 + … + cnxn
subject to
a11x1 + a12x2 + … + a1nxn (≤, =, ≥) b1
a21x1 + a22x2 + … + a2nxn (≤, =, ≥) b2
……………………………………
am1x1 + am2x2 + … + amnxn (≤, =, ≥) bm
x1, x2, …, xn ≥ 0.
The equation z = c1x1 + c2x2 + … + cnxn is called the objective function.
The variables x1, x2, …, xn are called decision variables.
The constants c1, c2, …, cn are called cost coefficients.
x1 ≥ 0, x2 ≥ 0, …, xn ≥ 0 are called the non-negativity constraints.
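For reference, the same general form can be written compactly in matrix notation (a standard restatement of the scalar form above):

```latex
\begin{aligned}
\text{Optimize (maximize/minimize)}\quad & z = \mathbf{c}^{\mathsf{T}}\mathbf{x} = \sum_{j=1}^{n} c_j x_j \\
\text{subject to}\quad & A\mathbf{x} \;(\le,\ =,\ \ge)\; \mathbf{b}, \qquad \mathbf{x} \ge \mathbf{0},
\end{aligned}
\qquad A = (a_{ij}) \in \mathbb{R}^{m \times n},\quad \mathbf{b} \in \mathbb{R}^{m},\quad \mathbf{c},\, \mathbf{x} \in \mathbb{R}^{n}.
```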
DEFINITION OF TERMS
LINEAR DEPENDENCE AND INDEPENDENCE: A set of vectors is linearly independent if no vector in it
can be written as a linear combination of the others; otherwise the set is linearly dependent. This notion is
used to determine which columns are to be chosen while solving an LPP by the analytical method.
RANK OF A MATRIX: In the analytical method we solve a system of m equations in n variables, where
m < n. To proceed, the m equations must be linearly independent, and we must choose m linearly
independent columns (the basic variables) out of the n columns. Writing the coefficients in the form of a
matrix, the rank of this m×n matrix tells us how many linearly independent rows (or columns) it has; the
rank can be found by reducing the matrix to row-reduced echelon form (solving a matrix by row-reduced
echelon form is discussed on page 8 of my thesis). Generally, the rank of the coefficient matrix is m, but if
it is less than m, then the equations are not all linearly independent.
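As a small illustration (the coefficients below are made up, not taken from the thesis), the rank of a coefficient matrix can be checked directly with NumPy:

```python
# Illustrative (made-up) coefficients: the rank of a 2x4 coefficient matrix
# equals the number of linearly independent rows (equations).
import numpy as np

A = np.array([[1.0, 2.0, 1.0, 0.0],   # m = 2 equations in n = 4 variables
              [2.0, 1.0, 0.0, 1.0]])

print(np.linalg.matrix_rank(A))       # 2: both equations are linearly independent
```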
CONVEX SETS: A convex set is a collection of points in which the line segment AB connecting any two points
A, B of the set lies completely within the set.
EXTREME POINTS: Let X be a convex set and x ∈ X. x is called an extreme point of X if x cannot
be expressed as a strict convex combination of two distinct points of X.
DIRECTIONS AND EXTREME DIRECTIONS: Given a convex set X, a non-zero vector d is called a
direction of X if and only if, for each x0 ∈ X, all the points of the ray {x0 + c d : c ≥ 0} with vertex at x0
belong to X. An extreme direction of a convex set is a direction of the set that cannot be represented as a
positive combination of two distinct directions.
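For reference, the three definitions above can be stated symbolically (standard notation, restating the verbal definitions):

```latex
\begin{aligned}
&\text{Convex set: } && \lambda a + (1-\lambda)b \in X \quad \text{for all } a, b \in X,\ \lambda \in [0,1].\\
&\text{Extreme point: } && x = \lambda x_1 + (1-\lambda)x_2,\ x_1, x_2 \in X,\ 0 < \lambda < 1 \;\Longrightarrow\; x_1 = x_2 = x.\\
&\text{Direction: } && d \neq 0 \ \text{and}\ x_0 + c\,d \in X \quad \text{for all } x_0 \in X,\ c \ge 0.
\end{aligned}
```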
GEOMETRICAL INTERPRETATION OF A MATRIX
MATRICES: A matrix is an ordered rectangular arrangement of numbers (real or complex) or functions,
distributed in horizontal lines (rows) and vertical lines (columns). A matrix is enclosed by [ ] or ( ).
GEOMETRICAL INTERPRETATION: Let us view vectors (specifically column vectors) as the
co-ordinates of points in an n-dimensional vector space: a column vector lists the co-ordinates as a
column, and a matrix is essentially a set of column vectors.
Let M be an invertible n×n matrix (the example matrix M is displayed in the thesis).
If we interpret each column (or each row; either choice works, since the matrix is n×n) as a point in Rn,
then these n points define a unique hyperplane in Rn that passes through each of them. (Note: this
hyperplane does not pass through the origin.)
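A quick numerical illustration of this statement (using a made-up invertible 3×3 matrix, not one from the thesis): the hyperplane {x : aᵀx = 1} passing through the n column points is found by solving Mᵀa = 1, and it cannot contain the origin because aᵀ0 = 0 ≠ 1.

```python
# Made-up invertible 3x3 matrix: find the hyperplane a^T x = 1 through its column points.
import numpy as np

M = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 4.0]])       # det M = 25, so M is invertible

# Each column p_i must satisfy a^T p_i = 1, i.e. M^T a = (1, 1, 1)^T.
a = np.linalg.solve(M.T, np.ones(3))

print(M.T @ a)                        # [1. 1. 1.]: every column lies on the hyperplane
print(a @ np.zeros(3))                # 0.0 (not 1): the origin is not on this hyperplane
```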
GEOMETRICAL INTERPRETATION OF A MATRIX INVERSE
MATRIX INVERSE: Just as a non-zero number has a reciprocal, an invertible square matrix has an inverse. If we
consider a matrix A, we denote its inverse by A⁻¹. The inverse of a matrix is another matrix that, when multiplied
by the given matrix, yields the multiplicative identity: A·A⁻¹ = I, where I denotes the identity matrix.
GEOMETRICAL INTERPRETATION: Referring to page 12 of my thesis, we see a matrix A and its inverse
A⁻¹ (both displayed there). The observations are:
Note that if det A = 1, then the area of the parallelogram spanned by the column vectors remains unchanged
(in general, A scales areas by a factor of |det A|, and A⁻¹ scales them by 1/|det A|).
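A small numerical check of this observation (with a made-up 2×2 matrix, not the one from page 12): the area of the parallelogram spanned by the columns of A equals |det A|, so when det A = 1 the transformation, and likewise its inverse, leaves areas unchanged.

```python
# Made-up 2x2 example with det A = 1: A A^{-1} = I and areas are preserved.
import numpy as np

A = np.array([[2.0, 1.0],
              [3.0, 2.0]])        # det A = 2*2 - 1*3 = 1

A_inv = np.linalg.inv(A)

print(np.round(A @ A_inv, 10))    # the identity matrix I
print(np.linalg.det(A))           # 1.0: area of the parallelogram spanned by the columns
print(np.linalg.det(A_inv))       # 1.0: the inverse preserves area as well
```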
METHODS TO SOLVE AN LPP
The various methods to solve an LPP are-
GRAPHICAL METHOD: The graphical method is used to solve linear programs with two decision
variables; if the problem has two decision variables, a graphical method is the best way to find the
optimal solution. In this method, the constraints are expressed as a set of inequalities, which are then
plotted in the XY plane. Once all the inequalities are plotted in the XY graph, their intersection
determines the feasible region. The feasible region yields the optimal solution and also shows all the
values the model can take.
ANALYTICAL METHOD: If we are given an LPP to maximize or minimize a certain quantity, we are
provided with constraints, which are basically a set of m linear inequations in n variables (m < n) that
determine the problem. We change the inequalities into equations by introducing slack and surplus
variables, and then find m linearly independent equations (i.e., the rank of the coefficient matrix is m).
We set the (n - m) non-basic variables to zero and, by the method of solving simultaneous linear
equations, obtain a basic solution; the best feasible one among these basic solutions is the optimum.
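A minimal sketch of this procedure (an illustration added here, not the thesis code), applied to the two-variable problem used as an example later in this document, maximize 2x + 3y subject to 2x + y ≤ 4, x + 2y ≤ 5, x, y ≥ 0, with slack variables s1, s2 already added:

```python
# Analytical method sketch: enumerate all basic solutions of Ax = b and keep the
# best feasible one. Variables: [x, y, s1, s2] with 2x + y + s1 = 4, x + 2y + s2 = 5.
from itertools import combinations
import numpy as np

A = np.array([[2.0, 1.0, 1.0, 0.0],
              [1.0, 2.0, 0.0, 1.0]])
b = np.array([4.0, 5.0])
c = np.array([2.0, 3.0, 0.0, 0.0])          # maximize 2x + 3y

m, n = A.shape
best_z, best_x = -np.inf, None
for basis in combinations(range(n), m):     # choose m candidate basic variables
    B = A[:, basis]
    if abs(np.linalg.det(B)) < 1e-12:
        continue                            # chosen columns not linearly independent
    x = np.zeros(n)
    x[list(basis)] = np.linalg.solve(B, b)  # non-basic variables are set to zero
    if np.all(x >= -1e-9):                  # keep only basic *feasible* solutions
        z = c @ x
        if z > best_z:
            best_z, best_x = z, x

print(best_x, best_z)                       # expected: x = 1, y = 2 (slacks 0), z = 8
```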
SIMPLEX METHOD: The simplex method is one of the most popular methods for solving an LPP. It is an
iterative process for reaching the optimal feasible solution. In this method, the set of basic variables is
updated at each iteration so as to improve the value of the objective function.
TRANSITION FROM GRAPHICAL TO ALGEBRAIC METHOD
First let us see some observations from both methods-
GRAPHICAL METHOD
The graph contains all constraints (including the non-negativity restrictions), and the solution space
consists of infinitely many feasible solutions.
We find the corner points, which are feasible solutions and finite in number.
We use the objective function to determine the optimal basic feasible solution (b.f.s.) from among these choices.
ALGEBRAIC METHOD
We represent the solution space by m equations in n variables and restrict all variables to
non-negative values (m < n). The system has infinitely many feasible solutions. We find the b.f.s. of the
equations, which are also finite in number.
We use the objective function to determine the optimal b.f.s. from among these choices.
EXAMPLE TO EXPLAIN
Maximize z = 2x + 3y
subject to-
2x + y ≤ 4
x + 2y ≤ 5; x, y ≥ 0.
According to the second observation above, the corner points are (0, 2.5), (1, 2), (2, 0), and (0, 0).
These corner points are feasible solutions to the LPP and are finite in number. Evaluating the objective
function at each of them shows that the maximum, z = 8, is attained at the corner point (1, 2).
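The same optimum can be checked with scipy.optimize.linprog (a verification sketch added here; linprog minimizes, so the objective is negated):

```python
# Check of the example above: maximize 2x + 3y s.t. 2x + y <= 4, x + 2y <= 5, x, y >= 0.
from scipy.optimize import linprog

c = [-2, -3]              # negate to turn maximization into minimization
A_ub = [[2, 1],           # 2x +  y <= 4
        [1, 2]]           #  x + 2y <= 5
b_ub = [4, 5]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)    # expected: [1. 2.] and 8.0, i.e. the corner point (1, 2)
```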
Similarly, holding x1 = 0 and increasing x2 takes us along the x2 axis, with the objective function changing at a rate of dz/dx2 = -(z2 - c2). As seen above, if zj - cj ≥ 0 for all non-basic variables, then the objective function attains its maximum value at v1. Suppose z2 - c2 < 0 and we hold x1 = 0 and increase x2 along the x2 axis. We would like to move as far as possible along this edge; however, our motion is blocked by the hyperplane x3 = 0, since x3 has been driven to zero and moving any further in this direction would drive x3 negative. (Of course, if no such blocking hyperplane existed, then the optimal solution value would have been unbounded.) Furthermore, since (p-1) linearly independent hyperplanes were binding all along the movement and we were blocked by a new hyperplane, we now have p linearly independent hyperplanes binding at v2, and so we are at another extreme point v2 of the feasible region. At v2, the non-basic variables are x1 and x3 and the remaining variables are basic. We have completed one step or iteration of the simplex method. In this step, the variable x2 is called the entering variable since it entered the set of basic variables, and the variable x3 is called the blocking variable or departing variable since it blocked the motion, or left the set of basic variables, at v1.
Repeating this process at v2, we would of course not like to enter x3 into the basis, since it would only take us back along the reverse of the previous direction of motion. Assuming dz/dx1 = -(z1 - c1) > 0, holding x3 = 0 and increasing x1 takes us along an improving edge. Proceeding in this direction, more than one hyperplane blocks our motion. Suppose we arbitrarily choose one of these, namely x4 = 0, as the blocking hyperplane. Hence x4 is the leaving variable, and for the current basis representing v3, x3 and x4 are the non-basic variables. Now, if we hold x4 = 0 and move along a direction in which x3 increases, we are blocked by the hyperplane x5 = 0 before we even begin to move. That is, x5 leaves the basis, giving x4 and x5 as the new non-basic variables, while we are still at the same vertex v3. Such a step, in which one changes from one basis to an adjacent basis, both representing the same extreme point solution, is called a degenerate iteration. We may be stuck in an infinite loop at an extreme point, performing a sequence of degenerate iterations that takes us in a closed loop of bases, all representing the same extreme point, with the value of the objective function remaining constant. Such a phenomenon is called cycling.
With x4 and x5 as the non-basic variables at v3, holding x5 = 0 and increasing x4 takes us along an improving, feasible edge direction and therefore results in a nondegenerate iteration. The blocking variable that gets driven to zero is x6, and the new non-basic variables are x5 and x6. Observe that, corresponding to this basis, none of the p rays defined by holding some (p-1) non-basic variables equal to zero and increasing the remaining non-basic variable leads to an improvement in the objective function value; hence v4 is the optimal extreme point. A path followed by the simplex algorithm along edges of the polyhedron (from v1 to v2 to v3 to v4 in the above example) is called a simplex path.
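The mechanics described above (choosing an entering variable, identifying the blocking/leaving variable, and pivoting) can be sketched in a few lines of code. The following is a bare-bones tableau simplex for maximize cᵀx subject to Ax ≤ b, x ≥ 0 with b ≥ 0 (an illustrative sketch, not the implementation used in the thesis), shown solving the earlier two-variable example:

```python
# Bare-bones tableau simplex for: maximize c^T x  s.t.  A x <= b, x >= 0, with b >= 0.
# Illustrative sketch only; no anti-cycling rule is included.
import numpy as np

def simplex_max(c, A, b):
    m, n = A.shape
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)       # slack variables give the first basic feasible solution
    T[:m, -1] = b
    T[-1, :n] = -c                   # objective row holds the reduced costs
    basis = list(range(n, n + m))
    while True:
        j = int(np.argmin(T[-1, :-1]))
        if T[-1, j] >= -1e-12:       # no negative reduced cost: current vertex is optimal
            break
        col = T[:m, j]
        ratios = np.full(m, np.inf)
        pos = col > 1e-12
        ratios[pos] = T[:m, -1][pos] / col[pos]
        i = int(np.argmin(ratios))   # minimum-ratio test picks the blocking (leaving) variable
        if not np.isfinite(ratios[i]):
            raise ValueError("LP is unbounded")
        T[i, :] /= T[i, j]           # pivot: variable j enters the basis, basis[i] leaves
        for r in range(m + 1):
            if r != i:
                T[r, :] -= T[r, j] * T[i, :]
        basis[i] = j
    x = np.zeros(n + m)
    x[basis] = T[:m, -1]
    return x[:n], T[-1, -1]

x_opt, z_opt = simplex_max(np.array([2.0, 3.0]),
                           np.array([[2.0, 1.0], [1.0, 2.0]]),
                           np.array([4.0, 5.0]))
print(x_opt, z_opt)                  # expected: [1. 2.] and 8.0
```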
DIAGRAM: the simplex path from v1 to v2 to v3 to v4 along the edges of the feasible region (figure not reproduced here).
APPLICATIONS OF LPP
Now, we see some realistic LPP models in which the definition of the variables and the
construction of the objective function and the constraints are not as straightforward as in
the case of the two-variable model. The areas covered by these applications include the
following:
1. Investment model
2. Production planning and inventory control
3. Workforce planning.
OBJECTIVE FUNCTION (bank loan model): The objective of this LPP is to maximize net return, the difference between interest revenue and lost bad debts. Interest revenue is accrued only on loans in good standing. For example, when 10% of personal loans are lost to bad debt, the bank receives interest on only 90% of the loan, that is, it receives 14% interest on 0.9x1 of the original loan amount x1. The same reasoning applies to the remaining four types of loans. Thus-
Total interest = 0.14×0.9x1 + 0.13×0.93x2 + 0.12×0.97x3 + 0.125×0.95x4 + 0.1×0.98x5
= 0.126x1 + 0.1209x2 + 0.1164x3 + 0.11875x4 + 0.098x5
We also have- Bad debt = 0.1x1 + 0.07x2 + 0.03x3 + 0.05x4 + 0.02x5
∴ The objective function combines interest revenue and bad debt as:
Maximize z = Total interest - Bad debt
= (0.126x1 + 0.1209x2 + 0.1164x3 + 0.11875x4 + 0.098x5) - (0.1x1 + 0.07x2 + 0.03x3 + 0.05x4 + 0.02x5)
= 0.026x1 + 0.0509x2 + 0.0864x3 + 0.06875x4 + 0.078x5
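The arithmetic behind these coefficients can be verified in a few lines (a verification sketch added here, not part of the original model):

```python
# Net-return coefficient per loan type: (interest rate x fraction repaid) - bad-debt fraction.
rates    = [0.14, 0.13, 0.12, 0.125, 0.10]   # interest rates for loan types x1..x5
bad_debt = [0.10, 0.07, 0.03, 0.05, 0.02]    # fraction of each loan type lost to bad debt

coeffs = [r * (1 - d) - d for r, d in zip(rates, bad_debt)]
print(coeffs)   # approximately [0.026, 0.0509, 0.0864, 0.06875, 0.078]
```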
PRODUCTION PLANNING MODELS
We now consider three production planning models.
The first deals with production scheduling to meet a single-period demand.
The second deals with the use of inventory in a multiperiod production system to meet future demand.
The third deals with the use of inventory and worker hiring/firing to “smooth” production over a
multiperiod planning horizon.
OBJECTIVE FUNCTION (single-period model): The objective is to maximize net profit, defined as-
Net profit = Total profit - Total penalty
The total profit is- 30x1 + 40x2 + 20x3 + 10x4.
To compute the total penalty, the demand constraints can be written as-
x1 + s1 = 800; x2 + s2 = 750; x3 + s3 = 600; x4 + s4 = 500; xj, sj ≥ 0 for j = 1, 2, 3, 4.
The new variable sj represents the shortage in demand for product j, and the total penalty can be computed as- 15s1 + 20s2 + 10s3 + 8s4.
SOLUTION: The optimum solution is- z = $64,625, x1 = 800; x2 = 750; x3 = 387.5; x4 = 500; s1 = s2 = s4 = 0; s3 = 212.5. The solution
satisfies all the demand for both types of jackets and the gloves. A shortage of 213 (rounded up from 212.5) pairs of pants will result in a
penalty cost of 213 × $10 = $2130.
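A quick arithmetic check of the reported optimum (an added verification, not part of the original slides), using the LP values x3 = 387.5 and s3 = 212.5:

```python
# Net profit = total profit - total shortage penalty, at the reported optimum.
x = [800, 750, 387.5, 500]      # optimal production quantities x1..x4
s = [0, 0, 212.5, 0]            # optimal shortages s1..s4
profit  = [30, 40, 20, 10]      # unit profit per product
penalty = [15, 20, 10, 8]       # unit penalty per unit of unmet demand

z = sum(p * xi for p, xi in zip(profit, x)) - sum(q * si for q, si in zip(penalty, s))
print(z)                        # 64625.0, matching z = $64,625
```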
Multiple Period Production-Inventory Model
Acme Manufacturing Company has a contract to deliver 100, 250, 190, 140, 220, and 110 home windows over the next 6 months. Production cost
(labour, material, and utilities) per window varies by period and is estimated to be $50, $45, $55, $48, $52, and $50 over the next 6 months. To take
advantage of the fluctuations in manufacturing cost, Acme can produce more windows than needed in a given month and hold the extra units for
delivery in later months. This will incur a storage cost at the rate of $8 per window per month, assessed on end-of-month inventory. Develop a linear
program to determine the optimum production schedule.
DECISION VARIABLES: The variables of the problem include the monthly production amount and the end-of-month inventory. For i = 1, 2, …, 6, let-
xi = number of units produced in month i
Ii = inventory units left at the end of month i
The system starts empty (I0 = 0). The objective is to minimize the total cost of
production and end-of-month inventory.
Total production cost = 50x1 + 45x2 + 55x3 + 48x4 + 52x5 + 50x6
Total inventory (storage) cost = 8(I1 + I2 + I3 + I4 + I5 + I6)
Thus, the objective function is-
Minimize z = 50x1 + 45x2 + 55x3 + 48x4 + 52x5 + 50x6 + 8(I1 + I2 + I3 + I4 + I5 + I6)
For each period we have the following balance equation:
Beginning inventory + Production amount - Ending inventory = Demand
This is translated mathematically for the individual months as-
x1 - I1 = 100 (Month 1)
I1 + x2 - I2 = 250 (Month 2)
I2 + x3 - I3 = 190 (Month 3)
I3 + x4 - I4 = 140 (Month 4)
I4 + x5 - I5 = 220 (Month 5)
I5 + x6 = 110 (Month 6)
xi ≥ 0, i = 1, 2, …, 6; Ii ≥ 0, i = 1, 2, …, 5
SOLUTION: The optimum solution (summarized in figure 2 of the thesis) shows that each month’s demand is satisfied from the same month’s
production, except for month 2, where the production quantity (= 440 units) covers the demand for both months 2 and 3. The total associated cost is
z = $49,980.
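The whole model can be reproduced with scipy.optimize.linprog (an illustrative sketch, not the software used in the thesis); the variable order is [x1, ..., x6, I1, ..., I6], and the solver's default bounds already enforce non-negativity:

```python
# Multiperiod production-inventory model: minimize production cost + storage cost.
import numpy as np
from scipy.optimize import linprog

prod_cost  = [50, 45, 55, 48, 52, 50]
store_cost = [8] * 6
demand     = [100, 250, 190, 140, 220, 110]

c = prod_cost + store_cost                 # cost vector for [x1..x6, I1..I6]

# Balance equations: I_{i-1} + x_i - I_i = demand_i, with I_0 = 0
# (I_6 is included here but is driven to zero at the optimum).
A_eq = np.zeros((6, 12))
for i in range(6):
    A_eq[i, i] = 1.0                       # + x_i
    A_eq[i, 6 + i] = -1.0                  # - I_i
    if i > 0:
        A_eq[i, 6 + i - 1] = 1.0           # + I_{i-1}

res = linprog(c, A_eq=A_eq, b_eq=demand)   # default bounds: all variables >= 0
print(np.round(res.x[:6]), res.fun)        # expected: [100. 440. 0. 140. 220. 110.] and 49980.0
```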
WORKFORCE PLANNING
BUS SCHEDULING MODEL
Progress City is studying the feasibility of introducing a mass-transit bus system to reduce in-city
driving. The study seeks the minimum number of buses that can handle the transportation needs.
After gathering necessary information, the city engineer noticed that the minimum number of
buses needed fluctuated with the time of day, and that the required number of buses could be
approximated by constant values over successive 4-hr intervals. The figure below summarizes the
engineer’s findings. To carry out the required daily maintenance, each bus can operate only 8
successive hours a day.
We know that each bus will run for 8 consecutive hours, but we do not know when a shift should start. If we follow a normal three-shift
schedule (8:01 a.m. to 4:00 p.m., 4:01 p.m. to 12:00 midnight, and 12:01 a.m. to 8:00 a.m.) and assume that x1, x2, and x3 are the number
of buses starting in the first, second, and third shifts, we can see in Figure 1 that x1 ≥ 10, x2 ≥ 12, and x3 ≥ 8. The corresponding
minimum number of daily buses is x1 + x2 + x3 = 10 + 12 + 8 = 30. The given solution is acceptable only if the shifts must coincide with
the normal three-shift schedule. However, it may be advantageous to allow the optimization process to choose the “best” starting time
for a shift. A reasonable way to accomplish this goal is to allow a shift to start every 4 hr. The bottom of Figure 1 illustrates this idea with
overlapping 8-hr shifts starting at 12:01 a.m., 4:01 a.m., 8:01 a.m., 12:01 p.m., 4:01 p.m., and 8:01 p.m. Thus, the variables are defined
as-
x1 = number of buses starting at 12:01 a.m.
x2 = number of buses starting at 4:01 a.m.
x3 = number of buses starting at 8:01 a.m.
x4 = number of buses starting at 12:01 p.m.
x5 = number of buses starting at 4:01 p.m.
x6 = number of buses starting at 8:01 p.m.
Each 4-hr period is then served by the two overlapping shifts that cover it:
Time period / No. of buses in operation
12:01 A.M. to 4:00 A.M.: x1 + x6
4:01 A.M. to 8:00 A.M.: x1 + x2
8:01 A.M. to 12:00 noon: x2 + x3
12:01 P.M. to 4:00 P.M.: x3 + x4
4:01 P.M. to 8:00 P.M.: x4 + x5
8:01 P.M. to 12:00 midnight: x5 + x6
Solution: The optimal solution calls for scheduling 26 buses (compared with 30 buses when the three traditional shifts are used). One optimal
schedule is x1 = 4 buses starting at 12:01 a.m., x2 = 10 at 4:01 a.m., x4 = 8 at 12:01 p.m., and x5 = 4 at 4:01 p.m. (with x3 = x6 = 0), giving
z = 26. An alternative optimum is x1 = 2, x2 = 6, x3 = 4, x4 = 6, x5 = 6, and x6 = 2, also with z = 26.
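For completeness, a sketch of this bus-scheduling LP with scipy.optimize.linprog. The per-period bus requirements are not reproduced in the text above, so the demand values used below (4, 8, 10, 7, 12, 4 buses for the six successive 4-hr periods starting at 12:01 a.m.) are assumptions, chosen to be consistent with the shift minima of 10, 12, and 8 quoted for the traditional three-shift schedule:

```python
# Bus-scheduling sketch: minimize x1 + ... + x6 with overlapping 8-hr shifts.
# NOTE: the per-period requirements are ASSUMED values (the engineer's figure is
# not reproduced here); they are consistent with the shift minima 10, 12 and 8.
import numpy as np
from scipy.optimize import linprog

demand = [4, 8, 10, 7, 12, 4]     # assumed buses required per 4-hr period from 12:01 a.m.

c = [1] * 6                       # minimize the total number of buses

# Period i is covered by the shift starting in period i and the shift starting in
# period i-1, so x_{i-1} + x_i >= demand_i; written as <= constraints by flipping signs.
A_ub = np.zeros((6, 6))
for i in range(6):
    A_ub[i, i] = -1.0
    A_ub[i, (i - 1) % 6] = -1.0
b_ub = [-d for d in demand]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)
print(res.x, res.fun)             # total of 26 buses; alternative optimal schedules exist
```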
SOME OTHER EXAMPLES OF LPP
Manufacturing problem: Mainly faced by production companies, this type of problem
involves determining the production plan that yields maximum profit or minimum cost under
constraints such as labour, output units, and machine runtime
Diet problem: The main objective of this problem is to optimize for adequate nutrition
considering the requirements of the body and the costs involved
Transportation problem: This type of problem includes finding the right transportation
solutions given the constraints of cost and time
Resource allocation problem: This problem is concerned with managing the efficiency of
the project. The primary objective is to complete the maximum number of tasks, given the
constraints of man-hours and the types of resources available
Assignment problem: This problem is concerned with assigning a number of origins to an
equal number of destinations on a one-to-one basis such that the total assignment cost is
minimum.
THANK YOU!